For the past two years, headlines, policy statements, and social media feeds have been flooded with dire warnings about rising ocean temperatures. Every uptick in the graphs was treated as irrefutable proof of humanity’s march toward ecological collapse. The news cycle offered little room for nuance, and as usual, the loudest voices declared the end was nigh. But a recent tweet from Javier Vinós, supported by a graph of global sea surface temperatures (SST), reminds us how quickly climate “emergencies” dissolve when confronted with even the faintest hint of natural variability.
Vinós’s tweet is remarkable in its simplicity and restraint. For the first time in 21 months, global ocean temperatures have returned to levels seen in December 2015—nine years ago. Let that sink in for a moment. After two years of being bombarded with claims that the Earth’s oceans were on an unstoppable trajectory of warming, we find ourselves… back where we were nearly a decade ago. The graph he shared makes this clear, showing the average SST dipping out of the “anomalous” zone, contradicting two years’ worth of sensational headlines.
What the Data Shows
The chart, which plots NOAA SST data compiled by the University of Maine’s Climate Change Institute, shows daily SST readings from 60°S to 60°N, covering a broad swath of the Earth’s oceans. The most striking feature is the orange 2023 line, which shows pronounced warmth—well above the 1991–2020 baseline average—before gradually declining. The 2024 data (dark red) follows suit, steadily falling back to levels last seen in December 2015.
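For readers curious how such a series is generally constructed, here is a minimal sketch: compute a day-of-year climatology over the 1991–2020 baseline and subtract it from each day’s reading. The code is purely illustrative; the sst series below is synthetic, and the actual Climate Reanalyzer processing (area weighting, OISST gridding, leap-day handling) differs.

```python
import numpy as np
import pandas as pd

# Hypothetical synthetic stand-in for a daily 60S-60N mean SST series (deg C).
dates = pd.date_range("1982-01-01", "2024-12-31", freq="D")
rng = np.random.default_rng(0)
sst = pd.Series(20 + 1.5 * np.sin(2 * np.pi * dates.dayofyear.to_numpy() / 365.25)
                + rng.normal(0, 0.1, len(dates)), index=dates)

# Day-of-year climatology over the 1991-2020 baseline period.
base = sst.loc["1991":"2020"]
clim = base.groupby(base.index.dayofyear).mean()

# Daily anomaly: each day's SST minus the baseline mean for that calendar day.
anom = sst - clim.reindex(sst.index.dayofyear).to_numpy()
```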
Vinós describes this trend as part of “poorly understood natural climate variability,” a phrase that should be plastered across every climate model and policymaker’s desk. The graph itself illustrates this beautifully: the chaotic, squiggling lines of each year reveal the natural ups and downs of ocean temperatures, starkly contrasting the prevailing narrative that climate change operates on a simple linear trajectory of doom.
The Last Two Years of Hyperbole
In 2023, the narrative around ocean temperatures reached fever pitch. Every spike in temperature was portrayed as an existential crisis. Headlines screamed about unprecedented oceanic heatwaves, ecosystems pushed to the brink, and melting polar ice accelerating sea-level rise. Phrases like “off the charts” and “new normal” were thrown around with reckless abandon.
Yet here we are, with the average SST plummeting back to levels seen nearly a decade ago. What does this tell us? That ocean temperatures fluctuate. That long-term trends are more complicated than the alarmists would have you believe. And perhaps most importantly, that the overconfidence in computer models and the myopic focus on short-term anomalies are profoundly misplaced.
Misunderstanding Natural Variability
Vinós’s choice of words—“poorly understood natural climate variability”—cuts to the heart of the issue. Despite decades of research and countless billions of dollars spent, the science of climate variability remains riddled with uncertainties. Climate models struggle to replicate observed phenomena like the Pacific Decadal Oscillation (PDO), Atlantic Multidecadal Oscillation (AMO), and El Niño-Southern Oscillation (ENSO).
Take 2023’s anomalous warmth, for example. It coincided with a strong El Niño event, which naturally warms surface waters in the Pacific and influences global weather patterns. While the media pounced on this as evidence of human-caused warming, a significant portion of the warmth was likely due to this entirely natural phenomenon.
Moreover, the complexities of ocean-atmosphere interactions, deep-sea currents, and solar variability are still poorly understood. As a result, the idea that we can attribute every blip on the graph to anthropogenic CO2 emissions is not just simplistic—it’s scientifically irresponsible.
The Danger of Overreaction
The problem with climate hyperbole is not just that it’s wrong, but that it leads to bad policy. Over the past two years, nations have doubled down on costly decarbonization efforts, citing “unprecedented” ocean temperatures as justification. Policies like Net Zero, which aim to eliminate fossil fuel use entirely, have disrupted energy markets, driven inflation, and plunged millions into energy poverty—all in the name of “saving the planet.”
But what if this episode of warming was primarily natural in origin? What if the 2023 temperature spike was just another bump in the chaotic rhythm of natural variability? The billions spent on “fixing” the climate would then amount to a colossal waste, solving a problem that doesn’t exist or was never fully understood in the first place.
Lessons for the Future
Vinós’s tweet and the accompanying data highlight the need for humility in climate science and policymaking. The complex interplay of factors that influence ocean temperatures defies simplistic explanations and linear trends. Instead of rushing to declare every fluctuation a crisis, we should acknowledge the vast uncertainties that still exist and adopt a more measured approach to both science and policy.
Policymakers would do well to remember the following:
- Natural variability is not a bug; it’s a feature of the Earth’s climate system.
- Short-term trends do not equal long-term trajectories. Two years of anomalous temperatures do not prove climate catastrophe, just as this return to 2015 levels doesn’t disprove it.
- Correlation is not causation. Just because temperatures are higher doesn’t mean human activity is the sole—or even primary—cause.
- Precautionary policies are not free. When governments pursue drastic measures like Net Zero without understanding the full picture, the economic and social consequences can be severe.
Conclusion
The graph shared by Javier Vinós should serve as a wake-up call for anyone who’s been swept up in the climate hysteria of the past two years. While it’s tempting to view every uptick in temperature as evidence of impending doom, the reality is far more nuanced. Ocean temperatures are now back to levels seen in 2015, a stark reminder of the chaotic, unpredictable nature of the climate system.
In the end, the greatest threat to rational policy isn’t rising temperatures—it’s the unrelenting tide of hyperbole that drowns out careful analysis and critical thinking. If we truly want to address environmental challenges, we must first learn to distinguish between signal and noise. And as this latest data shows, there’s a lot more noise out there than we’ve been led to believe.
The uncertainties involved in what we think we know about the biosphere lead to one conclusion – WE SIMPLY DON’T KNOW. We don’t know what causes the variability. We don’t even know for sure what the variability actually *is*. We can’t measure some factors to the units digit and then use that to guess at what is happening in the thousandths digit – it boils down to WE SIMPLY DON’T KNOW.
A related aphorism from Winston Churchill.
’It is better to remain silent and be thought a fool, than to speak and remove all doubt.’
Climate scientists speak of ‘settled science’, thus lack of unknowns (like natural variation), thereby removing all doubt that they are just fools.
Almost every scientist for over a century believes the existence of a greenhouse effect is settled science. And that CO2 emissions add to the greenhouse effect, and they are a relatively new climate change variable.
Are they all fools for believing some climate science is settled?
Predictions of the future climate are not settled science — such predictions are not science at all. Even if predictions were science, there are a wide range of predictions, so it is obvious there is no consensus on a specific prediction.
There is some settled climate science
There are also too many scientists and politicians trying to shut down debate by claiming climate science is far more settled than it really is. Especially the claims that long term climate trends can be predicted.
“believes”
roflmao.. not science, then
“CO2 emissions add to the greenhouse effect,”
You have yet to produce any real scientific evidence of CO2 causing any warming.
The El Nino Nutter and official WUWT Court Jester BeNasty2000 serves up yet another burst of verbal flatulence. Same message: Climate scientists know nothing, but I am an expert. That’s funny material.
No evidence again, hey dickie?
Just yapping and doing the Walz arm flap
I saw Walz’ wife and him singing in front of a Christmas tree yesterday. They shouldn’t do that in public.
Walz’ wife is the one who, during the radical Left’s attack on Minneapolis, after George Floyd died, wanted to open a window so she could inhale the smell of burning tires. Apparently, the smell of burning tires gives her pleasure.
At least we don’t have to put up with this fool as vice president. There *is* a God! 🙂
“Mostly peaceful protests…”, according to the gaslighting of the Marx Stream Media.
So, we have Shula, Ott and recently Nikolov with alternative explanations of Earth’s atmosphere and climate, going directly against the GHE influence on the atmosphere (including CO2) and explaining the conditions in different ways.
Obviously you consider them ‘nutters’, as in stupid people who don’t know the science, but if you examine their papers/articles you will spot a coherence. Maybe they are not perfectly right on all points, but who is?
My point is: who are you to pass judgement and why? Is it maybe because you don’t believe in science and diversity? That you actually hang on to the coattails of those you consider ‘the experts’? And that you comfort yourself by attacking those who steer away from the consensus in regards to the GHE? All I see is your unscientific virtue signalling time and time again and flat-out insulting people who come on this platform to discuss matters.
You are beyond contempt, doubly so because you have or seem to have a functioning brain. But I do suspect some psychological issues..
He’s very impressed with … himself.
Indeed, but that is not my point, which is that he simply attacks anyone who doesn’t agree with the CO2 forcing idea and the consensus around it. Given the system as it is, full of hypotheses about the interaction of the variables, it seems strange one would attack knowledgeable people who present alternative views. Given the fact he is not an overall unintelligent person, his need for put-down antics hints at a psychological disturbance.
I don’t care what position you come from, this belligerent behaviour is unwarranted and noted.
His spat with Bnice is rather tedious by now and it takes two to tango.
And to add: the way he attacked Javier Vinós is revolting. What a fecker!
Yes it is.
He is a complete nobody and he makes that more obvious every day.
Most if not all climate ”scientists” believe human CO2 has caused most if not all of the modern milding without evidence. They are all wrong to believe.
‘The climate is changing, already causing increasingly devastating weather extremes, and that is entirely due to human greenhouse gas emissions that need to be reduced to zero by 2050 or the consequences will be catastrophic’ – that is what the IPCC, politicians, propagandists et al. mean by ‘settled science’.
I think you know that.
There is actual settled climate science and what the IPCC and leftists CLAIM is settled science. It is our job to differentiate. But the WUWT Peanut Gallery here just dismisses 100% of climate scientists, as if EVERYTHING they say is wrong. The Peanut Gallery is the CO2 Does Nothing crackpots who comment here and give this website a bad reputation.
Poor consensus brain-washed dickie-boi… All you do is YAP.
Your reputation for avoiding presenting science of any sort is legendary.
You have a reputation below that of a dead mullet.
Do I have to paste those three question…
… so you can continue to RUN AWAY !!!
I mostly agree with that comment, equivocal statements like ‘there is no evidence of CO2 causing warming’ are mischievous IMO because there is no way to isolate the CO2 signal within highly complex aggregated climate data.
Ergo, any CO2 warming is practically meaningless.
The settled part of the science is that CO2 is a greenhouse gas. Which means it absorbs and emits energy. That’s what all those scientists agree on.
What happens after that, with regard to CO2 and the Earth’s atmosphere, is not settled at all.
After all these years, climate science still can’t tell us how much warmth a doubling of atmospheric CO2 adds. The estimates are anywhere from almost zero to 5C per doubling.
“Almost zero” doesn’t fit your narrative, so you ignore it.
Part of the problem is that climate science focuses entirely on an average of an intensive property – temperature – as an indicator of “climate”. That’s a farce.
If we were *truly* seeing catastrophic climate change we would be seeing similarly catastrophic changes in the size of hardiness zones and in food grain harvests around the globe.
Yet we aren’t seeing that at all. In fact we are seeing an expansion of the hardiness zones with an accompanying growth of global harvests.
Climate is *much* more than just an idiotic average of an intensive property – a fact that Freeman Dyson laid out years ago.
There Is No “Our”.
No
What do they say Greene?
“Almost every scientist for over a century believes the existence of a greenhouse effect is settled science.”
Same old twaddle Richard. The data doesn’t give a sh!t what almost every scientist thinks and it’s the answer to the wrong question anyway.
Listen closely Richard, because when your buddy tryintobenice tries to educate you, you hide in a blind spot. Do try to listen.
The question is – “does the atmospheric carbon dioxide level going from 280ppm to 430ppm have any measurable effect on any global climate parameter?”
My falsifiable hypothesis is that YOU will not produce any scientific evidence that it does.
Pretty easy to falsify that eh? Come on TFN, Nyolci, Simon, Nick, Banton … pile on guys, help the guy …..
“Pretty easy to falsify that eh? Come on TFN, Nyolci, Simon, Nick, Banton … pile on guys, help the guy …..”
It’s so pathetic! We won’t hear from any of them! 🙂 That would be because none of them can provide evidence for the claims they make about CO2 and the Earth’s atmosphere. None of them. And that would be because there is no evidence to be had. They know it. We know it.
The more they RUN AWAY…
… the more people wake up and ask why they are incapable of providing evidence.
They are playing right into my hands. 🙂
It’s a long game, but more and more people are seeing the reality.
“Almost every scientist…”
And you took a poll of every scientist to establish this.
What about the Oregon petition? Is that where the “almost” comes in?
The Oregon Petition does not deny the greenhouse effect or the fact that manmade CO2 emissions increase that effect.
The Petition was anti-CAGW, NOT anti-AGW. You obviously never read it
The signatories were not verified to be scientists. Also, all the signatures represented only 0.25% of all USA physical science graduates over the preceding 43 years. So what do the other 99.75% of scientists think?
Petitions do not refute physics.
Nothing you have ever produced has shown even remotely that CO2 causes warming.
“If there are no measurements to support a baseless conjecture, etc etc….
And every name on the Oregon Petition has more scientific knowledge and credibility than you will ever have, dickie…
.. even the “Mickey Mouse” names that got snuck in by the AGW cultist you worship.
Blustering doesn’t provide any evidence.. maybe try something else?
If it was just physics and calculations we would already have a consensus.
If you use first principle physics and look at the climate system the first major unsolvable issue is also the most complex one: fluid dynamics. I don’t expect you to actually know the physics so maybe educate yourself before asserting strong…mmm..opinions.
Not to worry – Nick will be along shortly to tell us that computational fluid dynamics is the solution to all problems.
Even IF fluid dynamics could be solved, we still wouldn’t know the exact factor of influence on the climate. We need to remember we are still talking small changes in temperatures and weather patterns in a multifactorial, dynamic system which does not lend itself to precise attribution or equations. The idea that we could solve it by more computational power seems a fool’s errand to me. It will remain guesswork. That’s ok..
So do tell — what is your favorite value for ECS?
If CO2 concentration determines temperature, how did the Earth warm up at the end of the last Glacial Maximum, when atmospheric CO2 was only 180ppm?
“How did the Earth warm up at the end of the last Glacial Maximum, when atmospheric CO2 was only 180ppm?”
Did you ever get an answer?
Have you heard of Milankovitch Cycles?
I don’t believe you will ever get it.
“And that CO2 emissions add to the greenhouse effect”
What is the importance of CO2 relative to other GHGs?
What causal relationship between atmospheric CO2 and temperature are you referring to as settled science?
The addition of CO2 is too small to measure. This is why climate science has to postulate a positive feedback value where CO2 drives H2O to increase temperature far more than it actually can.
Bingo!
“Almost”..
https://www.youtube.com/watch?v=L1GgmBIew9Y
2 hours long, so you won’t watch it.
Brilliant, more reasons why energy balance cartoons are meaningless.
Yes.
You surely don’t know.
Well, we do know. This is just random shuffling of energy around the climate system. We even know its magnitude. For that matter, it was you or your husband who had a road show with an article that was about model uncertainty. It had a nice graph showing a narrow range, the uncertainty coming from variability. That graph spelled out in a language understandable even to you the uncertainties coming from the various parameters (like emission scenarios, etc.).
Not again, Tim, this is tiring. You deniers never cease to fail even the simple things. ‘Cos it’s simple. When we average, we are opening up the smaller ranges. See? This is that easy. This is the method behind measuring gravitational waves, mind you. The displacements are in the range of the diameter of the proton. We are surely unable to measure those distances with any of our methods. We can only do that with some tricks, right?
It is you who is wrong: uncertainty is quite real, and ignored by climatology.
Same old nasti.
Come on nikky, tell us what we “DENY” that you can provide solid scientific evidence for.
That means you have to actually provide that evidence, not just mindless bluster and Walz-like hand flapping, and gibbering with mindless Kamal-speak.
Tim just happens to be absolutely correct, it is you that is showing your brain-washed ignorance…. as always.
Somebody has infinite faith in his political masters.
You claim that climate scientists know exactly what the degree of natural variation is.
OK, show me the papers that prove your belief.
??? I’m talking about science.
As Tim. Or his soul mate, Jim.
No you are not talking about science.
You are not capable of that.
You are a total void in that department.
What kind of nutcase comment is that?
You only have one life mate. Live it in the phony fake-lovely world if you want, but there’s actually a real world that you do actually live in.
Did Russia bomb Kiev in your world yesterday, because they did in the world I live in?
Wattafok is the problem with you? Don’t you have a strange feeling when you come up with stuff like the above? Try to make sense next time.
“Try to make sense next time.”
Were you looking in the mirror when you typed that ??
“Well, we do know. This is just random shuffling of energy around the climate system.”
wow! so sciency…
Indeed! And easily measured too I assume…
Anything sciency here would involve mathematics that is way beyond a debate in the “comment section”. Internal variability is just that. They know more and more about what this particular randomness looks like. The graph that Tim or Jim referenced showed a very narrow range for this, independent of temp.
Kamal-speak.. again!!
You are missing the point: it is not that first-principles physics cannot be understood but that those physics cannot determine exact correlation or causation beyond the standard parameters. Looking at the Earth’s atmosphere and the climate, you cannot properly assess and separate the signals. It is guesswork.
I do agree with Lindzen et al. that certain physical mechanisms are at work; we just cannot weigh them with any certainty.
No, it’s not. This is science, so I understand that you don’t understand it. But this is not guesswork. We (I mean science) have a pretty good understanding of it.
Nyolci,
“A pretty good understanding of it” means very little in real hard science.
If you are confident of your interpretation of results, observations, measurements, then you publish them, WITH proper estimates of uncertainty. If you rely on feelings that you are right, you are not a scientist. Therefore, you are not qualified to judge.
It really is so simple. Unless you show your numbers to the scientific community to allow more analysis, you have nothing of value.
Now, to reality. The IPCC has long shown selected numbers for a very important factor, the climate sensitivity. This is supposed to allow calculations of temperature change from a change in atmospheric CO2. The IPCC reports every few years for 20 years or more have failed to provide meaningful climate sensitivity factors. The range of values, from near zero to about 5 or even 7, has not been improved over time.
In a hard science world, this lack of result would denote failure of the hypothesis that CO2 has caused significant warming.
It is, IMHO, only a huge injection of money into research that favours a control-knob function for CO2 that has kept global warming concepts alive. It has failed multiple, reasonable, neutral studies for 2 decades or more.
It’s dead, Jim.
Geoff S
No, this doesn’t mean that.
For the 100th time, I’m not a climate scientist. You neither. I just read them. I can recommend that to you, too. Helps your brain develop. We have the experts precisely to do the stuff we are not trained/skilled/talented/whatever in. I hope you get the point.
I’m not judging. Science is science precisely because it is verified. When a scientist says something you cannot just dismiss it out of hand.
Actually, it has. BTW the range has never been “from near to zero”, and the range has always been around from “two-ish” to “four-ish”.
Of course this is false. “Failure of the hypothesis” is way beyond your premises here. Again, what this “lack of result” (just quoting your words) means can only be properly assessed by people who have the knowledge necessary.
No, it hasn’t, and again, you’re too ignorant to understand this.
The Fool tries to tell Geoff Sherrington he is wrong.
Perhaps Geoff should stop writing bs.
/plonk/
“have a pretty good understanding of it.”
roflmao.. Talk about delusional !
You have continually shown you have very little understanding of anything.
I understand the process. The ‘have a pretty good understanding of..’ is an indication of: ‘well, let’s just assume this is how it works, stop thinking about it and use a narrow-band set of assumptions so we won’t look completely stupid with the modelling’. The 100% truth is that that is actually what ‘settled science’ means. It doesn’t mean the science is settled but that people decided to work on that premise. However, you cannot use that argument in assessing the climate, given the system. In Einstein’s words: “make it as simple as possible but not simpler”. But that is exactly what the likes of the IPCC do. And the alarmists weaponize that. So, motivation, politics and money try to force the issue. They cannot argue from first principles. Not from physics or any other field, and come to any reliable set of narrow error margins.
Take CO2 and its influence on temperature. It is guesswork, which is fine. To act as if this is equatable is actually the main issue. It is not and never can be ‘settled’. Certain elementary physics are proven almost by default and in practice. Any serious scientist who claims the climate system can be ‘tackled’ is lying. Alarmists are lying collectively. Anybody who settles for ‘settled science’ is fooling him/herself.
No, and it’s just as if you’re actively trying to misunderstand it. When Newton formulated his laws, we could say we had a pretty good understanding of Physics, although that was just the very-very beginning of the classical era. We now know much more. And you wouldn’t say Newton’s work was “let’s assume this is how it works’.
You have to first understand what “the likes of the IPCC” does before you can make statements about that. The knowledge level of 99.9% of the people here about that subject can be characterized as “deranged bs”, mostly widely circulated (and invariably wrong) gossip and “studies” from con men like M and M.
This is not guesswork, and, for that matter, it wasn’t even guesswork 25 years ago, when the toolbox of climate science was much more primitive than it is today.
Anytime you must use parameters rather than measurements or calculated values from a real functional relationship you are guessing, period. You can try to juice up your explanation but in the end it is a guess.
This is a very good illustration of how bad the knowledge level of people here is, and how little they understand of climate science in general, and modeling in particular (the two are not equivalent). Of course there’s nothing like what you try to imply, and no, the bs from Willis is not an example.
You really are showing just how clueless you are about anything
You have almost zero understanding of basic science, maths or anything.
You have the brain of a demented muppet. !!
Tell me what university to attend to get a degree in “climate science.”
When you use parameterization in a model, that is a code word for assumptions. Assumptions are guesses. Qualified assumptions are better as they have been tested, but they are now educated guesses.
I could go on. I know what models are. I know simulators and emulators, too.
Just a foot note. Hindcasting is simply curve fitting.
My knowledge is exactly like yours in this matter.
No, this is plainly false, and it would be extremely helpful if you (at least) tried to understand modeling. At least broadly. The details are complicated but the principles are not. Then the garbage (that characterizes 99.9% of deniers’ take on this) would disappear and we would have meaningful debates.
No, you obviously don’t.
“Well, we do know.”
How do we know this when it is hidden due to using daily mid-point temperatures instead of daily absolute diurnal variation? Climate science starts wrong from the beginning and never recovers.
“It had a nice graph showing a narrow range, the uncertainty coming from variability.”
And that graph was for multiple measurements of the same thing using the same instrument under the same conditions. It was *NOT* for when you have single measurements of different things using different instruments under different conditions – i.e. climate science!
“When we average, we are opening up the smaller ranges”
You continually confuse sampling error (standard deviation of the sample means) with propagated measurement uncertainty from the “stated value +/- measurement uncertainty” of the individual data points in the data set.
It doesn’t matter what method you use to measure, you cannot increase resolution by averaging. Not even when you have multiple measurements of the same thing using the same instrument under the same conditions. You simply cannot know what you can’t know. All you are doing is evincing the belief that carnival hucksters with a cloudy crystal ball can actually see the future in detail!
And then proceeds to claim their air temperature averaging gyrations reduce “uncertainty” to absurdly small levels.
Jesus, you’re so lost in the sauce… No, this is not how it’s done, and it’s really so tiring to explain it to you again and again and again and again… For that matter, I’m not a climate scientist. I just read them, and try to understand them. And I can understand them. Please try it.
No, it wasn’t. It was the output of numerous runs of a climate simulation, with attribution to various factors shown.
No, and this is again the tiring part. BTW you always confuse uncertainty with temporal and spatial spread.
You can increase the resolution of the average. You are organically unable to understand this point.
Actually you can, and, for that matter, this is how calibration is done, this is how they “map” the distribution that gives us uncertainty.
HAHAHAHAAHAHAHAHAHAH
Sez the “expert”
Hm, another expert. So we have measurement uncertainty. We average measurements to get a yearly average. What is the uncertainty of the average? These idiots (and apparently you, too) think the uncertainty is somehow related to how the temperature changes during the year.
The idiot is in the mirror, fool.
This is the riposte I expected 😉 I knew all along that you would fail.
Fool.
You have very nice arguments 😉 Just as I’ve expected.
You deserve nothing more, IPCC shill.
Yeah, the usual stuff when you deniers are defeated… 😉 BTW I have nothing to do with the IPCC.
You obviously have absolutely zero comprehension of making engineering or scientific measurements.
Look at NIST TN 1900. That shows you how the uncertainty grows. Do you think averaging 12 months of highly variable monthly temperatures reduces the uncertainty? How about 365 days worth? What do you think the variance of temps that range from -10°C to 35°C over a years time might be?
Look up reproducibility uncertainty using changing conditions. You’ll find the experimental standard deviation is the appropriate figure of uncertainty.
You shouldn’t play the expert when you have no knowledge of the subject – measurements.
It reduces the uncertainty of the average. And this is independent of how variable the temp is during the year. It only depends on the uncertainty of the individual measurements. You are organically unable to understand this, and as long as this is the case, you are completely lost in any debate.
The variance of temps has nothing to do with measurement uncertainty.
That’s done during calibration. Nothing to do with this.
The temperature change represents the range of values in the distribution. The range determines the variance. The variance is a direct measure of the uncertainty of the average.
Distributions with small ranges/variances have less uncertainty of the average than distributions with large ranges/variances. Do you even have a clue as to why that is?
Statisticians and mathematicians typically have a very low understanding of what statistical descriptors actually mean in the real world. That includes you.
They don’t even understand that as you add data elements from measurements of different things using different instruments under different conditions that the variance of the distribution will grow. They don’t really have any intuitive understanding of why that is (hint: Σ(X-u)^2 grows faster than n).
It’s why the climate scientists, statisticians, computer modelers, etc in climate science never even bother propagating the variances of their data sets as they average and then average the averages and then average those averages of averages! That includes you.
Yes, when we are interested in that. Measurement uncertainty is about how sure we are in our individual measurements.
No, this plainly is false, and it’s obvious that you confuse measurement uncertainty with temporal/spatial spread. As long as this is the case you are just spewing bs.
That’s how we know you aren’t interested in science. Instead, you are only interested in numbers, that is, you are at best a mathematician.
LOL, you are a joke. Measurement uncertainty is concerned with the variance in measurement of physical quantities, regardless of what you are measuring. For example, Kelvin is measured using neither temporal nor spatial quantities, yet NIST TN 1900 shows how temperature has an uncertainty when using a temporal spread.
The temperatures used as an input to a GCM HAVE uncertainty. Those uncertainties DO propagate throughout any subsequent calculations whether you like it or not.
You are only going to make yourself look ignorant if you try to show GCM’s are free of uncertainty.
Again, some bs that is actually irrelevant to the topic beside of being plainly wrong.
Yes. Exactly. Then why are you and your husband masturbating about the daily spread when we talk about the daily average? That has nothing to do with uncertainty per se.
Do you have even the faintest understanding of why a handrail on a long staircase can’t be placed by using a common distance from each tread to the handrail attachment device?
Do you have even the faintest understanding of what fish plates are used for?
Measurement uncertainty is *NOT* just for individual measurements. It’s for anything physical that is made up of components that have measurement uncertainty. If you are building a beam to span a foundation you damn well better make sure you take total measurement uncertainty propagated from the individual elements in the beam into account. If you are 1/4″ short you are screwed and you have to start over again. Unproductive time and wastage of materials can totally eat up any profit on the project you might have counted on.
It’s something that blackboard statisticians like you have absolutely no knowledge of.
“Yes, when we are interested in that. Measurement uncertainty is about how sure we are in our individual measurements.”
This only goes to show you know nothing of metrology, NOTHING.
from the GUM: “3.3.5 The estimated variance u² characterizing an uncertainty component obtained from a Type A evaluation is calculated from series of repeated observations and is the familiar statistically estimated variance s² (see 4.2). The estimated standard deviation (C.2.12, C.2.21, C.3.3) u, the positive square root of u², is thus u = s and for convenience is sometimes called a Type A standard uncertainty. For an uncertainty component obtained from a Type B evaluation, the estimated variance u² is evaluated using available knowledge (see 4.3), and the estimated standard deviation u is sometimes called a Type B standard uncertainty.”
You simply don’t know enough about metrology to come on here and lecture those of us who have been involved in it for half century or longer in professional careers.
“No, this plainly is false, and it’s obvious that you confuse measurement uncertainty with temporal/spatial spread. As long as this is the case you are just spewing bs.”
Again, you show an absolutely zero knowledge of metrology. Measurement uncertainty has to do with measurements, and it doesn’t matter if those measurements are seconds, kilometers, feet, or inches.
You don’t even know enough statistics to understand *why* higher variance distributions lead to increased uncertainty in the average. This has *nothing* to do with sampling error either. If you have the entire population the variance of that population is *still* a metric for the uncertainty of the average.
Yes. But somehow you don’t do that. You confuse spread with measurement uncertainty. Spread is when beams come in different standard sizes, like 5m, 10m. Uncertainty is our non-belief in how well they approximate the stated length. Like you have a 10m beam that is actually 10.04m.
The same with temperature. We have a location on a high mountain. We get a profile like -5C in the morning, +2C at noon, whatever. We have another location in the valley where we have 5C in the morning, 15C at noon, whatever. This is spread, both spatial and temporal. The uncertainty for all measurements is the same, 0.5C. The uncertainty of the average only depends on the uncertainties of the measurements, it doesn’t depend on the values (*). So the average is like 0C in the morning, 8C at noon, whatever, we have a nice curve, with a spread, we can say something about the daily changes on average. But the uncertainty is 0.5/sqrt(2). The daily average would be around 2C with a very low uncertainty (depending on the number of independent measurements). For that matter, uncertainty would be the same if we had two locations with a very different temp profile from the above.
(*) we assume that uncertainties are independent of the reading. This is reasonable for modern instruments in wide temp ranges.
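For concreteness, a minimal sketch of the arithmetic in the example above, using its numbers and the u/sqrt(n) rule the comment states (hypothetical values only):

```python
import math

# Two stations (mountain and valley), each reading with a stated uncertainty of 0.5 C,
# assumed independent, as in the comment's example.
u = 0.5
t_mountain, t_valley = -5.0, 5.0      # the morning readings from the example

avg = (t_mountain + t_valley) / 2     # 0.0 C, the average morning value
u_avg = u / math.sqrt(2)              # ~0.35 C, the comment's 0.5/sqrt(2) figure
print(avg, round(u_avg, 2))
```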
Uncertainty in science has nothing to do with how sure we are about the measurements or any non-belief.
Really? 🙂 Good boy. Please try to react in substance next time, okay?
“You confuse spread with measurement uncertainty.”
A measurement uncertainty interval is the range of values that can be reasonably assigned to the measurand. “range of values’ IS A SPREAD!
Your 0.5/sqrt(2) IS HOW CLOSELY YOU HAVE ESTIMATED THE AVERAGE OF THE DATA. It is *NOT* the measurement uncertainty.
You confuse the sampling error, i.e. 0.5/sqrt(2) with the measurement uncertainty. Sampling error *adds* to the measurement uncertainty.
If you have a temp of 10C +/- 1C and a temp of 16C +/- 1C what is your measurement uncertainty of the average? The average could range from (9C + 15C)/2 = 12C to (11C + 17C)/2 = 14C. Thus your measurement uncertainty of the average is +/- 1C making your average 13C +/- 1C. It is *NOT* 1C/sqrt(2), that is the sampling error. In fact, if you were preparing a complete uncertainty budget you would probably do something like +/-1C + (1C/sqrt(2)) ≈ +/-1.7C.
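A minimal sketch of the interval arithmetic laid out above, using the same hypothetical readings (10C and 16C, each +/- 1C):

```python
import math

t1, t2, u = 10.0, 16.0, 1.0             # the comment's readings and stated uncertainty

low  = ((t1 - u) + (t2 - u)) / 2        # 12.0 C, lowest average consistent with the bounds
high = ((t1 + u) + (t2 + u)) / 2        # 14.0 C, highest average consistent with the bounds
avg  = (t1 + t2) / 2                    # 13.0 C
half_width = (high - low) / 2           # +/- 1.0 C, the propagated interval in the comment

budget = u + u / math.sqrt(2)           # ~1.7 C, the comment's combined budget figure
```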
Jesus Fokkin Christ… You’re truly hopeless. BTW you assume here that the true value is always in the uncertainty interval, right? Anyway, of course this above is completely wrong.
Sampling error is an entirely different thing if you use these terms correctly (which I doubt).
“BTW you assume here that the true value is always in the uncertainty interval, right? “
No, I don’t. The uncertainty interval is that which encompasses those values that can be reasonably assigned to the measurand. The operative word is “reasonably”. That doesn’t mean that outliers might not exist.
What *you* are apparently implying is that the uncertainty interval should always be from -infinity to +infinity. That is *NOT* reasonable.
Nor is what I posted wrong. The clue is that you can’t show where it is wrong. You just employed the argumentative fallacy of Argument by Dismissal. Don’t show where something is wrong, just claim that it is. So very typical of you. I showed my work. Where is yours?
Sampling error *is* a factor in measurement uncertainty. If you have sampling error then your best estimate of the property value has uncertainty. That is an *additional* uncertainty factor which adds to the measurement uncertainty. It means the interval encompassing the reasonable values that can be assigned to the measurand expands. It may not be a direct addition but it is an addition nonetheless.
When you pull a sample of measurements from a data set do you pull only the stated values? Or do you pull “stated values +/- measurement uncertainty”? Do you then propagate those measurement uncertainty values onto the mean of the sample leaving you with a sample mean of “mean +/- measurement uncertainty”? Then you pull a second sample. Do you wind up with a second “mean +/- measurement uncertainty”?
Typically the sampling error is considered to be the dispersion of the sample meanS. But if those means have their own measurement uncertainty how do you handle that? In the real world it actually increases the sampling uncertainty!
In climate science they just throw away measurement uncertainty right at the start and assume that measured temperatures are all 100% accurate. No adjustment for systematic bias. No weighting for different variances. No attempt to adjust for different microclimates.
If the temperature in Las Vegas is the same as in Miami then, by God, they have the same climate! And their average describes the climate at every point in between!
??? The legendary Curse of the Gormans has been activated again. No, I don’t. I only imply that the measured value can be outside the uncertainty interval (w/r/t the true value).
At this point we are not interested in the actual factors behind uncertainty. We have a fokkin instrument with a well known characteristic obtained with calibration. That’s what we are interested in.
Yes, and I don’t understand why you assume someone would do otherwise.
Yes, and I don’t understand why you assume someone would do otherwise. Propagation has very simple and clear laws that you don’t understand though.
This above assertion is illustrative of how most of you deniers are out of touch with how things are done in climate science despite years or literally even decades of participating in these forums, reading all the denier bs from the net.
I cannot convey the feeling of hopelessness I feel when I read bs like this above. No one has ever claimed that.
BTW I haven’t heard from you how you would compare climate using enthalpy(!). I still want to get entertained by the genius himself. Or herself. I’m not sure about your role in your relationship with Jim.
Well that was a note of incoherent anti-science gibberish.
(Tmax + Tmin)/2 = midpoint temperature.
This is not the same as average temperature.
Daily temperature variations do not follow a pure sinusoidal curve.
Given T^4, one needs to be very careful in what one assumes about temperatures.
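A crude numerical toy (all parameters invented, not station data) of the point being made: for an asymmetric diurnal curve (roughly sinusoidal daytime warming, exponential-like night-time cooling), the (Tmax + Tmin)/2 midpoint and the time-averaged mean are not the same number:

```python
import numpy as np

# Toy diurnal curve sampled once per minute: sinusoidal daytime warming (06:00-18:00),
# exponential-like cooling overnight. Numbers are invented for illustration only.
h = np.linspace(0, 24, 24 * 60, endpoint=False)
day = (h >= 6) & (h <= 18)
temp = np.empty_like(h)
temp[day] = 5 + 15 * np.sin(np.pi * (h[day] - 6) / 12)   # 5 C up to 20 C and back
tau = (h[~day] - 18) % 24                                 # hours after 18:00
temp[~day] = 2 + 3 * np.exp(-0.25 * tau)                  # cools from 5 C toward 2 C

midpoint = (temp.max() + temp.min()) / 2                  # ~11.1 C
true_mean = temp.mean()                                   # ~8.8 C, more than 2 C lower
```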
Nowadays they use the actual average of readings that have been obtained at standardized times. The need for the formula above was because of how the Liquid in Glass thermometers worked. They were replaced by digital ones in the early 80s at the latest. (Please note that the NOAA’s (or whatever) description of this is “wrong”, or more like sloppily worded, before you get a hard-on and throw a web reference at me.)
??? Have I ever claimed that so? BTW this is completely irrelevant to the subject.
What is the point of your writing? The midpoint temperature? That is not actually used nowadays?
Daytime temps are very close to sinusoidal. Night time temps are very close to exponential decay. What’s the average of temps taken during the day and night?
It would be completely irrelevant even if it was true.
It’s laser interferometry, isn’t it?
Where does averaging come into play? Or are you referring to a bulk measurement?
Pls read how it’s done. This is not just a “hey, we’ve just measured a displacement” type thing. They have to run an extremely complicated post processing (using various models 🙂 ) to get signal candidates. So a lot of maths is done here, not just averaging.
Could you recommend some papers?
Try wikipedia first. That’s good for a start, and they reference papers. Back to the topic, the displacement is like 2e-18, and the wavelength is like 3e-7, a 10^11 relative difference, so the change in the interference pattern is extremely small. For detection you have to use all the tricks in the book, and averaging is just the warming up here.
I was under the impression that resonance was as far from averaging as one can get.
So, absolute FAILURE to produce anything yourself. As always.
Red herring alert.
Nothing to do with climate modeling.
Jim, not again, the Gormans’ curse is working. You claim there’s no average climate, you can’t average temperature, blablabla. The debate was about this, it has nothing to do with modeling, you fokkin genius.
Red herring alert.
The measurements you are referencing, displacement and wavelength are not the measurements being discussed.
Oh, so you’ve just misunderstood something again. Good boy 😉
And you have zero understanding of anything
You are an “understanding” black hole !
Papered over with lies.
There is no average climate.
There are dozens of climates, but no global average climate. That is a fiction.
You can use a calculator and compute an average of temperature readings, but that does not make an average temperature.
The prophet has spoken, now we know 😉
When using scientific notation, one cannot have an answer with more decimal precision than the least precise value in the calculation. If 1, not 1.0 or 1.00, is your least precise value, your answer can have no decimal places. You can add an uncertainty however you wish, but the calculation cannot add decimal precision that is not in the values used.
So, no. I do not see. It is not that easy.
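A minimal sketch of the rounding rule stated above (hypothetical readings; the last one recorded only to the units digit):

```python
# Report the average with no more decimal places than the least precise input carries.
values = [12.3, 11.87, 13]                 # hypothetical readings
decimals = [len(str(v).split(".")[1]) if "." in str(v) else 0 for v in values]

avg = sum(values) / len(values)            # ~12.39 before rounding
reported = round(avg, min(decimals))       # 0 decimal places allowed -> report 12
```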
So when you don’t use “scientific notation”, you can? Just kidding 😉 Be careful with wording, don’t be sloppy.
When you average(*) multiple measurements, you can. (*) The actual mathematical transformation may not be trivial.
“While it’s tempting to view every uptick in temperature as evidence of impending doom, the reality is far more nuanced.”
True, the claims of doom are based on unsound attribution, but is it really about “nuance?” No. It is blindingly obvious that blaming a long-term global average surface warming trend of ~0.015C per year (inferred from the UAH LT data) on human emissions of CO2 has never been technically sound. The annual cycle for the planet as a whole is ~3.8C of warming and cooling. Vinós covered this in his first book. It is not reasonable at all to have ever claimed that “we know” the major cause for 1 part out of ~250 of amplified warming or inhibited cooling is our fault when the rest is plainly natural and cyclical.
I appreciate the U of ME “Climate Change Institute” for publishing these plots based on ERA5.
https://climatereanalyzer.org/clim/t2_daily/?dm_id=world
(And yes, I know the global average surface temperature is a bogus way to evaluate the situation anyway.)
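As a quick check of the “1 part out of ~250” figure above, using only the two numbers quoted in the comment:

```python
trend_per_year = 0.015   # C/yr, the UAH-derived warming rate quoted above
annual_cycle = 3.8       # C, the planetary annual warming/cooling cycle quoted above

print(round(annual_cycle / trend_per_year))   # ~253, i.e. roughly 1 part in 250
```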
Nice post, and good for Javier. Guterres ‘boiling oceans’ just stopped boiling.
We know that climate naturally varies on multicentennial scales. Vikings buried their MWP dead in Greenland churchyards now solid permafrost. The last Thames Ice Fair (signaling a coming end to the intervening LIA) was in 1814.
We know that climate also varies on multidecadal scales. Evidence includes Arctic summer ice and the related ‘Stadium Wave’, and even AR4 WG1 SPM figure 4.
We don’t know why for either time scale. And so natural variation is ignored by IPCC charter choice in climate models—one reason they fail so badly. The CFL constraint means important stuff must be parameterized. Parameters are tuned to best hindcast 30 years by CMIP dictum. Tuning inevitably drags in natural variation, which the ‘CO2 control knob’ presumes does not exist. And so the models fail.
The hindcasting problem reminds me of Judith Curry’s post about Edward Lorenz’ views on attribution using models. She quotes Lorenz extensively in that post. A key excerpt:
“This somewhat unorthodox procedure would be quite unacceptable if the new null hypothesis had been formulated after the fact, that is, if the observed climatic trend had directly or indirectly affected the statement of the hypothesis. This would be the case, for example, if the models had been tuned to fit the observed course of the climate. Provided, however, that the observed trend has in no way entered the construction or operation of the models, the procedure would appear to be sound.”
From this post in 2013.
https://judithcurry.com/2013/10/13/words-of-wisdom-from-ed-lorenz/
Now honestly show the graph starting at 0°C or even better 0 K.
0C is just an arbitrary number.
0 K is not physically attainable as far as we understand it (essentially time itself becomes non-existent).
Neither are sensible. Possibly the actual range of experienced oceanic temperatures on the planet might be sensible. That’s around -1C to 30C or thereabouts?
> 0C is just an arbitrary number.
Never have I seen such a blatant display of scientific ignorance. I don’t usually denigrate but this deserves a special callout.
Sometimes you need to step back and think about a comment before rushing to post a response. Zero C was arbitrarily assigned to the freezing point of water (on planet Earth and its temperature and pressure). As far as the Universe goes, that’s pretty arbitrary.
If we could, at least, occasionally be shown the ratio of atmospheric CO2 to atmosphere, we would see that the CO2 level is far below one that would influence the Earth’s air temperature. For example, at an atmospheric CO2 level of 425ppm, the ratio of CO2 to air is 1/2353. Thus, each cubic foot of CO2 would be dispersed over 2353 cubic feet of air. That microscopic presence of a gas like CO2 isn’t going to make more than a quiver in the Earth’s temperature if even that.
A few grams of arsenic can kill a grown man.
Just because something is small is not sufficient to prove that it can have no impact.
CO2 acts by absorbing a photon of energy, passing that energy to another molecule in the atmosphere and then absorbing another photon. It can do this millions of times per second.
How much heat is produced by vibrating those molecular bonds? Give it up.
I refer you to:
https://climatemodels.uchicago.edu/modtran/
https://climatemodels.uchicago.edu/modtran/modtran.doc.html
“The MODTRAN model simulates the emission and absorption of infrared radiation in the atmosphere. The smooth curves are theoretical emission spectra of blackbodies at different temperatures. The jagged lines are spectra of infrared light at the top of the atmosphere looking down. The total energy flux from all IR light is labeled as Upward IR Heat Flux, in units of Watts/meter². The model demonstrates the effect of wavelength-selective greenhouse gases on Earth’s outgoing IR energy flux.”
Ignoring completely every other method of energy transfer in the atmosphere. !
D’OH !
Mr niceman:
It seems to have escaped your notice that the atmosphere is surrounded by the vacuum of space.
Just how do you propose that “energy transfer” is done via conduction or convection into it?
I’ll let your brain-cell ponder that – second thoughts, no, it would take too long and you’ll just give your usual brain-fart of a response.
The spectra are measured from TOA:
“….spectra of infrared light at the top of the atmosphere looking down”.
As such that is what is seen from space.
The process of GHGs attenuating LWIR that exits Earth radiated from the Earth’s BB temp of 255K.
The surface has an ave temp of 288K.
I’ll let you brain-fart an answer for that too.
https://www.drroyspencer.com/2011/12/why-atmospheric-pressure-cannot-explain-the-elevated-surface-temperature-of-the-earth/
“Thought Experiment #2 on the Pressure Effect
Imagine we start with the atmosphere we have today, and then magically dump in an equal amount of atmospheric mass having the same heat content. Let’s assume the extra air was all nitrogen, which is not a greenhouse gas. What would happen to the surface temperature?
Ned Nikolov would probably say that the surface temperature would increase greatly, due to a doubling of the surface pressure causing compressional heating. And he would be correct….initially.
But what would happen next? The rate of solar energy absorption by the surface (the energy input) would still be the same, but now the rate of IR loss by the surface would be much greater, because of the much higher surface temperature brought about through compressional heating.
The resulting energy imbalance would then cause the surface (and overlying atmosphere) to cool to outer space until the rate of IR energy loss once again equaled the rate of solar energy gained. The average temperature would finally end up being about the same as before the atmospheric pressure was doubled.”
Putting it simply with an analogy:
You pump up your bicycle tyres.
They get hot because of the gas laws. Compressing air within the same volume heats it.
You go away for a few hours.
You come back and measure the tyres temp.
They have gone cold.
Ergo pressure alone cannot maintain the heat in the air.
In an atmosphere it will cool as LWIR escapes to space.
That insight seemed to escape Nikolov…..
The effect is a one-time instantaneous one.
IE: it is the act of compression that is the cause AND not the continuation of the increased pressure.
If that were the case then Nikolov would have discovered a source of perpetual free energy.
Do away with nuclear (fission or fusion) and we could all have a tank in the garage containing air at a ridiculously high pressure that would be permanently at a high temp as a result. Set up a circulation of piping from it and, voila, you have a perpetually heated house with no energy required into the tank.
And therefore free!!
Another load of anti-science kamal-speak gibberish, pertaining to NOTHING.
Not one part of your comment was remotely related to anything real in the atmosphere.
You have proven you have absolutely zero clue what you are talking about. !
You seem to think that gravity isn’t always operating
How more dim-witted can you get !!
“You seem to think that gravity isn’t always operating”
Dear God, help the afflicted!
Err, is not the tyre always “operating”, oh nice one!?
Of course it is !!!!!!!!!!
Thing is, the air molecules are only compressed together the once (hypothetically) and not constantly (as in being apart then repeatedly being pushed closer cyclically due to gravity).
Gravity only does that the once (if it could be switched on from off).
Air molecules are in closer proximity due to the compression but the heating ONLY takes place in the compression process.
Pushing stuff together causes them to warm.
Leaving them pushed together doesn’t.
Else your car/bike tyres would be constantly warm.
And we would have free energy.
You may have noticed that we don’t
The process that gravity does is the same. It squeezes air molecules closer. When that has happened the air cools, in a tyre and in the atmosphere.
This, it seems, is for the benefit of those with the smallest of science knowledge who are not deluded by ideology.
Or perhaps for someone like Neutral… who comes here and sees only the manic thread-bombing of themanwhothinks he’snice and lap-dog terrier partner karlo.
“The process of GHGs attenuating LWIR that exits Earth radiated from the Earth’s BB temp of 255K.”
What a load of anti-science BS mantra.
You think the atmosphere’s mass does nothing.. so funny.
Your ignorance is truly profound
See above.
Earth’s atmospheric mass is constantly under the pull of gravity.
Then it indeed does nothing in regard to magically staying warm because of its “weight”.
Other than forming a lapse rate (LR) that corresponds to -g/Cp.
The surface temp is set by the GHE.
Which sees Earth’s BB temp of 255 K moved vertically such that more LWIR photons escape to space than back-radiate.
At that point -g/Cp down to the surface gives us the temp there. Currently around 288 K but slowly rising.
Oh, and you’re most welcome.
If you would like the rest of your immense ignorance corrected just ask.
I can’t help but wonder if in RL Blanton might be an editor for one of the corrupt Fake Data climatology journals whose real job is to keep the hoax alive.
Anthony is a senior meteorologist at the UK Met Office. I have no issues with him or his comments nor would I attempt to engage him in a deep debate about meteorology.
However, that institution is run by monkeys who claim minimal margins while 29% of UK weather stations operate with a ±5°C uncertainty.
He is quite arrogant, no doubt.
The issue is how to measure the impact. Arsenic killing a person provides a pretty accurate measurement for its impact on the human body. Actually measuring the impact of CO2 on the temperature of the atmosphere represents a totally different problem.
While UAH is only a metric, not a true temperature, it is given in units of temperature. The problem with that is UAH has no way to measure path loss at any sample point in order to weight the irradiance it receives in the microwave bands. If it can’t measure the path loss for the radiance then it can’t actually determine temperature without a large measurement uncertainty. That measurement uncertainty makes it impossible to actually determine the impact to temperature by CO2. The measurement uncertainty subsumes the possible difference making it impossible to identify.
We all have voices in our heads telling us sciency things but, for comments like that, it’s best to show everyone your full mathematical analysis – if you want to have any credibility that is (especially on this site).
OISST V2.1 is a portmanteau statistic, cobbled together out of satellite-measured Sea Surface Skin Temperatures, buoy measurements, ship readings and the kitchen sink.
It is officially an ESTIMATE — with the caveat “The monthly temperatures displayed here are estimates specific to OISST, and any apparent record high or low values in OISST should be considered with caution and evaluated against other datasets.” [ source ]
The weather and climate effects of SST(skin) are doubtful — as that is the temperature of the top few millimeters of ocean water and represents mostly the incoming solar energy plus or minus the mixing caused by winds speeds.
The entire active range of the graph is 1 degree. Anyone familiar at all with the seas knows that 1 degree C is nothing compared to the actual range found in water temperature by depth — even a few feet or meters can have a 10 degree difference.
SST(skin) is like measuring land temperature taken on roadway surfaces….
And the estimates area plotted without any uncertainty limits — this is a big part of the deception. Adding ±2°C would triple the length of the y-axis.
Why only ±2°C?
According to the experts here, uncertainty grows the more you average. If the daily average is based on 1000 measurements with an uncertainty of ±1°C, the uncertainty in a daily value would be ±31°C. The uncertainty in the annual average would be ±600°C.
Of course, if there is as much as ±2°C uncertainty, the claims of this article are nonsense. How do you know temperatures are back to those of 2015?
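For anyone following the arithmetic in this exchange, here is a minimal sketch of the two scalings being argued over, assuming a ±1°C standard uncertainty per reading and the reading counts quoted above (the numbers simply illustrate each side's formula, not an endorsement of either):
```python
import math

u_single = 1.0            # assumed standard uncertainty per reading, deg C
n_daily = 1000            # readings per daily value (figure used above)
n_annual = 365 * n_daily  # readings per year

# Scaling claimed above (uncertainty of a *sum* of N readings): u * sqrt(N)
print(u_single * math.sqrt(n_daily))    # ~31.6  -> the quoted +/-31 C
print(u_single * math.sqrt(n_annual))   # ~604   -> the quoted +/-600 C

# Conventional scaling for the *mean* of N independent readings: u / sqrt(N)
print(u_single / math.sqrt(n_daily))    # ~0.03 C
print(u_single / math.sqrt(n_annual))   # ~0.002 C
```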
Weasel.
How do we know the temperature back to 1850 to 7 decimal places?
We don’t.
And yet UKMet “data” shows otherwise.
https://www.metoffice.gov.uk/hadobs/hadcrut5/data/HadCRUT.5.0.2.0/analysis/diagnostics/HadCRUT.5.0.2.0.analysis.summary_series.global.annual.csv
Your source says it is only known to within about 0.05 C.
Where does the 0.05 C come from?
Columns C and D.
ask a silly question 🙁
Which is arrant nonsense…
… based on a really bad understanding of measurement procedure and accuracy calculations.
You didn’t look at the link, did you? LOL
I looked. The very first row in the spreadsheet shows an anomaly of -0.4177114. How do you get an anomaly out to 8 decimal places from temperatures that were probably recorded in the units digit? How do you even get an anomaly with *any* decimal places from temperatures recorded in the units digit?
For temps recorded in the units digit the uncertainty interval should be somewhere between +/- 1C and +/- 0.5C.
The spreadsheet shows far less uncertainty than this for the 1800’s data. For the 95% it shows somewhere between -0.05C and -0.3C.
This data just isn’t believable. It simply does not follow basic metrology rules or significant digit rules for physical science.
This is what passes for “settled science”.
Of course I looked. That’s how I know it is specified via columns C and D.
Still zero mathematical understanding.
You need to finish Junior High, bellboy !
Each measurement is individual. Each has an uncertainty of ±1°C. Multiple measurements, even from the same temperature reading device, can’t undo that. Using multiple measuring devices it is simply not possible. So are you talking one measurement device or many?
The average of many measurements.
There’s that lack of mathematical understanding.. Well done. !
Yep!
In order for the law of large numbers to be used you would need to make all the measurements with the same measuring device preferably all at the same time as well. You can’t get 4 significant figures from a plus or minus one degree device. Not honestly anyway.
Honesty doesn’t matter when you’re on a crusade to save the world from that monster under your bed.
bellboy still thinks that monster under its bed is real.
Nope. We oilfield trash often use over a dozen instruments and procedures, performed over many years, to evaluate any one of many dozens of reservoir parameters. We assess the resulting uncertainty, use those parameters as reservoir simulation inputs, make multiple realization runs, tie them into stochastic economic analyses, secure CapEx and OpEx budget funding, and make our employers trillions.
“uncertainty” s/b “uncertainties”. bigoilbob regrets the error…
You don’t understand anything about uncertainty, blob.
Fail. LOL.
Ah yes, here is Doktor Profesor Weasel of Metrology invoking the magic of averaging that cancels all uncertainty.
Weasel.
I can’t help you with your musteloid obsession. But in answer to your maths problem – no, I’ve explained to you many times that averaging does not cancel all uncertainty.
What you “explain” is nonsense.
Only if you are averaging multiple measurements of the same quantity.
This does NOT apply to air temperature gyrations.
And you still have no understanding of metrology, let alone uncertainty, which you demonstrate with each and every verbal tirade.
“What you “explain” is nonsense.”
Strange, it’s nonsense even when I’m agreeing with Karl.
You cannot remove all uncertainty by taking multiple measurements. This is in part because multiple measurements can only, at best, reduce the uncertainty, not eliminate it, and the rate at which you reduce the uncertainty depends on the square root of the sample size, so it’s a diminishing return. And even if you could take an infinite number of measurements you would still only be removing the random independent uncertainty, and this still leaves systematic errors, biases in the methodology etc. And then there’s also the question of the definition of the measurand.
This applies both to measurement uncertainty, as well as the uncertainty coming from random sampling.
When pinned down, you always start talking out of the other side of the mouth.
And you ignored the fact that air temperature measurements DO NOT QUALIFY.
“the rate at which you reduce the uncertainty depends on the square root of the sample size”
For the umpteenth time, you are describing the sampling error, not the measurement uncertainty. You can increase the sample size to infinity and all you do is decrease the sampling error, YOU DO NOT REDUCE THE MEASUREMENT UNCERTAINTY ONE IOTA!
You have *still* not internalized the difference between precision and accuracy.
You can put every bullet in the same hole, high precision, but if that hole is in a tree 20 feet to the left of the target your accuracy is trash. Dividing by sqrt(n) only tells you precision, how close to being in the same hole each shot was. It tells you NOTHING about how accurate your shots were!
Random errors and systematic bias ARE ACCURACY METRICS, not precision metrics for how closely you have determined the population average.
You just keep conflating sampling error with measurement uncertainty all while you claim you don’t!
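A minimal Monte Carlo sketch of the shooting analogy as stated here, assuming every shot carries the same constant offset plus some random scatter (the 3 ft offset, 0.5 ft scatter and shot count are placeholders): averaging improves the precision of the mean but leaves the offset untouched.
```python
import numpy as np

rng = np.random.default_rng(0)

bias = 3.0      # assumed constant offset: every shot lands 3 ft from the bullseye on average
scatter = 0.5   # assumed random spread per shot
n = 10_000

shots = bias + rng.normal(0.0, scatter, size=n)

print(shots.std(ddof=1) / np.sqrt(n))  # ~0.005 : precision of the mean improves with n
print(shots.mean())                    # ~3.0   : the offset (accuracy) does not improve at all
```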
One day you will actually read and respond to what I’m saying rather than your own fantasy.
I was talking about measurement uncertainty. I’m explaining why it’s not possible to “eliminate” all measurement uncertainty by averaging multiple measurements.
“You can increase the sample size to infinity and all you do is decrease the sampling error, YOU DO NOT REDUCE THE MEASUREMENT UNCERTAINTY ONE IOTA!”
Shouting doesn’t make it any more true. It doesn’t matter if you are propagating measurement uncertainty over multiple measurements, or looking at the uncertainty of an average from a sample – because the maths is the same in both cases. If you don’t agree with applying probability theory to propagating measurement uncertainty, you need to explain why you think the General Equation for Propagating Uncertainty is wrong, and supply a reference for what technique you would use in its place.
“You have *still* not internalized the difference between precision and accuracy.”
Read what I said. “And even if you could take an infinite number of measurements you still would only be removing the random independent uncertainty, and this still leaves systematic errors, biase[s] in the methodology etc.”
What do you think accuracy (or trueness) is if not the systematic error in the measurments?
“Dividing by sqrt(n) only tells you precision, how close to being in the same hole each shot was.”
Hence why I’m saying you cannot eliminate uncertainty.
What you claimed:
“the rate at which you reduce the uncertainty depends on the square root of the sample size”
As you’ve been told zillions of times, this DOES NOT APPLY to combining a zillion air temperature measurements into the anomalies so beloved by climastology.
“Hence why I’m saying you cannot eliminate uncertainty.”
More equivocation. You just ignore non-random elements of uncertainty, except to pay lip service as thus:
“What do you think accuracy (or trueness) is if not the systematic error in the measurments[sic]?”
Another indication you have no real knowledge of the subject, yet you hold yourself out as an expert on “the maths”.
You said: “the rate at which you reduce the uncertainty depends on the square root of the sample size”
“I was talking about measurement uncertainty.”
No you weren’t. Sample size determines SAMPLING UNCERTAINTY, not measurement uncertainty!
“I’m explaining why it’s not possible to “eliminate” all measurement uncertainty by averaging multiple measurements.”
You can’t do that by describing SAMPLING ERROR.
If you have multiple measurements of the same thing using the same instrument under the same condition with NO SYSTEMATIC BIAS you can assume the average is the most viable ESTIMATE of the measurand. But even with this you also have to assume that *all* of the random error is Gaussian – which leaves you the burden of justifying the assumption.
“ because the maths is the same in both cases.”
The maths are *NOT* the same. How many times does it need to be explained to you? If you fire four shots at a target and they all wind up 3ft from the bullseye and are evenly spaced around the 3ft circumference, how do you increase your accuracy by averaging them? How do you increase the accuracy by dividing (4*3) by sqrt(4)? How do you increase the accuracy by dividing 3′ by sqrt(4)?
It’s no different with the global temperature data set. Think of those temperatures as a set of bullet holes surrounding a bullseye. Do you really think you can make the accuracy of the average better by dividing by how many shots you took? Suppose you have 10,000 shots, all of them landing on a 3ft radius circle from the bullseye? Why do you think the uncertainty of their average can be divided by sqrt(10000) in order to get a valid measurement uncertainty interval?
“Hence why I’m saying you cannot eliminate uncertainty.”
The measurement uncertainty is how far that one hole is from the bullseye! It is *not* how many shots you put in that one hole that could be 20ft away from even the target let alone the bullseye!
He doesn’t understand the hole he has dug himself into.
“No you weren’t.”
Oh yes I was.
This was ultimately in reference to leefor saying
“Sample size determines SAMPLING UNCERTAINTY, not measurement uncertainty!”
If you measure something N times, you are taking a sample of N measurements.
“The maths are *NOT* the same. How many times does it need to be explained to you?”
Once would be sufficient.
“If you fire four shots at a target and they all wind up 3ft from the bullseye and are evenly spaced around the 3ft circumference how do you increase your accuracy by averaging them?”
What has that got to do with using the same maths for a sample of different things and measurement uncertainty propagation? Systematic errors are systematic errors.
“It’s no different with the global temperature data set.”
Apart from the obvious difference that global temperature data sets are not derived by simply averaging a random sample.
“Think of those temperatures as a set of bullet holes surrounding a bullseye. Do you really think you can make the accuracy of the average better by dividing by how many shots you took?”
If you mean take an average of multiple shots, yes. The average of 100 shots surrounding the bullseye will be a better indicator of the position of the bullseye than a single shot.
“Suppose you have 10,000 shots, all of them landing on a 3ft radius circle from the bullseye? Why do you think the uncertainty of their average can be divided by sqrt(10000) in order to get a valid measurement uncertainty interval? ”
You are not dividing the uncertainty of the average by √10000. You divide the standard deviation of the shots by √10000. And I think this will be correct (on the assumptions that the shots are independent and have no systematic bias) because that’s what the maths tells you, as has been understood for a century or more.
“The measurement uncertainty is how far that one hole is from the bullseye!”
No. That’s the measurement error. You don’t know what that value is, hence there is uncertainty. You can estimate the range of distances that are most likely, and that’s the value given to the uncertainty.
“It is *not* how many shots you put in that one hole that could be 20ft away from even the target let alone the bullseye!”
Would you just stop changing the subject. First you say you have 10000 shots all falling on a circle centered on the bullseye. Now you are claiming all 10000 shots went through a single hole 20ft from the bullseye. Clearly in the latter case you have a very precise gun with a very bad systematic error. The shots are precise, but not true. If you know that the gun has up to a 20 foot systematic error, you can state it as a dependent uncertainty, or do what the GUM says and correct for the known error, and apply an uncertainty reflecting the uncertainty of that correction.
If you have no idea what the systematic error is, then you are out of luck. That’s why I keep suggesting it’s better to use many different guns rather than use the same gun 10000 times.
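For contrast, a minimal sketch of the scenario described a few lines up in this reply: shots spread evenly around a 3 ft circle with no systematic offset, which is precisely the assumption the two sides dispute.
```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)

# Every shot is exactly 3 ft from the bullseye, evenly spread around it
x = 3.0 * np.cos(theta)
y = 3.0 * np.sin(theta)

print(x.mean(), y.mean())          # both near 0: the average lands near the bullseye
print(x.std(ddof=1) / np.sqrt(n))  # ~0.02 ft: the standard-deviation-over-sqrt(n) figure quoted above
```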
What a totally irrelevant and erroneous load of gibberish !!
He has a special talent here.
Error is not uncertainty!
You can’t even learn basics, but you thrive on beating people over the head and insisting that you are the “expert”.
He’s a monkey.
Averaging an intensive property cancels *NO* uncertainty. You can’t average an intensive property and get anything physically meaningful. That means that the measurement uncertainty isn’t physically meaningful either. Even if it was physically meaningful the *average* uncertainty is *not* a measure of the accuracy of the average.
“You can’t average an intensive property and get anything physically meaningful.”
You keep telling yourself that, and I keep disagreeing. Just asserting it for the hundredth time does not advance the argument. All I can do is again point out the contradiction between you thinking the average temperature has no meaning, and the fact you are quite happy to use the average temperature when you think it proves your point.
“That means that the measurement uncertainty isn’t physically meaningful either.”
So why have you been arguing you know what it is the past three years? If it has no meaning why not just ignore it and accept the pause uncertainty based on the stated values, rather than your meaningless measurement uncertainty?
It’s not *me* saying this. You’ve been given reference after reference stating it.
Wikipedia: “According to International Union of Pure and Applied Chemistry (IUPAC), an intensive property or intensive quantity is one whose magnitude is independent of the size of the system.[3] An intensive property is not necessarily homogeneously distributed in space; it can vary from place to place in a body of matter and radiation. Examples of intensive properties include temperature, T; refractive index, n; density, ρ; and hardness, η.”
from chem.libretexts.org: “Physical properties can be extensive or intensive. Extensive properties vary with the amount of the substance and include mass, weight, and volume. Intensive properties, in contrast, do not depend on the amount of the substance; they include color, melting point, boiling point, electrical conductivity, and physical state at a given temperature. For example, elemental sulfur is a yellow crystalline solid that does not conduct electricity and has a melting point of 115.2°C, no matter what amount is examined (Figure 2.3.1). Scientists commonly measure intensive properties to determine a substance’s identity, whereas extensive properties convey information about the amount of the substance in a sample.”
from clm.org: “On the other hand, an intensive property is a physical or chemical property that is independent of the size or quantity of a system. In other words, intensive properties are non-additive, meaning that the value of the property remains the same regardless of changes in the size of the system.”
Climate science, and you as well, continue making the mistake of thinking that if the temperature here is A and the temperature there is B then there must be an average temperature C that is (A+B)/2. The average (or mean) is a STATISTICAL DESCRIPTOR of set of data. It is *NOT* a physical measurement. It has no physical meaning. The average value of a six sided die is 3.5. Try to physically roll a 3.5!
This whole idea of “there must be an average value” is the justification for climate science assuming they can “average” the temperatures of stations within a given distance of an unknown station and use it to infill the temperature at the unknown station. It’s a physical science farce. And you can’t seem to accept that fact.
“It’s not *me* saying this.”
And yet none of those quotes say you cannot average an intensive property.
“The average value of a six sided die is 3.5. Try to physically roll a 3.5! ”
Which has nothing to do with intensive properties, just your usual inability to understand that an average does not have to exist as an individual element to be useful.
More weaselly equivocation.
More random insult generation, which makes no specific point about what I said. If you think I’m equivocating, say what you disagreed with.
More random anti-science gibberish
Maybe you should take St. michel’s and Nitpick Nick’s advice and killfile me.
“And yet none of those quotes say you cannot average an intensive property.” It does say it is not additive. If it is not additive (or subtractive), it cannot be averaged.
In fact, it doesn’t “cancel” any uncertainty.
As usual you didn’t answer his question!
I would appreciate it if you would explain something to me. The Empirical Rule in statistics says that with a normal probability distribution, the standard deviation is approximately the range divided by 4; thus, ± 2 standard deviations would encompass ~95% of all samples.
Therefore, if we want to estimate the standard deviation for the average global surface air temperature (GSAT), say for a day in mid-July, all we really need to know is the hottest diurnal temperature (probably in the Sahara or DV) and the coldest, probably the Russian station in Antarctica. Let’s say that the hottest temp is about 50 °C and the coldest temp is about -60 °C, for a range of 110 °C. Therefore, the estimated standard deviation would be about 28 °C, and the 2-sigma uncertainty (95% probability) would be about ± 55 °C. (The distribution is slightly skewed.)
My question to you is why does the Rule of Thumb not give a small estimate to the nearest 1/100th °C, which is commonly reported for a global average, but instead gives us an estimate that is in approximate agreement with your ±31°C, more than 3 orders of magnitude larger?
To get your “±600°C,” did you divide by 365,000 samples?
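The range-rule arithmetic in this question, spelled out with the 50°C and -60°C extremes assumed above:
```python
t_max, t_min = 50.0, -60.0          # assumed hottest and coldest diurnal readings, deg C
t_range = t_max - t_min             # 110 C
sd_estimate = t_range / 4           # ~27.5 C, the "range rule" rough standard deviation
print(sd_estimate, 2 * sd_estimate) # ~27.5 and ~55 C (the quoted +/-55 C two-sigma figure)
```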
You are talking about the deviation of global temperatures, not the uncertainty of the average.
Assuming a completely random sample of size N you would get a standard error of the mean equal to the standard deviation divided by √N. Of course any actual global average is not based on a random sample, and any estimate of the uncertainty is much more complicated. But the idea that the uncertainty of the average is the same as the standard deviation of all temperatures over the earth is just absurd.
“To get your “±600°C,” did you divide by 365,000 samples?”
No, I multiplied by √365,000. It’s what some here insist is the correct way to calculate the measurement uncertainty.
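And for the record, the two calculations contrasted in this exchange, under the idealised assumptions each side states (a 28°C spread, 365,000 readings, ±1°C per reading):
```python
import math

sd = 28.0      # assumed standard deviation of global temperatures (from the comment above)
n = 365_000    # assumed number of readings

print(sd / math.sqrt(n))   # ~0.046 C : SEM under a pure random-sampling assumption
print(1.0 * math.sqrt(n))  # ~604 C   : the "+/-600 C" figure, i.e. +/-1 C scaled up by sqrt(n)
```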
You truly are displaying your mathematical ignorance today , bellboy !!
Typical karlo-like response. Just assert the maths is wrong with zero explanation of what you think is wrong. Any request for an explanation will probably be met by more childish name calling, and an excuse along the lines of “you wouldn’t understand the answer”.
Stop whining.
I say it’s wrong because it is wrong, and it is pointless explaining why, because you don’t have the capacity to comprehend.
You have obviously NEVER had anything to do with science or engineering.
You are basically just a low-end amateur.
With the temerity to lecture experienced professionals on subjects he doesn’t understand.
Weasel.
Who are these “some”? Name them, if you dare.
You are either lying, or just demonstrating your abject lack of expertise in the subject (again).
Tim Gorman and Jim Gorman for a start. Others, such as yourself, may or may not believe it. You keep defending the Gormans and attack me every time I disagree, but that may be just because you don’t understand what they are claiming.
Thank you for confirming that you are a liar.
What lie? This is your usual MO, just claim someone is lying with no actual lie identified. And you think I’m the weasel.
Are you saying it’s a lie that the Gormans argue that uncertainty increases with sample size? If so it makes it clear how little attention you are paying to these arguments.
“It’s what some here insist is the correct way to calculate the measurement uncertainty.” — You.
And I said who the “some” were. I’ve still no idea why you think that’s a lie.
Here for instance is Jim making exactly my point
https://wattsupwiththat.com/2024/12/03/uah-v6-1-global-temperature-update-for-november-2024-0-64-deg-c/#comment-4004543
The individual uncertainties are ±1.8, but he claims the uncertainty of the average is ±3.
You don’t understand uncertainty AT ALL, and you don’t see the corner you’ve painted yourself into.
It is YOU and the climate trendologists who claim absurd 10mK uncertainties for your abuses of air temperature data.
To justify this YOU claimed, and continue to claim, this is valid because of “averaging”.
Now you are trying to weasel out by claiming that averaging does not reduce all random uncertainties, plus it doesn’t reduce “systematic error” (once again demonstrating you STILL don’t understand that uncertainty is not error).
None of this equivocation helps your claims —
First, you have no idea how much uncertainty is random and how much is non-random. They are treated the same in metrology.
Second, if non-random uncertainty doesn’t go away with averaging, then it accumulates as you glom more and more temperatures together.
Third, the same is true of any random uncertainty that isn’t removed with your averaging magic.
Fourth, you STILL didn’t grasp the point of Kip’s comment.
Oops.
You *still* can’t tell the difference between the standard deviation of the sample means (mis-named by the statistics world as “standard error of the mean”) and measurement uncertainty.
Your “standard error of the mean” is a metric for SAMPLING ERROR. It has nothing to do with accuracy of the mean. If you had a data point for every single member of the population you could calculate the mean with a value for “standard error of the mean” of 0 (zero).
And that would tell you *NOTHING* about how accurate that mean you so carefully calculated actually is.
Why you continue to confuse sampling error with measurement uncertainty is just beyond me.
“mis-named by the statistics world as “standard error of the mean””
Yes, why would a statistician know what the term they invented was named. Lots of things in maths and any other field tend to have names that can be misunderstood by those outside the field. Error does not mean a mistake, significant does not mean important, deviation does not mean a perversion.
There isn’t really any difference between deviation and error in statistics. But by convention the term standard deviation and standard error are used for different things to avoid confusion. Standard deviation for parameters of a distribution, standard error for the deviation of a sampling distribution.
This helps to avoid confusion, but I guess you prefer the confusion.
This is the heart of your problem.
Go learn some real metrology.
“Yes, why would a statistician know what the term they invented was named.”
Because most statisticians are never trained on data of the form “stated value +/- measurement uncertainty” but only on the form “stated value”.
*YOU* are a prime example!
“There isn’t really any difference between deviation and error in statistics.”
Even this statement shows how uninformed you are concerning metrology. Deviation, as in standard deviation, is a STATISTICAL DESCRIPTOR of a data set. In the real world error is unknowable when it comes to measurement; it is not a physically quantifiable value. That’s the whole underlying concept in the GUM! Even if you *could* know the true value of a property associated with a measurand, error is *NOT* a STATISTICAL DESCRIPTOR.
You still haven’t learned that the standard deviation is only an indicator of measurement uncertainty in specific situations, it is *NOT* generally useful in most real world situations. Go look up what the GUM says about Type B measurement uncertainty.
Most real world measurement situations involve measuring different things with different instruments under different conditions – global temperature data is a prime example. The standard deviation of that data tells you absolutely nothing about the measurement uncertainty associated with the data or anything derived from that data. You *must* propagate the individual measurement uncertainties associated with each data point into a coherent whole. That means *adding* them all up using the most appropriate propagation method.
Finding the average measurement uncertainty of the data points does nothing but spread the total measurement uncertainty around the data points in equal increments. That average measurement uncertainty is not, in itself, a measurement, nor is it even associated with a specific measurement. The average measurement uncertainty is a STATISTICAL DESCRIPTOR and has no real meaning in the real world. You can’t *measure* the average value.
If you would just learn this basic rule and take it to heart you’d be much further ahead: “statistical descriptors are *NOT* measurements”.
“Because most statisticians are never trained on data of the form “stated value +/- measurement uncertainty” but only on the form “stated value”. ”
This was in response to the question of why Tim thinks Standard Error of the Mean is an incorrect term.
“Even this statement shows how uniformed you are concerning metrology. Deviation, as in standard deviation, is a STATISTICAL DESCRIPTOR of a data set.”
Yes, that’s the point. Standard Deviation is a descriptive statistic. It describes the data, using it in place of the standard error is confusing, as the standard error of the mean is an inferential statistic.
“In the real world error is unknowable when it comes to measurement, it is not a physical quantifiable value.”
Standard error is not “an error”, any more than Standard Deviation is a deviation. It’s a value that describes the expected (as in average) error. You do not generally know what the actual error is, if you did there would be no need for a standard error.
The same applies if you call it the standard deviation of the mean. It’s expressing an expected deviation between your sample mean and the true mean. You still don’t know what the actual deviation is.
“Standard error is not “an error”, any more than Standard Deviation is a deviation.”
ROFL! Standard error is not an error? Yet you defend it being described as an error? Why do you think *I* use the term “standard deviation of the sample means”? Why do you think I keep telling you that you need to abandon the term “standard error of the mean”? If standard error isn’t an error then why is it named an error?
When it comes to measurements the standard deviation *IS* a deviation. It is the range of values that can reasonably be assigned to the measurand and is given by a plus and minus value.
You just keep on showing how you live in statistical world instead of the real world of measurements!
Another example of him weaseling and talking out of both sides of his mouth.
He still thinks that in the equation (standard error) = SD/sqrt(n) that the SD is the standard deviation of a single sample. He doesn’t realize that assuming the SD of the sample is the SD of the population ADDS uncertainty to the determination of the average!
Nope! And he’ll never learn/acknowledge it because his entire house of cards is built on top of it.
Yes I do think SD is the standard deviation of a single sample. It’s an estimate of the population standard deviation σ. The fact that you still think that is an odd thing to say, just proves how little you understand on the subject.
And yes, using a sample SD adds uncertainty. This is to some extent handled by the n – 1 factor, and using a Student distribution. But the main lesson is the larger the sample size the better. Just as the standard error of the mean reduces with sample size, so too does the standard error of the standard deviation.
If you want better ways to handle all these uncertainties, then you need to use more complicated models and MC or MCMC methods.
ESTIMATES, i.e. guesses, ADD measurement uncertainty. They don’t decrease it. Every time you assume the SD of a single sample is exactly the same as the population SD, i.e. a true value, you have ADDED an additional component of measurement uncertainty.
As I’ve shown you, Bevington shows that adding sample size is a self-limiting proposition because of the addition of outliers generated by random fluctuations. Growing the sample size can, in and of itself, increase sampling error.
Measurement uncertainty of the temperature data doesn’t require very sophisticated methods of statistical analysis.
Exactly what is the distribution function you would use to generate anything associated with Monte Carlo? Temperatures have wide natural variability (by latitude, longitude, elevation, geography, terrain, etc) – how do you write a distribution function for this?
There is not even any guarantee that global temperature is even Gaussian because the earth isn’t a perfect sphere let alone the different geography of the oceans and continents!
“Standard error is not an error?”
Correct.
“Yet you defend it being described as an error? ”
No.
“Why do you think *I* use the term “standard deviation of the sample means”?”
Because you prefer to make up your own terms, and like the confusion it causes? I keep pointing out that absolutely none of your sources uses that term. Nobody uses means plural. Nobody inserts the word ‘samples”.
Taylor calls it the Standard Deviation of the Mean (SDOM), and notes that another common name for it is standard error of the mean. The GUM uses the term Experimental Standard Deviation of the Mean, with an unexplained note to the effect that it is incorrect to call it standard error of the mean.
” If standard error isn’t an error then why is it named an error?”
If standard deviation isn’t a deviation why is it named a deviation?
The answer is that both are compound phrases that have specific meanings in statistics. They both describe the expected deviation or error, not a single deviation or error.
“When it comes to measurements the standard deviation *IS* a deviation.”
It is not. I’m sure much of your confusion stems from you just assuming you know what these terms mean. Deviation is the difference between the mean and a value. Look at Taylor page 98 if you don’t understand this.
The standard deviation is, roughly, the root-mean-square of all the deviations.
“You just keep on showing how you live in statistical world instead of the real world of measurements!”
You keep demonstrating you don’t understand either world.
Projection time.
“standard error of the mean”
The measurement uncertainty interval encompasses those values that can be reasonably assigned to the measurand. The sampling error (i.e. your “standard error”) does *NOT* define that interval, the propagation of the measurement uncertainty does (as does a metric using the standard deviation of the data).
You have yet to internalize the fact that the variance of a data set is a direct metric for the accuracy of the mean. A wide variance means those values surrounding the mean are very close to the value of the mean and which of those values is the *actual* mean is “uncertain”, thus defining the measurement uncertainty. That has nothing whatsoever to do with sampling error.
“The measurement uncertainty interval encompasses those values that can be reasonably assigned to the measurand.”
Why do you always misquote the GUM? It’s the “dispersion of the values that could reasonably be attributed to the measurand”, not values that could be assigned.
“The sampling error (i.e. your “standard error”) does *NOT* define that interval”
It does exactly what the definition says. Characterizes a dispersion of values that could reasonably be attributed to the mean. If you derive a confidence interval from the SEM, you are saying that it’s the range of values the mean could have which would give you a reasonable expectation of getting the observed sample. That is the values it would be reasonable to attribute to the mean.
“the propagation of the measurement uncertainty does”
The propagation only tells you about the measurement uncertainty, but guess what – the method is the same. The measurement uncertainty is just the standard error of the mean of all the individual uncertainties.
“You have yet to internalize…”
It never occurs to you that there are good reasons why I don’t. Whenever you use that phrase it normally means what you are saying is rubbish.
“…the fact that the variance of a data set is a direct metric for the accuracy of the mean.”
See.
How is the variance of a data set a measure of the accuracy of the mean? If all your data points are identical your variance is zero, but you’ve no way of knowing if that means the mean is accurate, because as you keep pointing out there could be a large systematic error in all your measurements.
More importantly, if say you are using Spencer’s assumed 28°C standard deviation for global temperatures, with a variance of 784°C², what does that 784 tell you about the accuracy of the mean?
“A wide variance means those values surrounding the mean are very close to the value of the mean and which of those values is the *actual* mean is “uncertain”, thus defining the measurement uncertainty.”
Gibberish. If you’ve taken a large sample of measurements with a standard deviation of 28°C, and say a mean of 14°C, then you have a good idea that the true mean is close to 14, and so know which values are close to that mean and which are not.
Well , at least you titled your last paragraph correctly.
It certainly is totally meaningless gibberish.
A classic bellcurveman rant, completely unreadable.
“According to the experts here, uncertainty grows the more you average.”
I’m not sure you’ve correctly reproduced what “the experts here” have actually said, Bellman, but I would have thought it was obvious to anyone who knows what the scientific concept of uncertainty is as to why and how “uncertainty grows the more you average”. It is simply because in averaging any set of numbers you automatically lose all the information you had about those individual numbers in the resulting average and you’ve no way of getting it back just from the average itself. This absolute loss of information represents a corresponding increase in your total uncertainty about the whole set of numbers that can only grow larger as the number of individual items in the set is increased. Hence, “uncertainty grows the more you average”.
You’ve pretty much nailed it. The statistical descriptor named “variance” is a direct metric for the uncertainty of the average value. As you add elements to a data set, variance inevitably increases because Σ(Xᵢ − μ)² goes up faster than n does, at least in most real world situations where you are measuring different things with different instruments under different conditions. Another explanation is that variance is related to range. As you add more and more measurements of different things taken with different instruments under different conditions, the greater the range of values you are likely to encounter – i.e. the range goes up, driving the variance up, which drives the uncertainty up.
As variance goes up, again in most real world situations, the “hump” around the average gets “shorter” and “wider”. That means that more of the values surrounding the average get closer to the average which in turn means that the uncertainty as to which value is the *true” average gets higher.
Statisticians and computer programmers never seem to understand exactly what statistical descriptors actually mean in the real world. They are not “data”. They are not “measurements”. Metrology is not a subject they study at all. I have yet to find a university level statistics book from a math major curriculum that addresses metrology; all the books I have found are in the engineering vein of study.
Most statisticians and computer programmers don’t even understand that measurement uncertainty modulates the standard deviation calculated from the stated values. I.e. if the standard deviation of the stated values is ±a then the actual uncertainty of the measurements is ±a ± u, so the uncertainty interval ranges from -(a+u) to +(a+u), and not from -a to +a. It’s the same thing with sampling. You can’t just find the standard deviation of the sample means to determine the standard error; you have to modulate that value with the propagated measurement uncertainty that accompanies the measurements in the samples.
Most climate scientists, statisticians, and computer programmers in climate science just use the meme that “all measurement uncertainty is random, Gaussian, and cancels” so they can focus only on stated values and ignore the measurement uncertainty. They don’t even weight the northern/southern temperatures used in the global average based on the different variances of the temperatures because of seasonal changes. They just jam everything together willy-nilly – and wind up averaging the diameters of apples and pineapples – which is nothing more than finding the average of a multi-modal distribution. You may as well average the heights of a Shetland pony with the heights of an Arabian horse and say the average is the average height of a horse.
uncertainty (well, dispersion) around the average (mean)
That dispersion is related to the reasonable values that can be attributed to the measurand. A larger variance means the dispersion is larger meaning that the range of reasonable values that can be assigned to the measurand is larger as well.
But this is unacceptable in climatology, the range must be made small by whatever means are at hand, or by hand waving.
That’s getting back to sampling, hence the SEM comes into play.
And, yes, s.d / sqrt(n) is applicable as an estimator for this.
We really need to look at all of the sources of uncertainty, how they propagate, and whether they cancel.
As far as I can see (as per the ream of paper example from ages ago), the resolution limit is irreducible, so a fixed +/- 1/2 the resolution limit applies as an uncertainty floor.
“We really need to look at all of the sources of uncertainty, how they propagate, and whether they cancel.”
So close. How about “tend to cancel as their number and variety increases”.
blob — another trendology ruler monkey who doesn’t understand what he yaps about.
The “monkeys” here are those who think that accuracy does not improve averages and trends, as the number and variety of those measurements increases. In fact, they are the same “monkeys” who go one step further and think that those averages and trends are significantly changed by them, for the evaluations commonly discussed here. IOW, they are typing the encyclopedia Britannica, and will get there sooner or later.
Just another blob word salad.
If you shoot ten arrows at a bullseye and they all are the same distance from the bullseye, how does the average value of that distance improve the accuracy of what you did?
How does it decrease the trend of your accuracy?
You *still* don’t understand precision and accuracy, not even after having it explained to you at least a dozen times.
How’z bout the path they lay, if you shoot a million of them at bulls eyes along the road, from the same distance from each bulls eye. Do they veer off the road? I’ll help. No, they trend exactly the same.
ROFL!
If they don’t hit the bullseye then the trend will be just as inaccurate as the data.
Assume the “true value” is 0 (zero), the bullseye. All of your shots hit exactly 3ft away from the bullseye. Your trend will be 3, a horizontal line. How is 3 = 0?
Your trend will be just as inaccurate as your data points!
“If they don’t hit the bullseye then the trend will be just as inaccurate as the data.”
Nope. Instead of yarning, let’s look at the evaluations that actually get posted here. Take any group of GAT data over time. Now, drop every value by a given “systematic” error. Does the trend change? Yes, rhetorical. I’ll credit you with knowing the answer. And you don’t even need partial deriv’s to see why…
Folks, I can already hear the buzz of the motorized goal post motor. I’m ready…
You’re insane — chem E is in sad shape if you are representative of the profession.
There are no “folks” (except maybe the voices in your head).
Reading comprehension. Chem E is probably in great shape. Ask Dr. Frank. The discipline laughs at his statistical delusions, but I’m sure he keeps up with opportunities.
As for the line I followed, Petroleum E, it is indeed in sad shape. US enrollees down well over 80%. Employment down quite a bit. Few international ex pat opportunities, with day rates down over 60% from my heyday 25-10 years ago. The hot house flower of US intense frac stimulation is dying, with almost no tier 1 acreage left, frac hits, competitive drainage, equipment cannibalization, hands leaving to go back to their backup jobs.
Post Arab spring, I ended up running completions for a leading east coast E&P for all of 2 weeks. It culminated with me deciding that I did not want to crown my career by arguing with Halliburton over pennies and more deeply polluting several West Virginia counties. My biggest achievement – keeping my kids out of the patch. It took both my words and my example. Often gone, on the phone day and night, moving from play to play, rotating in and out of conflict zones, 5 weeks on/5 weeks off….
Did you want me to read this mess?
Obviously you were not required to take any English writing courses.
I see. You think a trend of 3 is the same as a trend of 0.
You are assuming the slopes of the trend will be the same. The problem is that you can’t *prove* that. A trend line from data with measurement uncertainty can actually have a negative slope while the stated values alone give a positive slope. Using anomalies doesn’t help because they are based on absolute values that have their own measurement uncertainty.
That is only one piece of finding the uncertainty in the trend. What needs to be done is plotting every combination of error you can have.
Change only the 1st value up and find the trend.
Change only the last value up and find the trend.
Change only the first 2 values and find the trend.
Change only the last 2 values and find the trend.
Change the 1st & 3rd values and find the trend.
Change the nth & n-2 values and find the trend.
….
Do this for all the combinations you can have from the n values. You will end up with a bow tie shape within which any trend line you can draw will be inside the uncertainty window. You will see the trend line can even change the sign of the slope (see the sketch below).
That is what happens when you have uncertain data points. Simply adding the + value to all points and finding the new trend, or adding the – value to all points and finding the trend, doesn’t address all the different combinations when each data point can change independently.
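A rough sketch of the procedure being described (the series values and the ±0.5 per-point uncertainty are placeholders, not real data): shift every combination of points to its + or − limit, refit, and collect the spread of slopes.
```python
import itertools
import numpy as np

def slope(y):
    """Least-squares trend of y against its index."""
    x = np.arange(len(y))
    return np.polyfit(x, y, 1)[0]

y = np.array([0.1, 0.3, 0.2, 0.4, 0.3])   # placeholder series
u = 0.5                                    # placeholder per-point uncertainty

slopes = []
# Shift each point up, down, or not at all, in every combination
for shifts in itertools.product((-u, 0.0, +u), repeat=len(y)):
    slopes.append(slope(y + np.array(shifts)))

print(min(slopes), slope(y), max(slopes))  # the "bow tie" spread of possible trends
```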
“the bullseye”. Confused still. The number of “bullseyes” equals the number of data points used in the trend evaluation. Raise or drop them by a value, and how does that trend change?
I credit you with the intelligence to know you are dancing around this, out of embarrassment…
Still the nutter.
Because not all measurements have the same systematic bias! You don’t just offset ALL of the values by the same amount!
Take the paint on the stevenson screen. Do all those measurement stations get the same direct sunlight? Do they all have the same length of sun exposure? Do they all have the same microclimate? *THAT* is what the measurement uncertainty interval is supposed to account for! And the combination of all the station’s measurement uncertainty *does* impact the possible trend line slopes.
“Because not all measurements have the same systematic bias! You don’t just offset ALL of the values by the same amount!”
I knew that what I heard was that goal post motor warming up! You magically went from “ ten arrows at a bullseye and the all are the same distance from the bullseye” [bold mine], to “not all measurements have the same systematic bias”.
Yes, some “systematic errors” are big some are small. Some are up, some are down. What holds is that, the more of them and the more various they are – as in the evaluations under discussion – the more they resemble a set of distributed values. Even sillier of you, you imagine a scenario where these many, many, various “systematic” errors, line up over years to significantly change trends. I.e., your encyclopedia typing monkeys.
Keep monkeying away here in subterranea, G’s. It’s where you belong…
“I knew that what I heard was that goal post motor warming up! You magically went from “ ten arrows at a bullseye and the all are the same distance from the bullseye” [bold mine], to “not all measurements have the same systematic bias”.”
I assume from this that you think all the paint on all the temperature measuring stations have the same change at any point in time, whether the station has been in service for one year or ten years?
“Yes, some “systematic errors” are big some are small. Some are up, some are down.”
I assume from this you think that on some measurement stations the paint gets MORE reflective from sun exposure while others get equally LESS REFLECTIVE? Or that the resistors in the electronics get smaller in value in some of the stations while they get greater in value in an equal number of others?
“the more they resemble a set of distributed values.”
The problem is that *YOU* assume that distribution is always random and Gaussian and therefore it all cancels. In the real world the distribution is almost always asymmetric and is quite likely to be severely skewed. Again, paint almost always gets less reflective from sun exposure and electronic components almost always expand from heat exposure. Very seldom does paint get more reflective from sun exposure and very seldom do electronic components shrink from heat exposure. And this applies to almost *everything* in the microclimate. Trees usually get taller and larger, increasing shading and decreasing wind velocity. Insect infestations usually grow and not shrink so their detritus gets larger and not smaller. And on and on and on ……
He has zero comprehension of bias and calibration in measurements. But he does have a large hat size.
You can deflect and propose red herrings all you want. Here is a graphic of “arrow hits” on a target.
See that red dot? That is indicative of the standard deviation of the mean. That is the SD/√n. You pick the SD and n you would like. I chose 5 and n=1000000. That makes the SDOM 5/√1000000.
What you and others keep proposing is that the measured value (let’s make it zero) is something like “0 ± 0.005”. In other words, you could shoot a fly off a wire at 50 yards EVERY TIME.
When in reality the dispersion of values that can be attributed to the mean makes the measurement uncertainty “0 ± 5”. Which means you would be lucky to EVER HIT THE FLY.
Powerball lucky.
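The arithmetic behind the red-dot description, using the SD of 5 and n of 1,000,000 chosen in the comment above:
```python
import math

sd, n = 5.0, 1_000_000
print(sd / math.sqrt(n))   # 0.005 : the standard deviation of the mean described above
```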
So close! Errors under the correct conditions tend to cancel. The cancellation can only take place with a known true value and repeatable conditions.
Show us how these conditions are met when measuring temperatures.
One also needs to know the true value in order to calculate the errors.
This is why NIST TN 1900 used standard deviations to calculate uncertainty. Using this method, the dispersion of measurements around the mean does not cancel. Instead the dispersion provides the opportunity to determine a probability distribution that provides statistical parameters describing the dispersion.
When you are dealing with uncertainty, that is, standard deviations, uncertainties ADD, ALWAYS.
You can’t mix errors and uncertainty, they are not the same. Errors require knowing a true value which is seldom if ever accurately known. It is why science and industry have moved to uncertainty.
He’s been told this many, many times but still can’t grasp the concept.
The sample SD can only be ASSUMED to be the population SD. Every time you make an assumption you add measurement uncertainty, probably Type B. For instance, adding northern hemisphere temps to southern hemisphere temps creates a multi-modal distribution, cold temps with higher variance to warm temps with lower variance. Pull a sample from that distribution and how closely will the sample SD be to the population SD?
This is a big problem with the climate models. Every time they *assume* a parameter for something it *is* a GUESS. That adds measurement uncertainty which they totally ignore. If that parameter is off even a little bit in an iterative process it will accumulate at every iteration. But climate science either blows this off or they set artificial bounds to keep the model from blowing up because of the accumulated uncertainty caused by the guess.
Resolution is only a piece of the measurement uncertainty. It is not the total. Like you say, you need to look at *all* the sources of uncertainty. That is handled by ISO by requiring a measurement uncertainty budget that is detailed.
That’s sampling uncertainty, not measurement uncertainty.
Certainly, sample selection and size come into play. The sample SD is only an estimator of the population SD, and some animals are more equal than others.
The first step is to identify sources of uncertainty. The second is to identify their order of magnitude. Resolution is fixed, so resolution uncertainty should always be propagated as is.
“That’s sampling uncertainty, not measurement uncertainty.”
Sampling uncertainty is an additive factor to measurement uncertainty of the population.
“The first step is to identify sources of uncertainty”
Sampling uncertainty is a source of uncertainty – “measurement” uncertainty. Sampling uncertainty means you have an uncertainty as to what the mean is. That’s no different than having an uncertainty over the measurement itself. Do they add directly? Probably not. But they do add.
Resolution uncertainty *is* one factor, but it’s not the only factor. And, as I’ve pointed out before, I have a frequency counter in front of me whose resolution is far greater than its accuracy. Quoting a frequency out to 8 digits when the accuracy is only good to the sixth digit is perpetrating a fraud on anyone depending on the measurement for their own purpose.
That was the point.
It’s not the only factor, but it does have to be directly additive rather than in quadrature.
That is one scenario. We also have the case of high quality 0.001″ micrometers whose accuracy is at least as good as their resolution (e.g. Mitutoyo 103-178).
Now let’s measure big end journals ground by different operators on different grinders over a period of 1 week to see if they are within their spec of 1.4375″ to 1.4380″.
Repeat with a 0.0001″ micrometer (e.g. Mitutoyo 103-132)
“I’m not sure you’ve correctly reproduced what “the experts here” have actually said”
The “experts” change their story every time – but it always comes back to the idea that the measurement uncertainty of the average is the same as the uncertainty of the sum. As you add multiple things the uncertainty of the sum grows, and therefore so does the uncertainty of the average.
This was the first of Tim’s comments I replied to, almost 4 years ago.
https://wattsupwiththat.com/2021/02/24/crowd-sourcing-a-crucible/#comment-3193098
I pointed out he was mixing up the uncertainty of the sum with that of the mean and suggested the uncertainty of the mean would be 5C / 100. And as always happens the discussion threads all over the place, but here we have Tim say
https://wattsupwiththat.com/2021/02/24/crowd-sourcing-a-crucible/#comment-3193241
And then when I try to pin him down and ask
He says
https://wattsupwiththat.com/2021/02/24/crowd-sourcing-a-crucible/#comment-3193339
And follows up with the first of many occasions where he fails to understand what a partial derivative is.
“This absolute loss of information represents a corresponding increase in your total uncertainty about the whole set of numbers that can only grow larger as the number of individual items in the set is increased. Hence, “uncertainty grows the more you average”.”
What has that got to do with the measurement uncertainty of an average? If I add up or average a large number of exact values, say all my bills for the month, I’ve “lost” the information of each individual value, but I can still say the sum has no uncertainty.
This is just more evidence that you’ve NEVER bothered to actually study Taylor’s tome. You are trying to equate a *count* with a measurement. A count of objects is *NOT* a measurement, it is a *count*. Counts have no measurement uncertainty unless there is uncertainty in your counting methodology.
But he can beat you over the head with partial differentiation.
He doesn’t understand relative uncertainty at all. When I told him the exponent of a component in a functional relationship was solely a sensitivity/weighting factor for the uncertainty of that component and the other components didn’t apply – he accused me of not knowing how to do partial derivatives. Not realizing that the exponent *is* the partial derivative of the component under examination!
I don’t think he realizes even now why components with exponents greater than 1 become larger contributors to measurement uncertainty of the functional relationship than those with an exponent of 1.
Of course he doesn’t realize it, all he thinks about is sum[Xi]/N.
It is truly the only tool in the climatology toolbox.
And both of them continue to push this partial derivative crap.
“When I told him the exponent of a component in a functional relationship was solely a sensitivity/weighting factor for the uncertainty of that component and the other components didn’t apply – he accused me of not knowing how to do partial derivatives”
You’ve had ample opportunity to demonstrate you do understand them.
“I don’t think he realizes even now why components with exponents greater than 1 become larger contributors to measurement uncertainty of the functional relationship than those with an exponent of 1.”
If f = x^2, then ∂f/∂x = 2x. Is that easy enough for you to understand?
But this does not necessarily mean it will be a larger contributor than one with an exponent of 1. That will depend on the size of x.
Say, f = x^2 + y. Then ∂f/∂x = 2x, ∂f/∂y = 1
u(f)^2 = (2x)^2u(x)^2 + u(y)^2
If x is 0.1, then
u(f)^2 = 0.04u(x)^2 + u(y)^2
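A quick symbolic check of this example, f = x² + y, ground through the general propagation formula (sympy is used here only to do the algebra):
```python
import sympy as sp

x, y, ux, uy = sp.symbols('x y u_x u_y', positive=True)
f = x**2 + y

# General propagation: u(f)^2 = (df/dx)^2 u(x)^2 + (df/dy)^2 u(y)^2
uf2 = sp.diff(f, x)**2 * ux**2 + sp.diff(f, y)**2 * uy**2
print(uf2)                              # 4*x**2*u_x**2 + u_y**2
print(uf2.subs(x, sp.Rational(1, 10)))  # u_x**2/25 + u_y**2, i.e. 0.04*u(x)^2 + u(y)^2
```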
Dufus!
Well argued. I’ll have to check with an expert if 0.04 is really larger than 1.
“ f = x^2 + y”
When are you going to learn the difference between a sum and a product?
What has that got to do with your point? You keep trying to bring the discussion to products and then using that to distract from the average which is a sum not a product. You still haven’t answered what you think ∂f/∂x is when f = (x + y) / 2.
But back to your claim. Let f = xy². Then ∂f/∂x = y², ∂f/∂y = 2xy.
Is the y component a bigger contributor to the overall uncertainty? That depends on the values of y and x. If x = 1 and y = 10, then the weight for the x component is 100, and for y it’s 20. So in that case, no it isn’t.
This should be clear when you translate it to relative uncertainties.
[u(f)/f]² = [u(x)/x]² + [2u(y)/y]²
If y is more than twice x, then the uncertainty of y has less of an impact than the uncertainty of x.
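And the same symbolic check for f = xy², which reproduces the relative-uncertainty form quoted just above:
```python
import sympy as sp

x, y, ux, uy = sp.symbols('x y u_x u_y', positive=True)
f = x * y**2

uf2 = sp.diff(f, x)**2 * ux**2 + sp.diff(f, y)**2 * uy**2
print(sp.simplify(uf2 / f**2))   # u_x**2/x**2 + 4*u_y**2/y**2, i.e. [u(x)/x]^2 + [2u(y)/y]^2
```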
f = (x+y)/2 ==> x/2 + y/2
x/2 is a quotient. Use relative uncertainty and you get
u(x)/x + u(2)/2 for the first term.
Since u(2)/2 = 0 because u(2) = 0 you wind up with u(x)/x for the first term.
Similarly for the second term, y, you get u(y)/y.
So u(f)/f = u(x)/x + u(y)/y
“But back to your claim. Let f = xy². Then ∂f/∂x = y², ∂f/∂y = 2xy.”
f = xy^2
This is a product so you must use relative uncertainty.
∂f/∂x = y^2, ∂f/∂y = 2xy
Divide by xy^2 to get the relative uncertainties and you wind up with
(y^2)u(x) / (xy^2) ==> u(x)/x
and
(2xy)u(y) / (xy^2) ==> 2u(y)/y
So u(f)/f = u(x)/x + 2u(y)/y
The exponent of y becomes a weighting factor for the RELATIVE uncertainty of y.
This is exactly what Possolo does for the uncertainty of a barrel.
Your problem is that you can’t do simple algebraic division in your head and you have no idea of when relative uncertainty applies.
Instead of claiming I can’t do simple partial derivatives (which I did in my head for the discussion of Possolo’s example), LEARN BASIC METROLOGY.
Stop coming on here and trying to lecture us on metrology. You can’t even identify simple sums/differences from products/quotients. Nor do you understand basic algebra. x^2 *is* (x) * (x), and y^2 is (y)*(y).
x * x is a product. You use relative uncertainty. y*y is a product and you use relative uncertainty. u(x*x)/(x*x) = u(x)/x + u(x)/x = 2u(x)/x.
“So u(f)/f = u(x)/x + u(y)/y”
Wrong. You are adding two components. When you add you have to use absolute uncertainties.
It would be a lot easier for you if you did the adding part first, then divided by two.
“This is a product so you must use relative uncertainty.”
That’s enough banging my head against an exceptionally thick brick wall for the time being. If you still can’t see you are confusing two different equations, despite all my explanations, then you never will. You are just too motivated to get the wrong answer.
(x+y)/2 IS A QUOTIENT!
Like I said, you can’t even tell addition/subtraction from product/quotient. Let alone do the algebra associated with each!
It’s not just a quotient; it’s an addition and a quotient. What do you think x + y means? You have to break it up and do each part separately using the specific rules, just as Taylor explains. Or you use the general equation with the correct partial derivatives.
It’s so telling that you can’t address this simple point, whilst insulting my algebraic skills.
I demonstrated this to you before and you didn’t understand then and you probably won’t now.
Dr. Taylor in Section 3.8, Page 66, says:
You want to define a model of y=f(x)=X1, where X1=(x1/n+x2/n+ …+xn/n).
X1 is a random variable and the mean is defined as x_best. This is a sequence of sums and quotients. The sequence breaks down into:
u(y)/y = √[{(∂f/∂x1)(u(x1)/x1)} + {(∂f/∂n)(u(n)/xn)} + …]
As has been pointed out from many sources, constants and counting numbers have no uncertainty. Consequently, u(n) = 0 and the term drops out.
You refuse to recognize that you must break your function into its component parts to evaluate the uncertainty.
As to your example: you are basically using an example we have shown you from a publication of Possolo & Meija where the volume of a cylinder is analyzed.
Lest you forget, the “2” used as the weighting factor of the radius uncertainty originates from R². Hmmm, funny how you just pick up on something that has no relation to what is being discussed.
You keep making the same mistakes, confusing the general rule and the specific rules.
“You want to define a model of y=f(x)=X1, where X1=(x1/n+x2/n+ …+xn/n).”
And there you go again, overcomplicating things. y = (x1/n+x2/n+ …+xn/n). No need to add an extra variable with a confusing name.
“X1 is a random variable and the mean is defined as x_best.”
Again confusing different things. If we are only talking about the measurement uncertainty of an exact average, the result of the average is not random. If you want to say this is a random sample you can, in which case we will end up with the SEM. But this was supposed to be about measurement uncertainty.
“This is a sequence of sums and quotients.”
It is, but you don’t care about that if using the general equation. Just put the correct partial derivative against each uncertainty term. And in this case each term has a derivative of 1/n.
“u(y)/y = √[{(∂f/∂x1)(u(x1)/x1)} + {(∂f/∂n)(u(n)/xn)} + …]”
Wrong. What is so difficult about just looking at the equation and putting the correct values into it? There is no divide by xi anywhere in the equation.
“Consequently, u(n) = 0 and the term drops out.”
What is ∂f/∂x1?
“You are basically using an example we have shown you from a publication of Possolo & Meija where the volume of a cylinder is analyzed.”
No I am not. Please try to understand that you can use the general equation with any function. You do not have to keep referring back to one specific example.
Say, f = x^2 + y.
The uncertainty of a component with a power, e.g. x^n is
u(q)/q = n * u(x)/x
The uncertainty is *NOT* (2x)^2 * u(x)^2
The uncertainty is 2u(x)/x
Once again, you have not studied Taylor at all.
LEARN WHEN YOU HAVE TO USE RELATIVE UNCERTAINTY!
x^2 is (x * x). That is a product. The uncertainty of a product must be done in relative uncertainty. The uncertainty becomes (1)u(x)/x + (1)u(x)/x ==> 2u(x)/x
STOP CHERRY PICKING AND ACTUALLY STUDY THE LITERATURE!
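As a neutral numeric aside on the power rule being argued over here (hypothetical values; a single variable, so no independence question arises), both routes give the same absolute uncertainty for q = x²:

# q = x^2, a single measured variable.
# Derivative route:           u(q) = |dq/dx| u(x) = 2x u(x)
# Relative-uncertainty route: u(q)/q = 2 u(x)/x  (Taylor's power rule)
x, u_x = 5.0, 0.1                        # hypothetical values for illustration

q = x ** 2
u_via_derivative = 2 * x * u_x           # 1.0
u_via_relative = q * (2 * u_x / x)       # 25 * 0.04 = 1.0
print(u_via_derivative, u_via_relative)  # identical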
Stop writing everything in bold all caps. Try to read what I said. Stop torturing us both.
“U(q)/q = n * u(x)/x”
Correct. But you dropped the y term. Try resolving it using both terms; you cannot just divide through by q.
“The uncertainty is *NOT* (2x)^2 * u(x)^2”
Never said it is. But
u(q)² = (2x)² u(x)² + u(y)²
Is a correct equation for the function I described. Using the specific rules you have to convert any relative uncertainty back to an absolute uncertainty in order to add the y term.
You *still* don’t understand. When you have a functional relationship like
q = (x+y)/2 YOU DON’T KNOW the value of q. You don’t measure q. You measure x and y. It’s only after you have measured x and y and calculated q that you know its value.
That means that in u(q)^2/q^2 = (u(x)/x)^2 + (2u(y)/y)^2 you have to evaluate the right-hand side before you can know u(q).
It’s why Taylor shows u(q)/q = u(x)/x for the relationship q = Bx.
In that relationship you actually measure q. The uncertainty you can find is u(q)/q. You don’t know x and therefore can’t know u(x). So once you know q and u(q) you can find x and u(x) by dividing q and u(q) by B.
The other thing relative uncertainty provides is the ability to find the uncertainty when the components have different dimensions.
E.g. pv = nRT. n, R, and T have different dimensions. You can’t directly add their uncertainties. But you *can* directly add their relative uncertainties.
(u(pv)/pv)^2 = (u(n)/n)^2 + (u(T)/T)^2 (since R is a constant it has no uncertainty)
In fact, this is a second reason why you must use relative uncertainty for V = πHR^2. H is in meters and R^2 is in m^2. Different units so adding u(H) and u(R^2) together doesn’t match dimensions. But you *can* split the equation into πHRR and find u(H)/H and u(R)/R, which leads to uncertainty terms of u(H)/H and 2u(R)/R, where the dimensions all match.
Admittedly the GUM doesn’t elucidate this very well but Taylor does. The process goes:
-Measure the component values
-Determine their relative uncertainties in percentages
-Add the relative uncertainties to get a total percentage
-Calculate the functional relationship to get an absolute value
-Multiply that calculated value by the relative uncertainty percentage to get an absolute value.
It’s the whole reason for seeing values like volume given as “stated value +/- percentage”. E.g. 1% of FullScaleReading on a voltmeter. You already *KNOW* the 1% measurement uncertainty and the full scale reading *before* you know the actual voltmeter reading. The absolute total uncertainty doesn’t even appear until *after* you have calculated the relative uncertainty.
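A small sketch of the full-scale-spec point, with hypothetical meter numbers (not taken from any cited source):

# A percent-of-full-scale accuracy spec is known before any reading is taken.
# Hypothetical meter: 10 V full scale, accuracy 1 % of full scale.
full_scale = 10.0                  # volts (hypothetical)
spec_fraction = 0.01               # 1 % of full scale (hypothetical)

u_abs = spec_fraction * full_scale           # 0.1 V regardless of the reading
reading = 3.2                                # volts, a hypothetical observed value
print(f"{reading} V +/- {u_abs} V")          # relative uncertainty at this reading ~3.1 %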
Eq 10 only gives you the absolute value of the uncertainty. But to get there you first have to find the relative uncertainties! It’s why Possolo shows the uncertainty of the volume of a barrel as u(V)/V being made up of u(H)/H and 2u(R)/R. You can’t find u(V) until you know V. By the time you know V you had better already know the total relative uncertainty. And the weighting factor for each of the component relative uncertainties does *not* come from the partial derivative. It is a weighting factor determined by the exponent of the component! As I showed, it is derived by converting x^2 into (x * x) and finding the relative uncertainty of u(x)/x and adding it in twice to the total relative uncertainty value. The partial derivative only serves to convert the relative uncertainty into an absolute uncertainty. BUT YOU NEED TO KNOW THE RELATIVE UNCERTAINTY FIRST!
For the umpteenth time, if you would actually STUDY Taylor instead of just cherry picking things you think confirm your misconceptions you would know this.
But you won’t learn. You’ve proven that over and over.
Numbers Is Numbers, so he won’t understand dimensional analysis. Instead he’ll accuse you of not knowing how to add.
Ain’t this the truth.
Good grief. It’s Christmas Eve, and I’m not wasting my time going down these endless stream-of-consciousness comments all day. This whole rant is just the usual Gorman distraction. It’s a simple question about whether x + y is addition or division. Tim won’t answer that because I suspect he knows deep down what the result will be – so instead we get this incoherent and generally wrong ramble.
I’ll just address a couple of points:
“In that relationship you actually measure q. The uncertainty you can find is u(q)/q. You don’t know x and therefore can’t know u(x). So once you know q and u(q) you can find x and u(x) by dividing q and u(q) by B.”
This from someone whose favorite insult is that I can’t read.
The verbatim quote from Taylor (3.9):
Somehow Tim reads this as you are measuring q in order to calculate x.
And the thing is, I have no idea what purpose Tim thinks this continual misreading serves. It makes no difference.
“In fact, this is a second reason why you must use relative uncertainty for V = πHR^2. H is in meters and R^2 is in m^2. Different units so adding u(H) and u(R^2) together doesn’t match dimensions.”
No. You are not adding u(H) to u(R), let alone to u(R^2). You are multiplying each term by the partial derivative. Ignoring the squaring and the constants, you have
R^2 * u(H) and HR * u(R)
R is in meters, H is in Meters, u(R) is in meters, u(H) is in meters
R^2 * u(H) is in m^2 ✕ m = m^3
HR * u(R) is in m ✕ m ✕ m = m^3
Both are the same dimension as volume.
“Somehow Tim reads this as you are measuring q in order to calculate x.”
ROFL!!
As usual, your lack of reading comprehension skills is showing again!
From Taylor: “or we might measure the thickness T of 200 identical sheets of paper and calculate the thickness of a single sheet as t = (1/200) x T.”
Taylor: “This rule is especially useful in measuring something inconveniently small but available many times over, such as the thickness of a sheet of paper or the time for a revolution of a rapidly spinning wheel.”
This is *EXACTLY* what Taylor is doing. q is the thickness of 200 sheets of paper with a defined measurement uncertainty.
In his example he gives T = 1.3 +/- 0.1 inches (i.e. the thickness of a stack of 200 sheets of paper).
Taylor: “It immediately follows that the thickness t of a single sheet is
t = (1/200) x T ==> 0.0065 +/- 0.0005 inches”
The relative uncertainty of T is 8%. The relative uncertainty of t is 8%. Both exactly the same.
STOP CHERRY PICKING!
You *never* get anything right from your cherry picking!
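For what it’s worth, the figures quoted from Taylor can be checked in a couple of lines (the 1.3, 0.1, and 200 come from the quote; the rest is arithmetic, and it holds whichever symbol you attach to the stack):

# Taylor's paper-stack figures: T = 1.3 +/- 0.1 in for 200 sheets, t = T/200.
T, u_T, n = 1.3, 0.1, 200

t = T / n
u_t = u_T / n                      # the exact divisor scales the absolute uncertainty too
print(t, u_t)                      # 0.0065, 0.0005
print(u_T / T, u_t / t)            # both ~0.077 (about 8 %): the relative uncertainty is unchanged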
“You are multiplying each term by the partial derivative”
I’ve shown you multiple times now how this is wrong, so wrong. You refuse to learn. Taylor covers this in Section 3.4 on Page 55 in his second edition.
Rule 3.10
if q = x^n
u(q)/q = n (u(x)/x)
He shows the same thing in Section 3.7 in Rule 3.26.
This can be derived quite easily by breaking a variable with an exponent into its component parts, finding the relative uncertainty of each component part, and then summing them.
x^n = x * x * … *x with n x’s.
The uncertainty then becomes the sum of “n” u(x)/x values which equals
n * u(x)/x
STOP CHERRY PICKING!
Actually study up on the literature!
“This is *EXACTLY* what Taylor is doing. q is the thickness of 200 sheets of paper with a defined measurement uncertainty.”
Sigh. X is the stack, q is the single sheet, and 1/200 is the exact number you use to multiply X to get q. Why you even think this matters is a mystery. My guess is you never got to the stage of understanding you can multiply something by a fraction. You still think multiplication is just repeated addition.
Damn man, put down the bottle!
I gave you the quotes from Taylor. He is measuring a stack of 200 sheets of paper!
All he does is substitute T for q and t for x.
Your atrocious lack of reading comprehension is showing again!
Happy Christmas to you too.
But really, why do you think T, the measured stack of paper, is substituted for q, rather than x, which is explicitly described as the measured value? Please try to answer without resorting to your childish insults.
“But really, why do you think T, the measured stack of paper, is substituted for q, rather than x, which is explicitly described as the measured value? Please try to answer without resorting to your childish insults.”
Because that is EXPLICITLY what Taylor says. At the very start of the example he states:
“For example, we might measure the diameter of a circle and then calculate its circumference, c = π x d; or we might measure the thickness T of 200 identical sheets of paper and then calculate the thickness of a single sheet as t = (1/200) x T.” (bolding mine, tpg)
t = (1/200)T ==> T = 200t i.e. same as the equation q = Bx where “T” and “q” are the same and “t” and “x” are the same thing.
Like I keep saying, you have an absolute problem with basic algebra!
Taylor: “Because δB = 0, this implies that
δq/q = δx/x”
This equation works both ways! You can find the uncertainty in q and the value of q and then know the relative uncertainty of the component pieces! Or you can find the relative uncertainty of the component pieces and know the relative uncertainty of the total!
Which you do is based on your ability to determine “q” and “x”. It’s a lot easier to find “q” (i.e. “T”) of a stack of paper than it is to find “x” (i.e. “t”) of a single sheet of paper! But this is also based on the assumption that all of the sheets of paper are exactly the same – i.e. your “average” value applies to each component. If that stack of paper is made up of card stock, #20lb copy paper, and construction paper that assumption will not apply! This is why you must also know the assumptions that Possolo makes in TN1900 – which you *NEVER* bother to list out and therefore always get it wrong when you try cherry picking from that example.
Me: “Please try to answer without resorting to your childish insults.”
Tim: “Like I keep saying, you have an absolute problem with basic algebra!”
Oh well.
“Because that is EXPLICITLY what Taylor says. At the very start of the example he states:”
You have an odd definition of “explicitly”.
“or we might measure the thickness T of 200 identical sheets of paper and then calculate the thickness of a single sheet as t = (1/200) x T.”
T is the measured thickness of 200 sheets. Implicitly that is x in the equation. t is the value calculated by multiplying the measured value by 1/200, implicitly making it q in the equation.
“t = (1/200)T”
The same form as
q = Bx.
“==> T = 200t”
Of course that relationship is implicit in the equation. It just makes no sense to relate it to the equation Taylor states for the case, where x is the measured value and q is Bx.
“i.e. same as the equation q = Bx where “T” and “q” are the same and “t” and “x” are the same thing.”
Only if you ignore the explicit assumption that x is the measured value and q is the calculated value.
So, now what do you do with the uncertainty calculation
δq = |B|δx
Using your reformulation, you don’t know δx, the uncertainty of a single sheet of paper, but you do know δq. So you will have to transpose the equation to
δx = (1/|B|)δq
which is just getting you back to where you would have been if you’d just used Taylor’s actual definitions to start with.
“It’s a lot easier to find “q” (i.e. “T”) of a stack of paper than it is to find “x” (i.e. “t”)”
It’s only easier because you made it harder in the first place by switching the roles of x and q. It’s just as easy to find “x” (i.e. “T”) of a stack of paper, and then find “q” (i.e. “t”).
I just don’t get what point you think you are making, or why you dig your heels in every time I point out you are mixing up the variables.
“This is why you must also know the assumptions that Possolo makes in TN1900 – which you *NEVER* bother to list out and therefore always get it wrong when you try cherry picking from that example.”
A fine lie, coming from someone who won’t even acknowledge the assumption Taylor is making, that x is the measured value and q the calculated value.
Here’s one of many times I’ve listed out the assumptions in TN1900 Example 2.
https://wattsupwiththat.com/2024/07/14/the-hottest-june-on-record/#comment-3941335
You haven’t listed them as stated.
This doesn’t even make sense. Show the text that you interpreted to say a “true average”.
You do realize the average or mean of the 22 days IS the actual average of the 22 Tmax temperatures.
You just continue to make assertions without actually showing quotes to support them. It makes you look silly.
Here is what the document actually says.
This does not say “random error”. It says “random variable”, not random error. Random error is YOUR interpretation.
Nor does the example say that the “random errors” cancel. In fact, they use the mean and standard deviation as the statistical parameters of a probability distribution to describe the uncertainty.
“You haven’t listed them as stated.”
That’s because I prefer to think for myself rather than do what you keep doing, regurgitating chunks of text without understanding their meaning. Why keep asking me to list the assumptions if you don’t want me to demonstrate that I understand what they mean?
“This doesn’t even make sense.”
Point 6 is saying what you already quoted.
I put “true” in quotation marks if that’s what you are quibbling about. I mean true in the sense that this is the definition of the measurand being measured – rather than say the actual average of daily temperatures.
“You do realize the average or mean of the 22 days IS the actual average of the 22 Tmax temperatures.”
Not in this exercise it’s not. That’s just the estimate (or measurement) of the actual average. As you quote:
“Here is what the document actually says.”
Which is saying what I said. εi is the error term. It’s considered to be a random variable in the model. What do you think “measurement error model” means?
“Nor does the example say that the “random errors” cancel.”
What do you think happens when you average random variables?
“In fact, they use the mean and standard deviation as the statistical parameters of a probability distribution to describe the uncertainty.”
Why do you think they divide that standard deviation by √22?
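For readers without the document to hand, a sketch of the arithmetic being referred to, using hypothetical daily values rather than the actual NIST data:

import math, statistics

# Sketch of a TN1900 Example-2 style calculation using HYPOTHETICAL daily Tmax
# values (the actual NIST data are not reproduced here). Under that model the
# standard uncertainty of the monthly mean is s / sqrt(n), expanded by a
# Student-t coverage factor; whether the model's assumptions hold is the point
# being argued in this thread.
tmax = [24.1, 25.3, 23.8, 26.0, 25.1, 24.7, 23.9, 25.6, 24.4, 25.0, 26.2,
        24.8, 25.4, 23.7, 24.9, 25.8, 24.2, 25.2, 24.6, 25.5, 24.0, 25.7]

n = len(tmax)                         # 22 observations
mean = statistics.mean(tmax)
s = statistics.stdev(tmax)            # sample standard deviation
u_mean = s / math.sqrt(n)             # standard uncertainty of the mean under the model
k = 2.08                              # approx. 95 % t coverage factor for 21 degrees of freedom
print(mean, u_mean, k * u_mean)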
It is not an error term. Here is what the example in NIST TN 1900 says.
In other words, εi defines a probability distribution surrounding τ. In addition, each εi is assumed to be a random variable itself. Measurement uncertainty under the GUM guidelines uses standard deviations, not cancelation of error terms.
You probably have no idea why this statement is made. Because each “ti = τ + εi” is considered to be a sample with its own distribution. That allows one to justify using the standard error of the mean as a measure of how well this distribution estimates the population mean. As Dr. Taylor says in his book:
This is one reason I have a problem with NIST TN 1900 Example 2. The same quantity is not being measured each time. NIST has detailed that the 95% interval for where the mean may lie expands the SDOM by a “k” factor in order to better explain what the uncertainty is. One should remember that this is an example and not a detailed description of how to determine the entire uncertainty.
Also you will note that the actual uncertainty is more than the expanded SDOM. It also should include the measurement uncertainty of each measurement in the series.
Read GUM carefully and as many times as it takes to understand what it is saying.
You are not willing to learn metrology. Your emphasis on errors is misplaced and has no value in today’s world.
In the past, one could have a series of readings and assume they consisted of:
x+1, x-1, x+2, x-2, …, x+n, x-n
where x is the true value and all errors canceled. That is no longer used and should be forgotten.
The current usage is an estimated value with an interval surrounding the estimated value determined by a standard deviation value dependent upon the probability distribution the measurements create. There is no cancelation, none whatsoever. There is always an interval describing the dispersion of readings that can be attributed to the mean and that is known as the measurement uncertainty.
“It is not an error term.”
TN1900
And,
“Because each “ti = τ + εi” is considered to be a sample with its own distribution.”
No. Each daily measurement is assumed to come from an identical distribution. Why do you think the quote from Taylor says they all have the same width?
“It must first be asked, “To what extent are the repeated observations completely independent repetitions of the measurement procedure?””
Which depends on how you are defining the measurand. In this case the measurand is the average temperature for that station for that month, and the daily temperatures are independent repetitions of the same measurand.
If you wanted to claim this was the average temperature for all months, or for all of the country, your answer would be that they are not independent repetitions as they are all taken from the same month and the same location.
I suspect you are reading more into that paragraph than is intended, but as you just keep cutting and pasting it, without explaining what you think it means, we will never know.
“where x is the true value and all errors canceled.”
Nobody has ever said that all errors cancel. That’s just your usual straw man argument.
“There is no cancelation, none whatsoever.”
Any reference for that?
Again, why do you think TN1900 Ex 2, divides the standard deviation by √N?
“There is always an interval describing the dispersion of readings that can be attributed to the mean and that is known as the measurement uncertainty.“
No wonder Bellman is confused. He thinks an average is a measurand, but it’s not—it’s just a calculation. The measurand is the actual quantity being measured, in this case, air temperature.
He’s correct that daily temperatures are independent, but they aren’t ‘repetitions’ because they aren’t measured under identical conditions. Each measurement has its own distinct distribution instead.
Actually, daily temperatures measurements aren’t always independent. For example, after a snowstorm, the snow on the ground can influence temperature readings in the following days, creating a dependency between measurements.
Averaging these values without accounting for this dependence would violate the independence assumption of the central limit theorem. Climate science doesn’t care though.
Where did you learn to read? From NIST TN 1900:
“εi” is a random variable with 22 observations
ε1 is a random variable,
ε2 is a random variable,
…
ε22 is a random variable.
Where
μ1 = μ2 = … = μ22 and
σ1 = σ2 = … = σ22
This is exactly what Dr. Taylor says.
Did you forget the other requirement from Dr. Taylor?
Quit cherry picking things you have no knowledge about. It is obvious you have not spent time or effort to study uncertainty in measurements. Your opinions are missing vital information.
Why didn’t you address the entirety of the sampling in your response? You only assert that you are measuring the same measurand.
Let’s deal with the intricacies.
A single sample, i.e., the monthly average
The property is the variation in temperature.
A given specimen is a single temperature.
If I am testing the hardness of a 12 inch Grade 8 bolt, I test several locations, both around the bolt and along its length. This is exactly the property of a monthly average. When computing the average hardness I must not only find the uncertainty in the separate measurements, but also in each measurement.
If this sounds familiar, think about the uncertainty involved with the linear regressions you trumpet.
“Where
μ1 = μ2 = … = μ22 and
σ1 = σ2 = … = σ22”
Correct – that’s why they are identically distributed.
“Quit cherry picking things you have no knowledge about.”
Stop resorting to these ridiculous jibes. It really doesn’t help your case.
“You only assert that you are measuring the same measurand.”
That’s the way TN1900 treats it.
“Let’s deal with the intricacies.”
You’re back to GUM F.1.1.2, I take it.
The point of F.1.1.2 is that you can take, say, a lump of a material. Measure some component of it 22 times and think you have 22 independent observations. But if you are trying to measure a quantity of the material in general rather than this specific lump – then your observations are not truly independent because they all depend on that particular lump.
“A single sample, i.e., the monthly average”
Yes, this particular May month. That’s why you can’t assume you have 22 independent observations of May temperatures in general.
“The property is the variation in temperature.”
No. The property is the average temperature. Remember, that’s the measurand.
“A given specimen is a single temperature.”
No it isn’t. If you are going to try to interpret the NIST example through this particular GUM section, you take the month NIST are measuring as the “single sample” that is being used for the experiment. The daily values are the “sampling that is part of the measurement procedure.”
I do think the GUM is poorly written here. They use sample and sampling in two different ways in the same sentence.
“If I am testing the hardness of a 12 inch Grade 8 bolt, I test several locations both around and the length. This is exactly the property of a monthly average.”
“exactly” is doing some heavy lifting there.
“When computing the average hardness I must not only find the uncertainty in the separate measurements, but also in each measurement.”
That’s not at all what F1.1.2 is saying. And what on earth do you mean by both the separate measurements, and each measurement?
“If this sounds familiar”
It sounds like you’re confused about something, if that’s what you mean.
As so often you go on all these tangents, rather than stick to the point, if indeed there ever was one.
LOL. The temperatures are all the same? Regardless of how you normalize them, you cannot get the same means out of 20-some different means.
The uncertainties, i.e., the standard deviations of each measurement, are the same? I know NIST assumes this for instructive purposes, but for you to argue that applies to real world analyses is ridiculous.
Tell you what, take a month’s temperatures from where you live and do an analysis just like NIST has done in TN 1900 and post it.
“The temperatures are all the same?”
It would be so much less work for you, to just accept that you don’t know what these terms mean.
Each temperature recorded on a specific day is assumed to be taken from a random variable with the same probability distribution – hence identically distributed. If you roll a collection of 20 sided dice, you will get a lot of different numbers, but each number came from an identical distribution.
And in case you forgot, the idea that these are identically distributed is one of the assumptions made in TN1900. It’s not a good assumption as you expect temperatures to warm during May.
Here’s a reference:
https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables
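A trivially small illustration of the dice analogy (a fair 20-sided die, uniform on 1 to 20):

import random

# "Identically distributed" does not mean the outcomes are identical: twenty rolls
# of a fair 20-sided die all differ, yet every roll comes from the same
# distribution (uniform on 1..20).
random.seed(0)
rolls = [random.randint(1, 20) for _ in range(20)]
print(rolls)                 # varied outcomes, one shared distribution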
Me (John Power): “This absolute loss of information represents a corresponding increase in your total uncertainty about the whole set of numbers that can only grow larger as the number of individual items in the set is increased. Hence, ‘uncertainty grows the more you average’.”
You (Bellman): “What has that got to do with the measurement uncertainty of an average? If I add up or average a large number of exact values, say all my bills for the month, I’ve “lost” the information of each individual value, but I can still say the sum has no uncertainty.”
Let’s take this one step at a time.
You: “What has that got to do with the measurement uncertainty of an average?”
Me: If you’ve done your calculation correctly, there can be no measurement uncertainty of an average because an average is the result of a calculation, not a measurement. However, any measurement uncertainty present in the primary data will be automatically carried forward into the results of any calculations that are performed upon that data, such as by averaging, summing, etc..
According to Information theory, the only possible way of eliminating uncertainty from your analysis is to input fresh information. Merely performing internal calculations on the primary data won’t work because it doesn’t input any new information into your analysis.
You: “If I add up or average a large number of exact values, say all my bills for the month, I’ve “lost” the information of each individual value, but I can still say the sum has no uncertainty.”
Me: Indeed you can say that, but only if you are absolutely certain that your bills are correct. Otherwise, you will possess a degree of uncertainty about them and that will feed through into your calculation of the sum.
“an average is the result of a calculation, not a measurement”
True. An average is a statistical descriptor of a distribution, it is not a measurement. You can’t convince statisticians of this apparently.
If you have the entire population then the average is located quite closely, depending on the resolution available anyway. If you have only a sample of the population then sampling theory says you will have uncertainty in locating the population average.
“According to Information theory, the only possible way of eliminating uncertainty from your analysis is to input fresh information.”
It might be possible but it isn’t guaranteed either.
“Indeed you can say that, but only if you are absolutely certain that your bills are correct”
Counts aren’t a distribution. The values aren’t given with an uncertainty interval so there isn’t any way to tell if they are correct or not.
It’s nice to see another poster that has some expertise in measurement. I just love the statistician’s assertion that you can increase instrument resolution and accuracy by averaging. They are all primarily black-board geniuses, not real world makers.
Weaselman dips into the enemies files to demonstrate his intellectual superiority.
To karlo, being able to use the internet, demonstrates your intellectual superiority.
Oh ouch this hurts I’ll have to bow to your great metrological technical abilities now.
“As you add multiple things the uncertainty of the sum grows, and therefore so does the uncertainty of the average.”
I’ve left at least three messages in the threads just today explaining this.
As you add elements to the distribution the variance grows. You don’t seem to have a basic understanding of what variance *is*.
V = Σ(X-u)^2 / n
As you add more X values the sum Σ(X-u)^2 goes up. It goes up faster than n does – at least for measurement situations where you are combining single measurements of different things using different instruments under different conditions.
And V is a direct measure of the uncertainty of the average. You would think that someone familiar with statistical descriptors would understand that the standard deviation is the square root of variance. As variance goes up so does the standard deviation. And the larger the standard deviation is the larger the uncertainty of the average is.
You are so tied into the meme that the standard deviation of the sample means is the measurement uncertainty of the average you can’t conceive of anything else. It’s a statistician’s blind spot since they *never* work with data points of the form “stated value +/- measurement uncertainty” but only of the form “stated value”. You can’t even understand that the samples are made up of uncertain measurements, i.e. stated values +/- measurement uncertainty. That means that the mean of the sample is uncertain, and uncertain based on the propagation of the measurement uncertainty elements of the data points.
You keep claiming you understand this but you *never* demonstrate that you do. Just like you claim you don’t use the meme “all measurement uncertainty is random, Gaussian, and cancels” but that meme appears in simply EVERYTHING you post.
“I pointed out he was mixing up the uncertainty of the sum with that of the mean and suggested the uncertainty of the mean would be 5C / 100.”
5C/100 IS THE AVERAGE UNCERTAINTY! The average uncertainty is *NOT* the uncertainty of the average. All you do in finding the average uncertainty is come up with a common value you can assign to each data element and come up with the total uncertainty.
The GUM explains that a Type A measurement uncertainty is the standard deviation of the measurements – NOT the average measurement uncertainty of the individual measurements.
“3.3.5 The estimated variance u^2 characterizing an uncertainty component obtained from a Type A evaluation is calculated from series of repeated observations and is the familiar statistically estimated variance s^2 (see 4.2). ”
There is *NOTHING* in the GUM about dividing either the variance or the standard deviation by “n” to get the measurement uncertainty of the average.
Even in the case of multiple measurements of the same thing using the same instrument under the same conditions where, if justified, you can assume the average is the best estimate of the measurand’s property you *still* use the standard deviation of the measurement values as the measurement uncertainty of that average, not the average measurement uncertainty.
You keep being told that you need to STUDY metrology using tomes like Taylor’s and the GUM. Yet you continue to do nothing but cherry pick from them those things you think confirm your misconceptions. WHY?
“And follows up with the first of many occasions where he fails to understand what a partial derivative is.”
ROFL!! It was *YOU* that didn’t understand the use of relative uncertainty when evaluating the measurement uncertainty of functional relationships made up of products and/or quotients. You accused me of not understanding partial derivatives and you *still* cling to that misconception even after being shown how the associated elements CANCEL when using relative uncertainty, leaving the exponent of the element under examination as a weighting factor.
You *still* haven’t figured out how Possolo came up with the uncertainty of the radius of a barrel being 4 * u(R)^2/R^2 with no appearance of the variable H! Or how the measurement uncertainty component of H is u(H)^2/H^2 with no appearance of R!
And he still has zero comprehension of the implications of non-random uncertainty, still thinks it is “error”.
he still thinks that *random* contributions to measurement uncertainty is “error”.
Yep!
“I’ve left at least three messages in the threads just today explaining this. ”
That’s your problem. Maybe try for one message that makes sense. And preferably isn’t the length of War and Peace.
First though – can you just say whether you agree or not with the with the comment “As you add multiple things the uncertainty of the sum grows, and therefore so does the uncertainty of the average.”
This is the relevant point, as I was called a liar for suggesting you thought that.
“As you add elements to the distribution the variance grows.”
No it doesn’t – or at least not normally. If by adding elements you mean from a common population, the variance of the sample will tend to the variance of the population. If you are adding elements from different populations, the variance will tend to the variance of the mixture of those distributions.
The only way you will get an increasing variance is if you are adding elements in a systematic way. E.g. start with 10 small values, then add ten large ones. But that is not demonstrating that variance increases with sample size, it’s just a by-product of the order you are adding them.
“You don’t seem to have a basic understanding of what variance *is*.”
Another pathetic put-down. Tim really seems to believe this is the way to win an argument. It often seems he accepts it’s the only way he can win the argument.
“V = Σ(X-u)^2 / n”
Correct.
“As you add more X values the sum Σ(X-u)^2 goes up.”
OK.
“it goes up faster than n does”
Really? Could you provide a mathematical proof of that conjecture?
Apart from the lack of proof, I think you are not saying what you actually mean. Suppose all the values of X are between -1 and +1. X – u will be less than 1, and (X – u)^2 is even smaller. How is it possible that the sum is increasing faster than n, when n increases by 1 each time?
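A small simulation of this specific question, under the stated assumption of independent draws from one fixed population (which is itself part of what is disputed here):

import random, statistics

# Does the sample variance keep growing as n grows? For independent draws from a
# single fixed population it does not: the sum of squared deviations grows roughly
# in step with n, so their ratio settles near the population variance.
# (This assumes i.i.d. draws; a drifting or mixed population is a different situation.)
random.seed(1)
population_sd = 2.0
for n in (10, 100, 1000, 10000):
    xs = [random.gauss(0.0, population_sd) for _ in range(n)]
    print(n, round(statistics.variance(xs), 3))   # hovers near 4.0 rather than increasing with n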
“And V is a direct measure of the uncertainty of the average.”
And the predictable argument by assertion. No it isn’t. Firstly standard uncertainty is the standard deviation, not variance. And secondly, the uncertainty of the average is not the standard deviation of the population. The uncertainty of the mean is the range of values that could reasonably be attributed to the mean. This is given by the SEM, or if you prefer the standard deviation of the mean.
“You would think that someone familiar with statistical descriptors would understand that the standard deviation is the square root of variance.”
And you would be correct to think that. Everyone here understands it. Even you understand it. So why keep repeating it.
“And the larger the standard deviation is the larger the uncertainty of the average is.”
Correct, as long as the sample size is fixed. A sample of size 100, with a standard deviation of 10 will have an uncertainty of the mean of 1. Whereas a sample of size 100 and an SD of 20 will have an uncertainty of 2.
But your premise is still wrong, because you still think that the standard deviation increases with sample size. That is your first mistake. The second being you keep forgetting to divide the SD by root n.
“You are so tied into the meme that the standard deviation of the sample means is the measurement uncertainty of the average”
I really should stop reading as soon as you use the word “meme”. But as usual you are wrong. What I’m saying is the standard error of the mean can be thought of as the uncertainty of the mean. That is not the “measurement” uncertainty, though that will be part of it.
“That means that the mean of the sample is uncertain”
The mean of a sample is uncertain. That’s the whole point of looking at the SEM. Each element in the sample is a random value taken from the population. It will also have more randomness coming from the measurement uncertainty. If you want to add that extra uncertainty directly you can do it using the general equation – but that requires you actually accepting how it works. However it’s not really necessary as the measurement uncertainty will usually be small compared to the sampling uncertainty, and as I keep pointing out, the random measurement uncertainty is already present in the measured data in the form of extra variance.
“Just like you claim you don’t use the meme “all measurement uncertainty is random, Gaussian, and cancels””
And that should really be the end of the conversation.
“5C/100 IS THE AVERAGE UNCERTAINTY! ”
Stop shouting. It’s a sure sign you are wrong. As in this case.
It is not the average uncertainty. You’ve been shouting this nonsense for years, and I’m still not sure you even know what you mean by it. In this case each measurement has an uncertainty of 0.5°C. The average uncertainty can only be 0.5°C. How you think 5/100 = 0.05°C is an average is beyond the understanding of mankind.
“There is NOTHING* in the GUM about dividing either the variance or the standard deviation by “n’ to get the measurement uncertainty of the average.”
Another gem from someone who claims I never read the texts. Not only does he never notice the parts of the GUM which do just that, he’s also never read any of the dozen or so times I’ve pointed it out to him.
4.4.3 equation 5.
It’s literally telling you the variance of the mean (q^bar) is given by dividing the variance of q_k by n.
“ROFL!! It was *YOU* that didn’t understand the use of relative uncertainty when evaluating the measurement uncertainty of functional relationships made up of products and/or quotients.”
You claimed that if u(q)/q = u(x)/x, it means that u(q) = u(x). That’s your argument for why you think the uncertainty of the mean is the uncertainty of the sum. You refuse to accept what Taylor tells you, that a special case is that if q = Bx, where B is an exact value with no uncertainty, then u(q) = |B|u(x). You still fail to see that applying this to an average inevitably means that u(average) = u(sum) / N.
“You accused me of not understanding partial derivatives and you *still* cling to that misconception even after being shown how the cancellation of associated elements CANCELS when using relative uncertainty, leaving the exponent of the element under examination as a weighting factor.”
Typical Gorman diversion. Rather than address the simple question of the partial derivatives of the mean function, you start rabbiting on about cancellations and relative uncertainties which have nothing to do with the case. It’s beyond me how someone claiming to have a partial understanding of calculus is incapable of simply doing the correct partial derivative and plugging it into the correct equation. He only seems to be able to understand an equation by incompatible examples.
Brief question again. If f(x,y) = (x + y) / 2, what is ∂f/∂x?
Hint, the correct answer is 1/2.
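A minimal sketch of the general rule applied to f = (x + y)/2, assuming independent inputs and hypothetical u(x) = u(y) = 0.5; the exact divisor of 2 carries through to the combined uncertainty:

import math

# General propagation rule applied to f = (x + y)/2 with independent inputs:
#   df/dx = df/dy = 1/2, so u(f)^2 = (1/2)^2 u(x)^2 + (1/2)^2 u(y)^2
u_x, u_y = 0.5, 0.5                          # hypothetical equal uncertainties

u_sum = math.sqrt(u_x ** 2 + u_y ** 2)       # ~0.71 for the sum x + y
u_mean = 0.5 * u_sum                         # ~0.35 for the average
print(u_sum, u_mean)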
“You *still* haven’t figured out how Possolo came up with the uncertainty of the radius of a barrel being 4 * u(R)^2/R^2 with no appearance of the variable H!”
Sorry, but you are becoming beyond help. I’ve explained it to you multiple times. Besides, you are looking for the volume of the barrel, not the radius. You already know the uncertainty of the radius. And the uncertainty of the volume does include the uncertainty of H.
Rather than try to mangle the equation until you get the right result, just answer this.
If V = πR²H, what are ∂V/∂R and ∂V/∂H?
If your answers are not
∂V/∂R = 2πRH
and
∂V/∂H = πR²
then I’m going to repeat my accusation that you do not understand calculus.
This is literally all you have — mindless accusations.
He simply does not understand that 2πHR/πHR^2 = 2/R
He’ll never understand Possolo’s example because he can’t do basic algebra!
And Tim doubles down on his lies. It’s so telling.
I’ve pointed out to Tim on multiple occasions that 2πHR/πHR^2 = 2/R. It’s fundamental to understanding why multiplying and dividing quantities simplifies to adding relative uncertainties.
The problem is the way Tim tries to contort it into a claim that therefore the partial derivative of X/n is 1.
Note that whilst he’s happy to just lie about me, he still has avoided answering the simple questions about the specific partial derivatives.
“ You still fail to see that applying this to an average inevitably means that u(average) = u(sum) / N.”
No one is disputing that the average uncertainty is u(sum)/N. What is being disputed is that the average measurement uncertainty is the uncertainty of the average. You’ve been shown over and over that is not the case!
Lay 10 boards with a measurement uncertainty (u1 to u10) end-to-end and your uncertainty for the length will be the sum of the individual uncertainties. u1 + u2 + … + u10. All you do when finding the average uncertainty is making u1 = u2 = … = u10 = (u1+u2+…+u10)/n.
In the real physical world NO ONE CARES about the average value or the average measurement uncertainty. Telling me that the average length of a load of 2″x4″ boards is X does *NOT* allow me to construct a beam from a selection of them to span a foundation and be assured that the beam will actually span the distance! Telling me that the average measurement uncertainty is u(X) won’t help either. The bean counters may be able to use the average length to calculate the number of board-feet to be charged for but, again, that is not the PHYSICAL real world.
You keep ignoring what Taylor says about q = Bx!
“That is, the fractional uncertainty in q = Bx (with B known exactly) is the same as that in x.”
Since this equation is a product you *must* use fractional uncertainty.
Taylor says: “According to the rule (3.8), the fractional uncertainty in q = Bx is the sum of the fractional uncertainties in B and x. Because u(B) = 0, this implies
u(q)/q = u(x)/x
Rule 3.8: Uncertainty in products and quotients.
“Rather than address the simple question of the partial derivatives of the mean function, you start rabbiting on about cancellations and relative uncertainties which have nothing top do with the case. “
IT HAS EVERYTHING TO DO WITH THE CASE!
When you are finding the measurement uncertainty of a product you MUST use relative uncertainties!
Take πHR^2. The partial with respect to R is 2πHR. Now, divide that by πHR^2 to make it a relative uncertainty and what do you get:
YOU GET 2/R. Multiply u(R) by the 2/R and what do you get?
You get 2u(R)/R.
The exponent of R becomes the weighting factor for the uncertainty of R. If R and H have equal relative uncertainties because you use the same instrument to measure both, R will dominate the uncertainty because of the weighting factor!
Your problem is that 1. you don’t understand when you must use relative uncertainty and, 2. you can’t believe I did that in my head and came up with the same factor for the uncertainty contribution from R that Possolo did! When I told you the exponent of R was a weighting factor for the uncertainty of R you couldn’t believe it and accused me of not being able to take a partial derivative!
This is the THIRD time I’ve explained this to you. And you STILL don’t understand the use of relative uncertainty and what it does. You don’t know basic algebra, you don’t know basic calculus, and you don’t know basic metrology. You just keep right on making a fool of yourself.
“No one is disputing that the average uncertainty is u(sum)/N.”
I’m disputing it. You are the one who keeps claiming it, and you are wrong. u(sum) is not the sum of the uncertainties, it’s the uncertainty of the sum.
“What is being disputed is that the average measurement uncertainty is the uncertainty of the average. You’ve been shown over and over that is not the case!”
This is just pathetic. You’ve been saying this for years, and you never acknowledge that I am not saying the average uncertainty is the uncertainty of the average. I’m specifically saying that assuming random uncertainty the uncertainty of the average is smaller than the average uncertainty. The only person I’ve seen claiming the average uncertainty is the uncertainty of the average is Pat Frank.
The fact you keep claiming I’m saying it suggests you are either trolling, or suffering from some cognitive decline.
“Lay 10 boards with a measurement uncertainty (u1 to u10) end-to-end and your uncertainty for the length will be the sum of the individual uncertainties. u1 + u2 + … + u10”
Only if you assume there is total dependence between all the measurement uncertainties.
“All you do when finding the average uncertainty is making u1 = u2 = … = u10 = (u1+u2+…+u10)/n.”
Yes. That would be the upper limit of measurement uncertainty of the average. But it’s not what you get if you assume some randomness in the uncertainties.
“In the real physical world NO ONE CARES about the average value or the average measurement uncertainty. ”
Simply astonishing. Your entire argument over the last 4 years has been about the average global value and what its uncertainty is. Why do you care about it so much if you think no one in the real world cares about it?
“Telling me that the average length of a load of 2″x4″ boards…”
Ye gods! We’re back to this are we? If Tim can’t think of a use for an average then no one can.
““That is, the fractional uncertainty in q = Bx (with B known exactly) is the same as that in x.”
Yes. That’s the point. The fractional uncertainty is the same; the absolute uncertainty changes. If you accept that the fractional uncertainty of the sum is the same as the fractional uncertainty of the average, then what have you been arguing about these past four years?
“Since this equation is a product you *must* use fractional uncertainty. ”
Are we finally getting to Tim’s misunderstanding? He thinks that once you know the fractional uncertainty, you cannot then translate that back into an absolute uncertainty. If only he remembered the part of Taylor where he explains that you have to keep changing between the two, but end up translating back to an absolute uncertainty.
“When you are finding the measurement uncertainty of a product you MUST use relative uncertainties!”
I’ve explained this to you before. I doubt explaining it again will have any more effect. But you are confusing two different things. The general rule, which uses partial differentials and absolute uncertainties. And the specific rule for multiplying and dividing which uses relative uncertainties and no partial derivatives. Using either the general or specific rule will give you the same result. The specific rules are derived from the general rule. But you cannot mix them. You do not put relative uncertainties into the general rule just because you know the result will simplify to relative uncertainties.
“Take πHR^2.”
Not again. Why not look at the average given that’s what we are talking about?
“The partial with respect to R is 2πHR. Now, divide that by πHR^2 to make it a relative uncertainty and what do you get:”
You get exactly what I keep telling and you now agree with. Why do you keep torturing us both like this?
“The exponent of R becomes the weighting factor for the uncertainty of R”
I suspect your problem is you never learnt to actually do the math. All you ever do is look for an example that works and then try to shoehorn it into places where it doesn’t apply. In this case the exponent for R leads you to a weighting value for the relative uncertainty. That only works when your function involves no adding.
But since we are agreed that you can use this for the volume of the cylinder, could you at least try again with the uncertainty of an average? You’ve now accepted that in the cylinder π is still a factor in the partial derivative. So what happens when you divide a value by n? And then what do you get when you apply that to the general rule?
The only one here not doing the math is the one that doesn’t understand that x^2 = x * x
No partial derivatives required to find the total relative uncertainty of x^2.
Now try that with x^(1/2).
It works just fine!
As usual, you can’t even do simple algebra. Why do you depend on me doing it for you? Why don’t you learn simple algebra?
if q = sqrt(x) = x^(1/2) then
u(q)/q = (1/2) u(x)/x
What is so fricken hard about this?
And how did you come to that conclusion using the repeated multiplication method?
I’ve still zero idea why you think this matters.
From Taylor’s special rule regarding powers as repeated multiplications:
Any guesses as to how he generalizes the rule?
Yes, by multiplying by (1/2), the exponent of the component value.
Did you bother to read the title of the rule section where it is derived for the general case?
“Uncertainty in Any Function of One Variable”
With this restriction what does dq/dx consist of? It consists of the exponent of “x” times x^(n-1).
You continue to get confused between calculating absolute uncertainty and relative uncertainty. When you have different components with different dimensions you *have* to use relative uncertainty to calculate total uncertainty. That can then be used to find the absolute value of the uncertainty of the total. It’s why Possolo came up with what he did for volume. [u(V)/V]^2 = (2u(R)/R)^2 + (u(H)/H)^2. R^2 and H *do* have different units which is why you have to calculate their relative uncertainties first and then add those. Once you have the total relative uncertainties then you can calculate the absolute uncertainty of the total, V.
My guess is that if you look at Taylor’s Section 3.11 you will not be able to state the implied assumption made throughout that section. You *never* understand the basics so I’m pretty sure you’ll not be able to figure out the unstated but implied assumption of that section!
Once again Tim fails to remember what he’s arguing about.
“Yes, by multiplying by (1/2), the exponent of the component value.
Did you bother to read the title of the rule section where it is derived for the general case?”
You claimed that because raising to a power is just repeated multiplication, you didn’t need to use calculus to work out the uncertainty, just the rules for multiplying dependent uncertainties.
My point is that you could do that, but calculus is more general as it can be applied to cases where repeated multiplication is impossible. As Taylor tries to tell you, that only works when you are raising to the power of a positive integer.
Your response is to point to the section
“Uncertainty in Any Function of One Variable”
Which is my point. That section shows you how to use the derivative of the function to work out the uncertainty for any function.
“With this restriction what does dq/dx consist of?”
It’s the derivative of the function q with respect to x.
“It consists of the exponent of “x” times x^(n-1).”
Yes, that’s how it works for a power.
“You continue to get confused between calculating absolute uncertainty and relative uncertainty.”
Keep telling yourself that if it makes you feel better. I stand by my own results.
“When you have different components with different dimensions you *have* to use relative uncertainty to calculate total uncertainty.”
No you do not. Maybe you are the one confused. You use the general equations, either for one variable or multiple variables, with absolute uncertainties. They don’t work if you use relative uncertainties. What you can do, in some cases, is simplify the equation to one involving relative uncertainties.
“It’s why Possolo came up with what he did for volume. [u(V)/V]^2 = (2u(R)/R)^2 + (u(H)/H)^2.”
Yes, that’s an example of when you can simplify the equation – when your function consists of nothing but multiplications and powers.
The assumptions of the general equation for approximating the combined uncertainties are, off the top of my head, that:
“He thinks that once you know the fractional uncertainty, you cannot then translate that back into an absolute uncertainty.”
Your lack of reading comprehension skills is showing again. What I’ve said is that you can’t know the absolute uncertainty until AFTER you’ve calculated the relative uncertainty. And the weighting factor for the component relative uncertainties does *NOT* come from the partial derivative of the functional relationship but from breaking down the components that have exponents greater than one into their own constituent components, e.g. x^2 ==> x * x. So the total uncertainty of x becomes 2u(x)/x.
The partial derivative in Eq 10 is nothing more than shorthand for multiplying the relative uncertainty by the total.
if q = xy
then you get [u(x)/x] as the relative uncertainty of x. When you multiply that by q = xy you get (xy)(u(x)/x) ==> yu(x). And y is the partial derivative of xy with respect to x.
Now apply this to q = yx^2 ==> y*x*x
The relative uncertainty becomes u(x)/x + u(x)/x + u(y)/y = (total relative uncertainty percentage). This has nothing to do with partial derivatives at all.
The partial derivative only appears *after* you’ve found the relative uncertainty. Where Eq 10 is very misleading is that you don’t need to multiply each component by its partial derivative. Just find the total relative uncertainty and multiply that percentage times the total!
u(q) = (yx^2)(total relative uncertainty percentage). No partial derivatives even need to show up!
“What I’ve said is that you can’t know the absolute uncertainty until AFTER you’ve calculated the relative uncertainty.”
And you are wrong about that. Using the general equation gives you the estimated absolute uncertainty without ever using relative uncertainties. You can then simplify them into relative uncertainties by dividing through.
“The partial derivative in Eq 10 is nothing more than shorthand for multiplying the relative uncertainty by the total.”
Delusional. How is calculating a partial derivative easier than just multiplying everything by the total? As always you are concentrating on one example where you can do what you say, and assuming this is the way to do it for everything.
“The partial derivative only appears *after* you’ve found the relative uncertainty.”
Tim just likes to do everything backwards. Here he seems to be using the specific rules for multiplication involving relative uncertainties in order to get the partial derivative. That of course works. It’s just not the purpose of the general equation, which is to plug in the partial derivatives in order to get the combined uncertainty.
“Where Eq 10 is very misleading is that you don’t need to multiply each component by its partial derivative.”
Now try it with a function that involves multiplication and addition.
“ Using the general equation gives you the estimated absolute uncertainty without ever using relative uncertainties”
You just won’t listen, will you?
Why do you think Possolo came up with his relative uncertainty equation for the barrel BEFORE he determined the absolute uncertainty?
[Image: possolo-uncertaintyofvolume – https://i.ibb.co/NmpR991/possolo-uncertaintyofvolume.png]
“Gauss’s formula [Possolo and Iyer, 2017, VII.A.2], which is used
in the Guide to the expression of uncertainty in measurement (gum)
[JCGM 100:2008], provides a practicable alternative that will produce
a particularly simple approximation to the standard deviation
of the output quantity because it is a product of powers of the input
quantities: V = πR^2H^1.”
The operative words here are “powers of the input quantities”.
Thus you get 2 * (u(R)/R) and 1 * (u(H)/H)
Possolo: “Note that π does not figure in this formula because it has no uncertainty, and that the “2” and the “1” that appear as multipliers on
the right-hand side are the exponents of R and H in the formula for
the volume.”
Even Bevington in his Eq 3.30 (Data Reduction and Error Analysis for the Physical Sciences) shows for
x = au^b
The relative uncertainty becomes
σ_x/x = b(σ_u/u) (3.30)
You can whine and gripe all you want but Taylor, Possolo, and Bevington do it this way. I’ve shown you two different derivations for this. And *still* you refuse to learn.
Willful ignorance is the worst kind of ignorance.
“Why do you think Possolo came up with his relative uncertainty equation for the barrel BEFORE he determined the absolute uncertainty?”
Because it’s simpler.
The rest of your rant just misses the point. The results are correct. But to understand how the general rule works you need to use it, rather than starting at the final simplification. Your problem is trying to apply this specific example to a different problem, that of the average.
“Because it’s simpler.”
No, it’s because it’s how the process HAS to work!
You are ignoring the example I gave you of the voltmeter with a 1% uncertainty. You calculate the percentage uncertainty FIRST!
You simply don’t know how to evaluate the general equation. It doesn’t work until you determine the uncertainty! Just where you think u(x) comes from is beyond comprehension.
One more time, the average uncertainty is *NOT* the measurement uncertainty of the average. I keep telling you to use the terms “measurement uncertainty of the average” and “standard deviation of the sample means” in order to be specific about what is being discussed. You refuse to do so in order to continue your use of the argumentative fallacy of Equivocation – changing the definition of “uncertainty of the mean” to be whatever you need it to be in the moment.
“No, it’s because it’s how the process HAS to work!”
Completely wrong. And you should know as you keep pretending you explained how it worked to me.
You can easily use the general equation of propagation, without the simplification to relative uncertainties.
Here’s the resulting equation.
u(V) = √[{πR²u(H)}² + {2πRHu(R)}²]
Using the values given in the example we have
R = 8.40m
H = 32.50m
u(R) = 0.03m
u(H) = 0.07m
So
u(V) = √[{π * 8.4² * 0.07}² + {2 * π * 8.4 * 32.5 * 0.03}²]
= √[{15.52}² + {51.46}²]
= 54m³ to 2sf
Exactly as the example shows.
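A quick numeric check of the above in Python, using only the values already quoted from the example:

import math

R, H = 8.40, 32.50               # m
uR, uH = 0.03, 0.07              # m

# General law of propagation: u(V)^2 = (pi*R^2*u(H))^2 + (2*pi*R*H*u(R))^2
term_H = math.pi * R**2 * uH
term_R = 2 * math.pi * R * H * uR
uV = math.sqrt(term_H**2 + term_R**2)

print(round(term_H, 2), round(term_R, 2), round(uV))   # 15.52  51.46  54 (m^3)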
“You simply don’t know how to evaluate the general equation.”
Yet, somehow it works.
Why do you NEVER provide quotes from the pertinent documents?
Here is what Dr. Possolo says.
Exactly what metrology document are you using that says π should be represented in the uncertainty equation?
You are just mathtubating with no backup at all. In the end, you only look foolish.
Here is a reference from Experimentation and Uncertainty Analysis for Engineers by Dr. Coleman and Dr. Steele.
Maybe you can see how “2” and “π” are declared to have no uncertainty and are not included in the uncertainty equation. You might also tell us where the sensitivity of “16” comes from!
“Why do you NEVER provide quotes from the pertinent documents?”
I feel sorry for you. You may well be the product of a failed education system. You obviously never learnt how to work your way through a problem rather than just crib the answer.
The equation I used comes directly from the “Law of Propagation of Uncertainty” (Equation 10 in the GUM). It’s a simple exercise in applying the correct values into equation 10.
The resulting equation allows you to calculate an approximation of the combined uncertainty for the water tank, giving exactly the same result as Possolo does from the simplified equation.
Moreover, I know they must give the same result, because one is just an algebraic transformation of the other. Divide my equation through by V and you get the one Possolo gives.
“Exactly what metrology document are you using that says π should be represented in the uncertainty equation?”
It’s the GUM again. Equation 10. The partial derivative with respect to H is πR², and the partial derivative with respect to R is 2πRH. Both derivatives must be included in the result to get the correct answer. Now when you divide through by V, the πs cancel, so they don’t appear explicitly in the relative form. But the result is still correct because π is required in the equation for V.
What Possolo is saying is that there is no term for the uncertainty of π, as it has no uncertainty.
“You are just mathtubating with no backup at all.”
Apart from the fact I get the correct value for the uncertainty and I can easily show how my equation leads to the one Possolo gives, and indeed equation 12 in the GUM.
“Here is a reference from Experimentation and Uncertainty Analysis for Engineers by Dr. Coleman and Dr. Steele”
You really don’t get that mathematics is not a choose your own adventure. Either it’s right or it’s not. If you think my interpretation of GUM (10) applied to the volume of a water tank is wrong, you need to point out where it is wrong, not just appeal to authority. If I’m right and your source disagrees, then your source is wrong.
Of course, your reference is not wrong, just as usual you don’t get the point. The equation it gives is exactly the same simplified equation from the GUM you can apply to any function of the form c·x1^p1·x2^p2·…·xn^pn. Note, your book says the equation is the special form of equation (3.18). I think I can guess what (3.18) is.
“Maybe you can see how “2” and “π” are declared to have no uncertainty and are not included in the uncertainty equation.”
As with all these examples they cancel on the left hand side when you divide through by the total (θ in this case).
“You might also tell us where the sensitivity of “16” comes from!”
You have R^4. This gives you a term of (4u(R)/R)^2. 4^2 = 16.
You are still avoiding the point of all this. The simplified equation you keep using can only be used when the function is of the given form. Consider a more complicated example, such as the Wheatstone bridge example in the Possolo book. There the equation is RU = RG * RF * (RE^-1 + RH^-1), and you cannot use the same simplification used for the water tank. Hence the result is given the standard way, using the partial derivatives for each term applied to the absolute uncertainties.
Cherry picking an equation without understanding how it is supposed to be used is fruitless.
You have been shown examples from textbooks written by PhD’s like Dr. Taylor, Dr. Possolo, Dr. Coleman, and Dr. Steele who are experts in uncertainty analysis. They ALL say that constants and counting numbers HAVE NO UNCERTAINTY! Why do you continue to attempt to shove your interpretation upon everyone.
Why don’t you write an essay detailing the mathematics behind your assertion that it is proper to include these into the uncertainty analysis. Your analysis should show why the math these professors use to justify their teachings is incorrect and should be dismissed.
I am sure everyone here is tired of seeing me continue to show actual quotes and math from metrology textbooks while you drone on making your own interpretations of how it should be done. Don’t expect any more detailed refutations of your own interpretations and assertions. All I am going to show is what you should read.
“Cherry picking an equation without understanding how it is supposed to be used is fruitless.”
Translation, Gorman doesn’t like the consequences of using the general law for propagation when applied to an average, so claims there are reasons why you can’t use it, without ever explaining what those reasons are.
“They ALL say that constants and counting numbers HAVE NO UNCERTAINTY!”
No they don’t. What they say is that exact values that have no uncertainty have no uncertainty. Why you think that is surprising or why you think I would disagree is a mystery.
Your problem is in ignoring everything they say about scaling by an exact number, and instead pretend that what they mean is you can ignore any exact constant.
“Why don’t you write an essay detailing the mathematics…”
You never read or understand my lengthy comments – why would you understand an essay?
“…behind your assertion that it is proper to include these into the uncertainty analysis”
If you don’t understand what the equation is telling you, what’s the point of explaining it to you in more detail?
“Your analysis should show why the math these professors use to justify their teachings is incorrect and should be dismissed.”
Straw man. I have never said that any of those professors are incorrect. What I keep saying is your understanding of what they are saying is incorrect.
“…your assertion that it is proper to include these into the uncertainty analysis.”
Take TN1900 Example 8
Note the constant term 2 in the equation.
See how the constant term 2, with no uncertainty, becomes 2² = 4, in the combined uncertainty equation.
It’s more interesting than you might think.
There are 2 ways of stating the molecular mass:
M(CO2) = M(C) + 2·M(O)
or
M(CO2) = M(C) + M(O) + M(O)
The first gives u(M)² = u(C)² + 2²·u(O)² = u(C)² + 4·u(O)²
The second gives u(M)² = u(C)² + u(O)² + u(O)² = u(C)² + 2·u(O)²
The second is correct when adding in quadrature, but the first adds the oxygen uncertainty directly because the uncertainty of oxygen is the same for both oxygen atoms.
A similar approach applies to ethane (C2H6), ethylene (C2H4) and acetylene (C2H2).
Yes, that’s why it’s important to check the independence of the measurements. If you treat the oxygen as two separate measurements, you would use the second method, and get the wrong result.
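A small Monte Carlo sketch in Python makes the dependence point concrete. The standard uncertainties below are illustrative only (they are not the published atomic-mass uncertainties); the point is that reusing the same oxygen value gives the factor of 2 (hence 2² = 4), while two independent draws give only √2:

import random, statistics

uC, uO = 0.0008, 0.0004          # illustrative standard uncertainties only
N = 200_000

# Same oxygen deviation used twice (fully correlated): M = C + 2*O
corr = [random.gauss(0, uC) + 2 * random.gauss(0, uO) for _ in range(N)]

# Two independent oxygen deviations: M = C + O1 + O2
indep = [random.gauss(0, uC) + random.gauss(0, uO) + random.gauss(0, uO)
         for _ in range(N)]

print(statistics.stdev(corr))    # ~ sqrt(uC^2 + 4*uO^2) = 0.00113
print(statistics.stdev(indep))   # ~ sqrt(uC^2 + 2*uO^2) = 0.00098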
It’s more the independence of the source of uncertainty. That’s why I’ve been banging on about the recorded resolution(s) providing a floor for uncertainty.
Dummy, you just can’t refrain from cherry picking can you?
The “2” is not a constant, it is a shorthand notation that denotes the number of atoms! A chemical reaction formula IS NOT a functional relationship.
Please note,
This is not the same as a partial differential sensitivity factor that you like to pretend you know how to use. You want to be an expert in metrology, you tell us the probability reason for using a factor of 4.
I know you are trying very hard to show that constants have an effect when you take partial differentials. If you want to make an impression, tell us where the CONSTANT of “1000” goes in the uncertainty equation in Example 9.
The functional relationship is:
c_D = 1000mP/V
The corresponding uncertainty equation is:
(u(c_D)/c_D)² ≈ (u(m)/m)² + (u(P)/P)² + (u(V)/V)²
Whoops, where did the 1000 go. No uncertainty maybe?
“The “2” is not a constant,”
How much uncertainty do you think the figure 2 has. How many CO2 molecules are there with more or less than 2 O? Even if there are any it doesn’t matter because the model NIST are using assumes there are only ever 2.
“This is not the same as a partial differential sensitivity factor that you like to pretend you know how to use.”
Your ability to split hairs, rather than accept the fact that however you do it, the constant will have an effect on the uncertainty, is quite remarkable.
Probability theory, the general law, and any other result derived from them should give the same answer, because they all derive from the same concept.
“You want to be an expert in metrology, you tell us the probability reason for using a factor of 4.”
A couple of rules. When adding independent random variables the variances add. When multiplying a random variable by a constant C its variance will be multiplied by C².
Combining these two rules if X and Y are random variables, and C is a constant, then
Var(X + CY) = Var(X) + Var(CY) = Var(X) + C²Var(Y)
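Both rules are easy to check numerically. A minimal Python sketch with arbitrary variances, independent X and Y, and C = 3:

import random, statistics

C = 3.0
N = 200_000
X = [random.gauss(0, 2.0) for _ in range(N)]   # Var(X) = 4
Y = [random.gauss(0, 1.5) for _ in range(N)]   # Var(Y) = 2.25

Z = [x + C * y for x, y in zip(X, Y)]
print(statistics.variance(Z))    # ~ Var(X) + C^2 * Var(Y) = 4 + 9 * 2.25 = 24.25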
“I know you are trying very hard to show that constants have an effect when you take partial differentials”
I don’t have to try hard at all. It’s extremely easy. It’s how differentials work. It’s one of the first things you learn in elementary calculus. If the function is Cx, then the derivative with respect to x is C. If you want an elementary primer try this
https://www.mathsisfun.com/calculus/derivatives-rules.html
Then there are many other reasons why the law of propagation requires that scaling a measurement will scale the uncertainty by the same factor.
You can think of the law as looking at the upper and lower limit of an uncertainty range and applying the same function to both. If your measurement is 10 with an uncertainty interval of ±1, then the lower limit is 9 and the upper limit is 11, with a range of 2. Multiply the measurement by 2, and you get 9 * 2 = 18, and 11 * 2 = 22. The range is 4. The uncertainty is half the range. Do I need to explain that this result generalizes to multiplying by any constant?
“If you want to make an impression, tell us where the CONSTANT of “1000” goes in the uncertainty equation in Example 9.”
I’ve gone through this so many times – do you really think you will ever understand it? You have there a function
c_D = 1000mP/V
which as with all the ones you bring up is of the form cx1^p1x2^p2…xn^pn. And the argument will be the same as with all of them. You can use the general equation as is, and can then divide through by c_D² to simplify the equation into one involving relative uncertainties, or you can just use GUM (12), which is just jumping to that final result.
As with all of these examples, what happens to the constant is that when you convert to relative uncertainties it disappears from the right hand side of the equation, but it reappears on the left hand side, in that the relative uncertainty depends on the result (c_D in this case), which has the 1000 in it. You cannot just get rid of the 1000 because that would unbalance the equation.
∂f/∂m = 1000P/V
∂f/∂P = 1000m/V
∂f/∂V = -1000mP/V²
u(c_D)² = (1000P/V)²u(m)² + (1000m/V)²u(P)² + (-1000mP/V²)²u(V)²
Lots of 1000s there. You could simplify it a bit by factoring out the 1000², but much simpler to divide through by c_D² = (1000mP/V)²
u(c_D)²/c_D² = [(1000P/V)/(1000mP/V)]²u(m)² + [(1000m/V)/(1000mP/V)]²u(P)² + [(-1000mP/V²)/(1000mP/V)]²u(V)²
= (u(m) / m)² + (u(P)/P)² + (u(V) / V)²
“Whoops, where did the 1000 go.”
It’s behind you!
u(c_D)²/(1000mP/V)² = (u(m) / m)² + (u(P)/P)² + (u(V) / V)²
So,
u(c_D)² = (1000mP/V)²[(u(m) / m)² + (u(P)/P)² + (u(V) / V)²]
Whoops, where did that 1000 come from?
You do realize that your explanation implicitly includes the fact that the constant, “1000”, HAS NO UNCERTAINTY don’t you?
Why haven’t you found a metrology text that shows your math trick for eliminating constant values?
Now let’s look at your math.
You might want to consider revising these to match what experts have decided is the best way to determine uncertainty.
These are products and quotients of terms with different dimensions. You must use relative uncertainty (percents, i.e., unitless) values to obtain a correct uncertainty value. Example 9 in NIST TN 1900 says
Let’s look at GUM 5.1.6
Think about why Dr. Possolo’s example of uncertainty for a cylinder is derived in this fashion.
Why do you continue to cherry pick stuff and give your own interpretations of the involved mathematics?
Maybe, since you are convinced that you are correct, it would be appropriate for you to contact NIST and BIPM to tell them that their documents are incorrect and show them how your derivations are better than theirs.
Or, maybe, you could spend the time learning about metrology as several of us have. It will take you a goodly amount of time, several equivalent semester hours at least.
You might visit this site for some training: 3.2. Mean, standard deviation and standard uncertainty – Estimation of measurement uncertainty in chemical analysis.
Here is another: the NIST Engineering Statistical Handbook (NIST/SEMATECH e-Handbook of Statistical Methods).
Another lengthy response that makes no attempt to understand what I’m saying.
“You do realize that your explanation implicitly includes the fact that the constant, “1000”, HAS NO UNCERTAINTY don’t you?”
The number 1000 has no uncertainty. Why do you keep pretending I’m saying anything different. You do realize the 1000 is only there to convert the units. V is given in mL, but the result is mg/L, hence the result is multiplied by 1000. It really shouldn’t be surprising that when you do that the uncertainty also has to be multiplied by 1000.
“Why haven’t you found a metrology text that shows your math trick for eliminating constant values?”
You’re the one who keeps claiming the constant is eliminated. I’m saying it affects the uncertainty.
“You might want to consider revising these to match what experts have decided is the best way to determine uncertainty.”
If you think any of the answers are wrong, you need to explain why. I’m pretty sure any expert would agree with them. You still seem to be under the delusion that experts make up the result of a derivative in order to get the best final result. That’s not how maths works.
“You must use relative uncertainty (percents, i.e., unitless) values to obtain a correct uncertainty value.”
There’s nothing uncertain about my equation. I’m sure you will find the dimensions are all correct. There is absolutely no “must” about using relative uncertainties. It’s just a simple short-cut.
Let’s try it.
u(c_D)² = (1000P/V)²u(m)² + (1000m/V)²u(P)² + (-1000mP/V²)²u(V)²
c_D has dimensions of mass / volume.
1000P/V * u(m) has units of mass / volume
1000m/V * u(P) has units of mass / volume
-1000mP/V² * u(V) has units of mass * volume / volume² = mass / volume
m = 100.28 mg
u(m) = 0.05 mg
P = 0.9999
u(P) = 0.000058
V = 100.0 mL
u(V) = 0.07 mL
u(c_D)² = (1000P/V)²u(m)² + (1000m/V)²u(P)² + (-1000mP/V²)²u(V)²
= (999.9 / 100.0)²0.05² + (100280 / 100.0)²0.000058² + (100269.972 / 10000.0)²0.07²
= 9.999²0.05² + 1002.8²0.000058² + 10.027²0.07²
≈ 0.5000² + 0.0582² + 0.7019²
≈ 0.25 + 0.003 + 0.49
≈ 0.74
Hence,
u(c_D) = √0.74 = 0.86 mg/L
In keeping with the NIST answer of ≈ 0.9 mg/L
Once again, the equation you insist is wrong gives the correct answer.
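For anyone following along, here is the same arithmetic done both ways in a short Python sketch, using only the values listed above. The general (absolute) form and the relative form give the same ≈ 0.86 mg/L:

import math

m, um = 100.28, 0.05             # mg
P, uP = 0.9999, 0.000058         # mass fraction (dimensionless)
V, uV = 100.0, 0.07              # mL

cD = 1000 * m * P / V            # mg/L

# General law of propagation with the partial derivatives written out
u_abs = math.sqrt((1000 * P / V * um)**2
                  + (1000 * m / V * uP)**2
                  + (1000 * m * P / V**2 * uV)**2)

# Relative-uncertainty form: divide through by c_D, then multiply back
u_rel = cD * math.sqrt((um / m)**2 + (uP / P)**2 + (uV / V)**2)

print(round(u_abs, 2), round(u_rel, 2))   # 0.86  0.86 (mg/L)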
“Let’s look at GUM 5.1.6”
Yes lets.
Two important points:
“Equation (10), can be expressed as …” You keep insisting that somehow it’s wrong to use equation 10. But the equation you want to use is just the same equation expressed in a different form.
“if Y is of the form…”. This only works for functions of that form. Something you keep ignoring. It works for this function as it consists entirely of multiplication and division. You cannot use it for the function we are really interested in, that of an average.
“Think about why Dr. Possolo’s example of uncertainty for a cylinder is derived in this fashion.”
Because it’s simpler to do it that way when the function, volume of a cylinder, consists of nothing but multiplication and raising to a power.
“Why do you continue to cherry pick stuff and give your own interpretations of the involved mathematics?”
Because it’s how I was taught. Not just to blindly accept equations from authority, but to try to solve them, to understand why they work, and to check that they are correct.
“Maybe, since you are convinced that you are correct, it would be appropriate for you to contact NIST and BIPM to tell them that their documents are incorrect and show them how your derivations are better than theirs.”
Why do you think they are incorrect? They agree with everything I’m saying.
“Or, maybe, you could spend the time learning about metrology as several of us have. It will take you a goodly amount of time, several equivalent semester hours at least.”
As far as these equations are concerned, it really didn’t take that long to figure them out. Just trying to explain them to you lot has been the best education.
Read the GUM 5.1.6 to see how to treat powers. It will explain many of the examples you are cherry picking.
As to constants, I have given you multiple references from textbooks and NIST examples that explicitly say right in the text that constants have no uncertainty and that they are not included in the uncertainty equation.
Your argument is not with me, but with the authors of the texts that say this. Why don’t you show how these authors have a wrong interpretation of metrology.
I have searched multiple resources and I can not find one instance that supports your interpretation. Maybe you can show us one.
“Read the GUM 5.1.6 to see how to treat powers. It will explain many of the examples you are cherry picking.”
You really ought to read my comments. I explain how to derive the equation in 5.1.6. And yes, cherry-picking is quite a useful skill in understanding how this works.
“As to constants, I have given you multiple references from textbooks and NIST examples that explicitly say right in the text that constants have no uncertainty and that they are not included in the uncertainty equation.”
Again, constants per se can have uncertainty. If by constant you mean an exact value that has no uncertainty, why do you keep ignoring the fact that I agree. I’ve no idea why anyone would think an exact value with no uncertainty, had uncertainty.
“Your argument is not with me, but with the authors of the texts that say this.”
I’m not arguing with them. That’s just you attacking your straw men.
“Why don’t you show how these authors have a wrong interpretation of metrology.”
Could you give an actual example of something they’ve actually said that you think I disagree with?
“I have searched multiple resources and I can not find one instance that supports your interpretation. Maybe you can show us one.”
Start with GUM equation 10.
If any constant had uncertainty, π would be the one since it is an irrational number. Do you know why it is considered to not have uncertainty?
Yes, you are arguing with them. Do you think textbooks or the GUM would not cover your derivation if it was appropriate? Show a textbook or other online course that shows your derivation!
An example of a constant with uncertainty is given in TN1900 E1, the Gravitational constant G.
“Yes, you are arguing with them.”
Stop lying. Give me one example where I have said Eq 10 is wrong. I can’t help it if you don’t understand calculus. It doesn’t mean what I say is wrong.
“Do you think textbooks or the GUM would not cover your derivation if it was appropriate?”
They tell you what the equation is. They probably expect you to be able to do the maths. You keep wanting them to explain everything to you with examples. If my derivations were wrong you would not be able to get the correct result for equation 12. You need to learn to figure out why a given equation is correct – not just take it on trust.
“Show a textbook or other online course that shows your derivation!”
You’ve had four years, and so far you have not shown a single textbook that claims uncertainty increases when you increase sample size.
Meanwhile, try Taylor Exercise 3.47
The function is
a = g (M – m) / (M + m)
where g, the acceleration due to gravity, is assumed to have no uncertainty.
The answer is given at the back of the book for u(a), and guess what – it involves g.
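I don’t have the worked answer to hand, but the general-law calculation is straightforward. A Python sketch with illustrative masses and uncertainties (the numbers here are mine, not Taylor’s) shows g appearing in u(a) as a scale factor even though g itself carries no uncertainty:

import math

g = 9.81                         # m/s^2, treated as exact (no uncertainty)
M, uM = 0.100, 0.001             # kg, illustrative values only
m, um = 0.050, 0.001             # kg, illustrative values only

a = g * (M - m) / (M + m)

# Partial derivatives of a = g(M - m)/(M + m)
dadM = g * 2 * m / (M + m)**2
dadm = -g * 2 * M / (M + m)**2

ua = math.sqrt((dadM * uM)**2 + (dadm * um)**2)
print(round(a, 2), round(ua, 3))   # 3.27 m/s^2 with u(a) ~ 0.097 m/s^2; g scales both terms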
“Show a textbook or other online course that shows your derivation!”
Another example. Bevington 3.3, Weighted Sums and Differences.
If x is the weighted sum of u and v
x = au + bv
The partial derivatives are simply the constants …
And we obtain
σ_x² = a²σ_u² + b²σ_v² + 2abσ_uv²
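That weighted-sum rule is also easy to check numerically. A Python sketch with arbitrary a and b, and u and v built from a shared component so the covariance term actually matters:

import random

a, b = 2.0, -3.0
N = 200_000

def var(z):
    mz = sum(z) / len(z)
    return sum((zi - mz)**2 for zi in z) / (len(z) - 1)

def cov(p, q):
    mp, mq = sum(p) / len(p), sum(q) / len(q)
    return sum((pi - mp) * (qi - mq) for pi, qi in zip(p, q)) / (len(p) - 1)

# Correlated u and v: both share a common component
shared = [random.gauss(0, 1.0) for _ in range(N)]
u = [s + random.gauss(0, 0.5) for s in shared]   # Var(u) ~ 1.25
v = [s + random.gauss(0, 0.5) for s in shared]   # Var(v) ~ 1.25, Cov(u, v) ~ 1.0

x = [a * ui + b * vi for ui, vi in zip(u, v)]

expected = a**2 * var(u) + b**2 * var(v) + 2 * a * b * cov(u, v)
print(var(x), expected)          # both ~ 4*1.25 + 9*1.25 - 12*1.0 = 4.25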
“I’m specifically saying that assuming random uncertainty the uncertainty of the average is smaller than the average uncertainty. “
“But it’s not what you get if you assume some randomness in the uncertainties.”
Random uncertainty is not a sufficient criterion. The uncertainty also has to be Gaussian and it has to be developed from measurements of the same thing multiple times by the same instrument under the same conditions with zero systematic uncertainty. Assumptions which you *NEVER* specify.
For instance, if a person is reading an analog voltmeter ten times the readings may be random. But if that person turns his head even a minute fraction over the 10 readings the parallax will change and the “same conditions” will be violated. Therefore the readings may *NOT* turn out to be Gaussian.
If the overhead heating duct comes on while reading a LIG thermometer 10 times the “same conditions” requirement may be violated.
Now, apply this to the GAT. Every single one of the data points violates *all* of the requirements. Single readings. Different instruments. Different conditions. Systematic bias.
You keep saying over and over that you don’t have the meme “all measurement uncertainty is random, Gaussian, and cancels” stuck in your head but it just comes shining through in everything you post! This post is a prime example!
“If Tim can’t think of a use for an average then no one can.”
I’ve given you a use for the average. As usual, your reading comprehension skills are atrocious. The problem is that use doesn’t apply to the real world, it applies to math world. And even then it has measurement uncertainty that doesn’t cancel unless you apply your common meme of “all measurement uncertainty is random, Gaussian, and cancels”.
He (they, even) won’t understand: with no experience doing anything in the real world, he can’t understand.
Numbers Is Numbers!
How many times has this been pointed out, to no avail? Must be scores now.
All these “experts” using B as the leading character for their handles have the same mental blocks. Strange, coincidence?
“All these “experts” using B as the leading character for their handles have the same mental blocks. Strange, coincidence?”
ROFL!!
They probably all believe in numerology!
“How many times has this been pointed out, to no avail? Must be scores now”
And yet they believe linear regression of the stated values is somehow valuable in projecting the future.
“This is the THIRD time I’ve explained this to you. And you STILL don’t understand the use of relative uncertainty and what it does. You don’t know basic algebra, you don’t know basic calculus, and you don’t know basic metrology. You just keep right on making a fool of yourself.”
Tim still insisting he’s the one who explained the volume uncertainty to me, as if it matters.
Here’s Tim explaining the “Possolo” method in 2022.
https://wattsupwiththat.com/2022/11/03/the-new-pause-lengthens-to-8-years-1-month/#comment-3636937
https://wattsupwiththat.com/2022/11/03/the-new-pause-lengthens-to-8-years-1-month/#comment-3637635
I eventually spell it out
This is what Tim now says is correct, and claims he had to explain it to me. But at the time, his response was
You *still* don’t understand that x^2 = x * x
The relative uncertainty of that is u(x)/x + u(x)/x ==> 2u(x)/x
NO PARTIAL DERIVATIVE NEEDED!
And you *must* do this before finding the absolute uncertainty! Unless you have a crystal ball like you must have!
“You *still* don’t understand that x^2 = x * x”
More patronizing lies. I’m sure there must be some deep psychological reason he has to denigrate everyone he talks to.
“The relative uncertainty of that is u(x)/x + u(x)/x ==> 2u(x)/x”
Except it’s not. The relative uncertainty would be √[(u(x) / x)² + (u(x) / x)²]. Which is the wrong answer. In order to use the x times x trick you have to understand that u(x) and u(x) are not independent uncertainties. Or you can just use the calculus.
“And you *must* do this before finding the absolute uncertainty! Unless you have a crystal ball like you must have!”
In Tim’s world, being able to apply an equation to get the correct result is black magic.
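The dependence point is easy to see with numbers. A short Python sketch (arbitrary x and u(x)): the derivative route gives a relative uncertainty of 2u(x)/x, a Monte Carlo of squaring the same draw agrees, and treating x*x as two independent factors would only give sqrt(2)*u(x)/x:

import math, random, statistics

x, ux = 4.0, 0.02
N = 200_000

# Derivative route: d(x^2)/dx = 2x, so u(x^2) = 2x*u(x), i.e. relative uncertainty 2u(x)/x
rel_derivative = 2 * ux / x                          # 0.0100

# Monte Carlo: the same draw is squared, so the two "factors" are fully dependent
samples = [random.gauss(x, ux)**2 for _ in range(N)]
rel_mc = statistics.stdev(samples) / x**2            # ~ 0.0100

# Treating x*x as two independent factors (the quadrature mistake)
rel_wrong = math.sqrt(2) * ux / x                    # ~ 0.0071

print(rel_derivative, rel_mc, rel_wrong)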
“It’s Christmas Eve, and I’m not wasting my time going down these”
Oops.
In which karlo tries to score a cheap point by missing off the rest of my sentence.
“…endless stream of consciousness comments all day.”
You are the one that is wrong. “2a=a•a”, 3a=a•a•a”, etc. I’m not sure what algebra you use, but that is what I was taught!
Read this from the GUM.
See especially Note 2.
You are basically nitpicking the fact that Tim did not show the entire calculation but instead concentrated on the pertinent part! What a joke.
I recall putting Note 2 in front of his nose previously; it had no effect — averaging still magically removes “error” in his world.
Do you think an average is of the form given in 5.1.6? That is a string of multiplications and divisions?
Significant digit rules are not your friend.
“2a=a•a”, 3a=a•a•a”
Only if you are using “•” to mean “+”, which is not a convention I’m aware of.
” I’m not sure what algebra you use, but that is what I was taught!”
Doesn’t surprise me. I guess you dropped out before learning why multiplication is not just repeated addition.
All of which is still beside the point, which is you cannot treat the uncertainty of a value raised to a power as if it was multiplication of independent measurements.
“Read this from the GUM.”
Yes, that’s the simplification you can get when your function is of that form.
“You are basically nitpicking the fact that Tim did not show the entire calculation…”
My “nitpick” is that Tim got the initial equation wrong, insulted me when I corrected him, and has then spent the last couple of years claiming he was the one who explained it to me.
My bigger nitpick is that he, and you, continue to draw an incorrect conclusion from it, in order to distract from the correct way of calculating the uncertainty of an average.
I’ve given you the derivation from three experts and quoted them exactly.
I got nothing wrong. You just don’t have a grasp of the subject at all!
unfreakingbelievable.
Taylor’s rule 3.18:
if q = x * w and you assume some cancellation of random errors then
u(q)/q = sqrt[ (u(x)/x)^2 + (u(w)/w)^2 ]
if x and w are the same then you get
u(q)/q = sqrt[ 2 * (u(x)/x)^2 ]
==> 1.4 * u(x)/x
This is the RSS method of adding uncertainties. If you do a direct addition then it *is* 2 * u(x)/x.
One is “best” case and the other the “worst” case.
Again, STOP CHERRY PICKING
Study the literature and learn. In neither case is the partial derivative required!
“One is “best” case and the other the “worst” case.”
There’s no best case here, that’s the point. You cannot just plug a repeated multiplication into the uncertainty equation - you have to understand that the two uncertainties are dependent.
I really don’t understand what this distraction is meant to accomplish. There’s a general rule that allows you to calculate the uncertainty just by taking the derivative of the function. Yet you make a huge case of demonstrating you could do it in a slightly more complicated way for a few specific functions. What point does that prove? The answer’s the same. Nobody doubts that x² = x*x. It just seems to be an excuse for you to keep lying about me not understanding the algebra.
“ In this case each measurement has an uncertainty of 0.5°C. the average uncertainty can only be 0.5°C. How you think 5/100 = 0.05°C is an average is beyond the understanding of mankind.”
Judas Priest! You can’t even calculate an average!
The average is the sum of the elements divided by the number of elements. The average uncertainty is *NOT* 5/100. It’s (.5*100)/100 = 0.5!
4.2.3 describes how to calculate how closely you have located the mean! It’s the STANDARD ERROR which is better described as the standard deviation of the sample means!
Did you miss the part where it says: “how well q_bar estimates the expectation μ_q of q,”
It’s a measure of how well the uncertainty is being estimated. It is *NOT* the measurement uncertainty itself – that is u_q.
You just keep on cherry-picking without actually taking the time to read and understand what is being said. You’ve done the same thing here
The standard deviation of the sample means tells you how closely you have located the mean. It does *NOT* tell you the accuracy of that mean! 4.2.3 says nothing different. “how well q_bar estimates” are the operative words which you totally ignored in your zeal to cherry-pick.
As bellcurveman writes tome after tome of unreadable noise…
“First though – can you just say whether you agree or not with the comment “As you add multiple things the uncertainty of the sum grows, and therefore so does the uncertainty of the average.””
I just hate having to be your teacher on basic statistics. You want the math? Here’s the math:
The variance will decrease when the next element is within sqrt(1 + 1/n) standard deviations of the mean. The variance will increase when the next element is more than sqrt(1 + 1/n) standard deviations from the mean.
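That threshold is easy to verify. A small Python sketch with an arbitrary sample, appending a new element just inside, at, and just outside sqrt(1 + 1/n) standard deviations from the mean (using the population variance):

import math, statistics

data = [9.8, 10.1, 10.4, 9.7, 10.0, 10.2, 9.9, 10.3]   # arbitrary sample
n = len(data)
m = statistics.mean(data)
s = statistics.pstdev(data)
threshold = math.sqrt(1 + 1 / n)

for k in (0.9, 1.0, 1.1):        # just inside, exactly at, just outside the threshold
    new = data + [m + k * threshold * s]
    change = statistics.pvariance(new) - statistics.pvariance(data)
    print(k, round(change, 6))   # negative, ~zero, positive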
Now, which part of the global temperature field dominates the global average? It *should* be the part covering 70% of the globe – water. What is the standard deviation of water temperature? It’s pretty small. Sea Surface Temps pretty much have a temp range of 4C – 8C. Now land, on the other hand, can have daytime ranges of 30C-40C and nighttime ranges of 10C to 25C.
Since water will dominate the overall average and variance, adding additional water temp elements won’t change the variance much. But adding land elements will change the variance because they add temps far outside the standard deviation of the global temps.
Conclusion: Adding land elements *will* increase the variance of the global average and will therefore increase the uncertainty of the global average temp.
You can argue with this math all you want but you’ll never refute it. It’s one of my main criticisms of climate science. Ignoring variances of the data is just a fraud in my opinion. Adding hemispheric temps together with no weighting for different variances is ignorant in the extreme. And climate science should realize this.
You keep referring to traditional statistics. Why don’t you quote regular metrology texts? They would serve you better in determining what uncertainty is.
In fact, why do you not literally quote any text books or online sources of reputable learning?
“Why don’t you quote regular metrology texts?”
I do. You just don’t understand them.
You are so full of shite it shows.
IIRC, the story was about 100 2×4’s each of which had a measurement uncertainty of 0.5 ft. A sum of the lengths is the same as laying them end to end. Tell everyone what Dr. Taylor says the max uncertainty is in this scenario. I’ll give you a hint.
How about another, better equation from later.
“IIRC, the story was about 100 2×4’s each of which had a measurement uncertainty of 0.5 ft.”
No, the one that started this nonsense for me was about 100 thermometers each with a measurement uncertainty of 0.5°C. I’ve given the quote and link elsewhere. (By the way, I’m not used to these imperial measurements, but an uncertainty of 6 inches seems quite big.)
“A sum of the lengths is the same as laying them end to end”
And the point I keep making is that the sum is not the average.
“Tell everyone what Dr. Taylor says the max uncertainty is in this scenario.”
The uncertainty of the sum cannot be greater than the sum of the uncertainties. Hence in your case it would be 50′. Assuming the uncertainties are random and independent, you can add in quadrature and get 5′.
And again, you duck out of what happens when you take the average rather than the sum. And this can easily be done using Taylor’s equation for multiplying a quantity by an exact value. The exact value in this case being 1/100. Then you get the uncertainty of the average, assuming random independent uncertainties, of 5 / 100 = 0.05′, and it cannot be greater than 50 / 100 = 0.5′. That last one of course being the average uncertainty. Not surprising, as it should be obvious that the uncertainty of the average cannot be greater than the average uncertainty.
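Putting numbers on that paragraph (a trivial Python sketch using only the figures already stated: 100 boards, 0.5 ft uncertainty each):

import math

n, u = 100, 0.5

u_sum_worst = n * u                  # straight addition:          50 ft
u_sum_quad = math.sqrt(n) * u        # independent, in quadrature:  5 ft

# The average is the sum times the exact constant 1/n, so its uncertainty scales by 1/n
u_avg_worst = u_sum_worst / n        # 0.5 ft (equals the average uncertainty)
u_avg_quad = u_sum_quad / n          # 0.05 ft

print(u_sum_worst, u_sum_quad, u_avg_worst, u_avg_quad)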
+1000!
“…always add…” — not the result he is looking for.
‘The “experts” change there story every time – but it always comes back to the idea that the measurement uncertainty of the average is the same as the uncertainty of the sum….. (et seq)’
I am finding the exchanges in this fracas very hard to follow and keep track of across multiple blogs and comment-threads and four years of time, but I thank you for stating what you see as the perceived experts’ basic proposition. However, you haven’t stated your reason/s for objecting to it here and, as I’m sure you will appreciate, I don’t have time to sift through four years’ worth of exchanges to find out what it/they might be and what the other side/s might really be trying to say, so I can’t comment on it or give an opinion about who’s right or who’s wrong in this long-running kerfuffle.
There are many places in the world where daily temperatures can vary by way more than 1C..
and yearly temperatures vary by several degrees or much more.
And places where temperature can vary by quite a few degrees over a fairly short distance.
Yes, we would expect a rise in SST during and after a large EL Nino event.. that is what they do.
But then they drop down afterwards, but because that warmth gets spread out, maybe not back down to where they started.
“But then they drop down afterwards, but because that warmth gets spread out, maybe not back down to where they started.”
You should patent that discovery. You could keep temperatures warmer in the winter by spreading out the summer warmth.
Not my discovery,
Everyone knows about water currents in the oceans.
Sorry you choose to remain ignorant.
And DENIAL of the step change associated with the 1998 and 2015 El Nino..
Just deliberate ignorance.
Still waiting for your evidence, either physical or statistical, that all the warming of the last 50 years has been caused by El Niños.
You might start by explaining where all the heat from the 1998 El Niño went in 1999 and 2000, and how it then caused a step change 3 years later.
So you are in data DENIAL, ok !
Blinders off little child. !
So you can’t explain it. Just as I thought.
Still calling me a child, and posting another half dozen insulting comments is a good way of demonstrating how confident you are in your beliefs.
Oh dear, still in DENIAL that the zero trend period from 2001-2015 is a step of about 0.3C higher than the zero trend period from 1980-1998.
Then provides absolutely no evidence that humans caused the step change.
Makes up an imaginary CO2 warming even though he can produce nothing to back it up.
Sadly pathetic.
“Still calling me a child,”
Do you want to put a whine with that whinge ??
Maybe if you stopped acting like one.
There is a reason he earned the moniker bellcurvewhinerman.
Denial of the step change that is right in front of your eyes… bizarre !!
I would have to go more into the workings of El Nino events… but for someone of your limited intelligence, it is not worth the effort.
And you might like to explain how CO2 caused a step then nothing for nearly 14-15 years.
Or you could run away from posting evidence of human causation, like you always do.
And still averaging to yearly data when you know El Ninos affect different parts of the year each time… sad. !!
Do you DENY that the zero trend period from 2001 to start of 2015 was about 0.3C warmer than the zero trend period from 1980 to start of 1997?
Where is the human causation for this. !
(Note: there was a small El Nino in 1987 which caused a very small step
This is clearly indicated by Bob Tisdale when he looks at connected ocean temperature, which also shows the delayed movement of energy from the main El Nino region. )
“You could keep temperatures warmer in the winter by spreading out the summer warmth.”
Because you seem totally unaware, of anything… the planet has been doing this for millions of years.
Wow, some red thumber doesn’t know how the planet’s oceans operates.
Bizarre, but not unexpected… very dumb person. !
Vino s is a crackpot who should be ignored.
His data is mined in a lame attempt to obscure the +0.13 degrees C. per decade long term UAH warming trend for oceans
His claim is not supported by UAH anomalies for December 2015 versus November 2024
It took me 30 seconds of reading this claptrap to find his data mining bias leading to an irrelevant conclusion. I have seen conservatives start data-mining trends at El Nino heat peaks, and falsely claim the planet is cooing, too many times.
December 2015 was approaching the February 2016 peak of a very strong El Nino. Vinos data mines two short periods: An El Nino month in 2015 and December 2024? That’s data mining 101.
Vinos compared an El Nino month with an ENSO neutral period. That biased comparison is junk science.
The UAH data show a larger anomaly for November 2024 than for December 2015. (December 2024 is not yet available).
Far more important: The Version 6.1 global area-averaged UAH temperature trend (January 1979 through November 2024) remains at +0.15 deg/ C/decade (+0.21 C/decade over land, +0.13 C/decade over oceans).
December 2015 saw a strong El Niño event occurring, with significantly above-average sea surface temperatures in the central and eastern equatorial Pacific Ocean, considered one of the strongest El Niño events on record at the time.
Pot, meet kettle…
Indeed.
Oh dearie me.. The El Nino DENIER is back at it again. !
The 2023 El Nino saw a temperature increase for much longer than the 2015/16 El Nino.
Only a completely ignorant AGW-cultist doesn’t realise that
Everything you have said is based on your JUNK science.. or in fact ZERO science.
Below is the comparison of the 3 main El Nino events that have provided the ONLY warming in the whole UAH satellite data, adjusted for start point.
Anyone can clearly see that the 2023 El Nino started much earlier, warmed as much as the 1998 El Nino and has lasted a lot longer.
Even you are not dumb enough to claim any human CO2 causation.. are you ???
Your task now is to show some warming , other than at El Nino events, in UAH ocean data.
You have failed completely so far.
Seems you are incapable of reading the graph presented in the main topic.
It clearly shows exactly what Vinos has said.
Try again, dickie. !
The Great Greene can’t even type an o-accent correctly.
Did I miss any?
Vinós / Viños’s / Vino s / Vinos
I think you got them all.
“the planet is cooing”
Like a dove… or like yourself in the mirror.. ??
UAH takes its measurements from microwave frequencies. Path loss at microwave frequencies is very dependent on humidity, meaning the irradiance the satellite sees can vary from point to point. How does UAH measure the humidity over the path at each sample point? Does that measurement protocol for the amount of water vapor in the path actually provide a resolution down to the hundredths digit for their temperature estimates?
Didn’t the UN say that the oceans were boiling? They’re only a few blocks from the ocean. You’d think they would have sent someone down to check.
They sent the butler, but he got mugged so hasn’t been able to return with the results of his investigation yet.
“For the first time in 21 months, global ocean temperatures have returned to levels seen in December 2015—nine years ago.”
Levels which were record warm for December, resulting from a very strong El Niño.
“While it’s tempting to view every uptick in temperature as evidence of impending doom, the reality is far more nuanced.”
As people have been saying for the past two years. Yet here we are claiming a few days where the temperature is only the third warmest since 1979, as evidence that warming has returned to normal.
For reference, here’s the UAH Ocean monthly data up to November.
The more interesting question is how much will temperatures drop during the next year or so. I doubt we will know until then if the last couple of years indicate accelerating warming, or just a combination of natural variability.
Still HT moisture above the higher latitudes, so I suspect a slower drop than usual.
And of course, there is absolutely zero evidence of any human causation.
Wouldn’t you agree !! (if you don’t agree, then provide evidence)
Obvious that the temperature will drop back to at least 2020 to 2022 levels (whether in 1 year or 5) because the current spike is not part of any perceived, gradual long term trend. How else can I help you?
Things would be so much better if we had global cooling for the last 150 years, wouldn’t it. So much better.
No.
So, tell us again your silly premise.
He won’t. He always weasels away when asked to explain the implications of his cherished beliefs.
What premise? I was asked if things would be better if we had had global cooling over the last 150 years. My answer is no, I don’t think things would be better. It’s not a premise, just my honest opinion.
Jeff was, of course, being sarcastic.
But we can all agree that the warming since the LIA has been totally beneficial.
So has the use of hydrocarbon fuels.
So has the increase in atmospheric CO2.
—
ps. thumbs up for getting something correct for once. !
“Jeff was, of course, being sarcastic.”
Wow, I would have never realized that. Thanks for explaining it. The answer is still “no”.
I know I can explain the really simple things to you.
The harder things are way too much for you to grasp.
Then why are you worried about slight and unnoticeable warming?
“For the first time in 21 months, global ocean temperatures have returned to levels seen in December 2015—nine years ago. Let that sink in for a moment.”
December 2015 was the peak of a very strong El Nino, and was far warmer than anything previously measured. December 2024 is on the La Nina side of neutral.
All the ruler monkeys are making their presence known now.
I was noticing the same thing… what’s that saying… something about being over the target?
The triple-A lights up.
But still very much part of the El Nino warming event
Effect of the 2023 El Nino started earlier in the year than usual, and has been much more prolonged.
There is no evidence that humans have caused any part of this.
The 1998 El Nino actually raised the temperature about the same amount as the 2023 El Nino, 2015/16 had less effect.
Because there was cooling after the 2015/16 El Nino, both 2015 and 2023 El Ninos started at about the same base temperature.
“Effect of the 2023 El Nino started earlier in the year than usual, and has been much more prolonged.”
Keep talking, maybe you will figure out why people have been telling you this has been such an unusual event.
“The 1998 El Nino actually raised the temperature about the same amount as the 2023 El Nino, 2015/16 had less effect.”
Only if you accept the starting temperature keeps warming. Without that adjustment it looks like this.
ROFLMAO. Your ignorance astounds even me. !!
I have been saying all along that the 2023 El Nino has been an unusual El Nino event. You even quoted exactly what I said…
To see the effect of each individual El Nino, you have to start them at the same temperature.
Yes, in reality,1998 started lower but it produced a spike with pretty much the same peak increase as 2023. The 1998 El Nino also caused a step change that any normal person, without blinders on, can see.
Nothing to do with human causation though.
At least you have shown that the effect of the 2023 El Nino was far greater than the 2015 El Nino. Well done, dopey !!
And of course, there is absolutely zero evidence of any human causation.
Wouldn’t you agree !! (if you don’t agree, then provide evidence)
“Yes, in reality,1998 started lower but it produced a spike with pretty much the same peak increase as 2023.”
Yes, once you de-trend it.
“The 1998 El Nino also caused a step change that any normal person, without blinders on, can see.”
Normal people can see all sorts of things which are often not there.
“At least you have shown that the effect of the 2023 El Nino was far greater than the 2015 El Nino. Well done, dopey !!”
Once again bnice demonstrates what many people can clearly see – when he knows he’s losing an argument, he resorts to name calling and multiple exclamation marks.
The temperature spike across the last two years has been greater than it was in 2016 – that’s why it has gained so much attention. If it was entirely the result of the El Niño, as you claim, that raises questions. If it was not entirely caused by the El Niño, that raises other questions. If you think you know all the answers, you should be writing a paper on the subject, rather than throwing tantrums here.
“And of course, there is absolutely zero evidence of any human causation.”
Writing it in bold doesn’t make it any more true. You just keep ignoring all the evidence you are given, because in your world view there can be no evidence. I’ll ask again what sort of evidence you would consider acceptable. I’ve shown you the simple statistical evidence that the temperature rise is consistent with the rise in CO2. I’ve shown you how combining CO2 and ENSO can produce the appearance of your step changes. You will deny all of that as evidence, so I’ll ask again, – what evidence would you need to convince you that increasing CO2 can cause a rise in temperature?
“Normal people can see all sorts of things which are often not there.”
Like a climate crisis.
How did you manage such a load of meaningless gibberish ??
You haven’t given any evidence of human causation.
Nothing to ignore. !
You have shown a manufactured meaningless nothing. !
I haven’t de-trended anything, I have shown the effect of each El Nino by starting them at the same point.
Do you really continue to DENY that the 2023 El Nino started earlier and has lasted much longer? How does your imaginary CO2 warming cause that?
Anyone who looks at that chart and cannot see that the 2023 El Nino had far more effect than either the 1998 or 2015/16 El Nino is deliberately and mentally blind. Your own graph shows exactly the same thing.
Your DENIAL of the step change at the 1998 El Nino continues to show just how blinded you are to reality.
As for the extended 2023 El Nino that you have now tacitly admitted to…..
Certainly no evidence of anything humans have done, unless your manic cultism can create it..
I wonder what happened just the year before.. can you remember ??
“I’ve shown you the simple statistical evidence that the temperature rise is consistent with the rise in CO2.”
NO. you have deliberately used El Nino spikes and step changes every time you have posted a rising trend graph.
They are all you have.. they are NOT caused by human anything.
There is no evidence of CO2 warming in the UAH data, period., and no amount of childish anti-science wishful thinking will change that.
You have failed utterly and completely to show any evidence of any human causation.
“when he knows he’s losing an argument”
Only in your enfeebled little mind.
You are still totally unable to produce anything but fantasy evidence of any human causation.
You have agreed with me that the 1997/8 and 2023/24 El Ninos added a similar peak amount but the 2023 El Nino started earlier and has lasted longer.
You have agreed with me that the 2015 had a lesser effect than either the 1997/8 or 2023/24 El Ninos
You continue to use the spikes and step changes to create positive trends, showing that you know they are all you have.
Doesn’t sound like I’m losing the argument.
Now, where is that empirical scientific evidence of warming by atmospheric CO2.
And where is the evidence of CO2 warming in the UAH atmospheric data.
“Normal people can see all sorts of things which are often not there.”
And wilfully blind and ignorant people cannot see all sorts of things that are there.
Name them.
I doubt bellboy knows any “normal” people !
“I’ve shown you the simple statistical evidence that the temperature rise is consistent with the rise in CO2.”
Correlation does not necessarily mean causation. It might just be a coincidence that CO2 levels are increasing today while the temperatures are increasing.
After the 1930’s temperature high points, the temperatures cooled for decades down through the 1970’s, to the point that some scientists were fretting that the Earth was heading into another Ice Age.
CO2 was increasing continuously from the 1930’s to the 1970’s, but the temperatures cooled, they didn’t warm. No correlation between CO2 and temperatures during that period of time. So what’s different today?
“Correlation does not necessarily mean causation.”
As I keep saying.
“It might just be a coincidence that CO2 levels are increasing today while the temperatures are increasing.”
Could be. But it is evidence that CO2 could be a cause. And it’s a pretty big coincidence given this was predicted, and nobody comes up with a more plausible cause.
“After the 1930’s temperature high points, the temperatures cooled for decades down through the 1970’s”
The high point was in the 40s, there was about 30 years of slight cooling, at a time when pollution was increasing. There might be many reasons for this, but it has little impact on the overall correlation with CO2.
“Could be. But it is evidence that CO2 could be a cause.”
You keep saying correlation is not causation and then turn around and post something like this!
You have to know what the cause is BEFORE you can say that correlation is evidence or even “could” be evidence.
Climate science says CO2 is warming the oceans with IR “back-radiation”. And then climate science admits that IR doesn’t penetrate the ocean. That leaves only convection and conduction as the way CO2 could warm the ocean. Convection goes the other way – UP – and not down so it’s out as well.
So how much conduction of heat occurs from CO2 into the ocean?
Yep!
He won’t answer.
“You keep saying correlation is not causation and then turn around and post something like this!”
Yes I did. Well done for noticing. Correlation does not imply causation. It can be evidence for causation. That’s how statistical evidence works.
“You have to know what the cause is BEFORE you can say that correlation is evidence or even “could” be evidence.”
You do not. You can easily find a correlation, speculate it’s a cause and then look for reasons why it might be a cause.
But in this case, you already have the cause BEFORE you found the correlation. The Greenhouse effect, and the idea that rising CO2 would increase temperatures, were explained long before we had the experimental evidence from observing what happened to global temperatures after you increased CO2.
“So how much conduction of heat occurs from CO2 into the ocean?”
I’m not interested in trying to prove why your reductive arguments are false. I’m not a scientist, and if you won’t actually look for answers from scientists, why do you think I could persuade you?
Your argument, which has been around for years, seems to me to be
based on a very simplistic view of heat transfer. It ignores the dynamic nature of the oceans, waves, currents, winds etc. But to my simplistic way of thinking it just seems absurd to suggest that water would not be affected by atmospheric temperature. Why would you think a lake would freeze over on a cold night, but not on a hot night?
Well that was one great load of gobblygoop and kamal-speak.
Yes, it is very obvious you have never been anywhere near any science.
You are still totally empty of any actual scientific evidence…
… now we know why. !
“my simplistic way of thinking”
Yes.. we noticed.
Gee, who could have ever guessed?
It is not statistical evidence of anything. Time series may appear to have a high correlation, which suggests further investigation may be warranted, but that is all. The correlation neither proves nor suggests that there is a direct relationship between the two dependent variables shown in two independent time series.
Why don’t you tell us the statistical tests you have done that implies there is a relationship and what the relationship is.
If you don’t think it’s evidence why would you investigate further?
I’m not sure what you mean by two dependent variables or two independent time series.
“Why don’t you tell us the statistical tests you have done that implies there is a relationship and what the relationship is.”
I’ve told you many times – I can’t help you with your memory problems.
Here’s a simple one, using GISTEMP annual data, and dependent variables CO2, ONI, AOD and AMO. Apart from AOD, each variable is lagged by a year. CO2 is log2 of atmospheric CO2.
Result of a simple linear regression using the R lm function.
Call:
lm(formula = GISS ~ CO2t + ONI2t + AODt + AMOt, data = .)

Residuals:
      Min        1Q    Median        3Q       Max
-0.205085 -0.051093 -0.007976  0.053472  0.259310

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -19.76093    0.43500 -45.428  < 2e-16 ***
CO2t          2.37549    0.05209  45.607  < 2e-16 ***
ONI2t         0.08730    0.01014   8.611 1.41e-14 ***
AODt          0.07713    0.02421   3.186  0.00178 **
AMOt          0.25610    0.04260   6.012 1.53e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.08867 on 139 degrees of freedom
Multiple R-squared: 0.9464, Adjusted R-squared: 0.9449
F-statistic: 613.9 on 4 and 139 DF, p-value: < 2.2e-16

This is the raw copy from the summary – please don’t start whining about the number of decimal places printed.
Main points are that the result is statistically significant, with an R^2 of 0.94.
Here’s what the result looks like:
Here’s what it looks like if I remove CO2 as a dependent variable.
Call:
lm(formula = GISS ~ ONI2t + AODt + AMOt, data = .)

Residuals:
Min 1Q Median 3Q Max
-0.5134 -0.2730 -0.1090 0.2870 0.9499

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.07507 0.03059 2.454 0.0153 *
ONI2t 0.07989 0.04036 1.980 0.0497 *
AODt 0.05488 0.09635 0.570 0.5699
AMOt 0.73291 0.16441 4.458 1.68e-05 ***
—
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.353 on 140 degrees of freedom
Multiple R-squared: 0.1448, Adjusted R-squared: 0.1265
F-statistic: 7.9 on 3 and 140 DF, p-value: 6.607e-05

Result is still significant, but R^2 is just 0.13.
Arg – I’m too tired. I got the dependent and independent variables mixed up in the text. Temperature is the dependent variable, all the rest independent.
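For anyone who wants to reproduce the comparison, here is a minimal R sketch. The names (GISS, CO2t, ONI2t, AODt, AMOt and the data frame d) are just placeholders for the columns I used above; you would have to assemble the annual data frame yourself from the GISTEMP, CO2, ONI, AOD and AMO series, with the lags applied, so treat this as an outline rather than a finished script.

full <- lm(GISS ~ CO2t + ONI2t + AODt + AMOt, data = d)   # d = annual data frame
noco2 <- lm(GISS ~ ONI2t + AODt + AMOt, data = d)         # same model with CO2 dropped
summary(full)$r.squared    # about 0.95 with the CO2 term included
summary(noco2)$r.squared   # about 0.14 without it
anova(noco2, full)         # F-test of whether adding the CO2 term improves the fit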
You still don’t get it, do you? You are curve fitting to anomalies, i.e. ΔT’s, NOT temperatures. You are trying to trend an average rate of change which occurs across temperatures that have a large variation. You will NEVER discover a functional relationship between your independent variables and temperature doing that!
I have tried to point out to you before that this is like averaging accelerations between a group of Yugos and a group of Lamborghinis. First, that average tells you nothing about the speed each is traveling. Secondly, you lose any information about individual accelerations.
Ultimately, your trend has no value in determining anything!
roflmao
even more contrived assumption driven garbage ignoring all real climate drivers.
and using faked surface data to boot !!
This is the sort of stuff that fills your recycle bin.. or should
Delete it to avoid further embarrassment. !
Now, where is that empirical scientific evidence of warming by atmospheric CO2.
And please DO NOT pretend what you have just produced is anything but a load of parameter fitting garbage !!
The absolute epitome of ANTI-science.
It might be worth doing the same with the pre-1970 data. That should bring the fit line down, and increase the amplitude. The waveform fit looks decent.
The AMO effect is a bit surprising, considering how small the Atlantic is. Influence on the Arctic?
roflmao
contrived assumption driven garbage ignoring all real climate drivers.
This is the sort of stuff that fills your recycle bin.. or should
Delete it to avoid further embarrassment. !
Now, where is that empirical scientific evidence of warming by atmospheric CO2.
And please DO NOT pretend what you have just produced is anything but a clown act !!
He likes gaslighting real scientists and engineers, with years of training and experience in data measurement and handling, that black is white and white is black.
Much like his hero Stokes.
And then he expects to be taken seriously, and that the nonsense he claims has merit.
“But in this case, you already have the cause BEFORE you found the correlation. The Greenhouse effect,”
The issue is HUMAN-CAUSED warming. No one doubts the greenhouse effect. There is a LOT of doubt about man-generated CO2 increasing the greenhouse effect enough to even be measured, let alone distinguished from natural variation. What climate science, AND YOU, refuse to admit is that our measurement instruments don’t have the capability of determining actual cause and effect. “We don’t know” is the only valid physical science conclusion that can legitimately be reached.
“I’m not interested in trying to prove why your reductive arguments are false.”
About 70% of the earth’s surface is water. ANY actual increase in the global average temperature *has* to have an increase in water temps as *the* major contributor. That means that the process by which the ocean temperature is caused to vary needs to be known in order to provide proper weighting to the data points.
And then we have you! Admitting you are not a scientist but also showing that you feel you can lecture others on *science*. While being unable to even conjecture on how the biggest contributor to the global temperature average warms and cools.
An abject failure on your part.
“But to my simplistic way of thinking it just seems absurd to suggest that water would not be affected by atmospheric temperature. “
The issue isn’t what causes water to freeze (which is heat loss *from* the water to the air, not the same question as how the atmosphere transfers *heat* to the water).
All the effects you list, such as wave action, impact heat loss *from* the ocean to the air (think evaporation) and not heat loss from the atmosphere to the water, or the transfer of heat within a body of water (e.g. ocean currents).
You absolutely refuse to admit what the *real* main source of additional heat in the ocean actually is. Mainly because it undercuts the argument of human-caused temperature rise.
“No one doubts the greenhouse effect. There is a LOT of doubt about man-generated CO2 increasing the greenhouse effect enough to even be measured, let alone distinguished from natural variation.”
As there should be – that is why it’s evidence if you can show a measurable correlation between CO2 and temperature.
“And then we have you! Admitting you are not a scientist but also showing that you feel you can lecture others on *science*.”
So little self-awareness here. Tim is not a climate-scientist, yet he constantly says that climate-scientists are wrong. He is not a statistician, but he’s constantly saying that all statisticians and mathematicians are wrong.
“While being unable to even conjecture on how the biggest contributor to the global temperature average warms and cools.”
I can conjecture. I just wouldn’t want to imply my conjectures were any more scientific than yours.
As a starting point, any very simple model that doesn’t explain observations is probably wrong. I’ve seen this type of argument used by many pseudo-scientists, conspiracy theorists and the like. They have a simplistic model, which they claim proves something cannot happen. In my opinion this is illogical – a model can be used to show that something may be possible; it cannot be used to show that something is impossible. It’s always going to be possible that something in the model’s assumptions is wrong.
In this case you have a model of ocean heating which assumes there is no movement in the ocean, and that any infra-red radiation will stop in the first micrometer and therefore cannot be responsible for any warming below the ocean’s skin.
Conjecture 1.
The ocean moves. There are waves and currents. Any warming in the skin of the ocean will not stay there. It can shift down beneath the surface.
Conjecture 2.
Warming is not just about how much energy goes in, but also how much energy comes out. If the surface or atmosphere warms, that will reduce the rate of energy leaving the oceans and hence they will warm. This is the point about the temperature of water being affected by the atmospheric temperature. As you admit, a body of water in a cold atmosphere will freeze because of a higher rate of heat loss.
“You absolutely refuse to admit what the *real* main source of additional heat in the ocean actually is.”
You haven’t supplied your hypothesis yet – how can I admit it? What do you think caused the oceans to warm up at the same time as the land?
“So little self-awareness here. Tim is not a climate-scientist, yet he constantly says that climate-scientists are wrong. He is not a statistician, but he’s constantly saying that all statisticians and mathematicians are wrong.”
I have worked with measurements for over 60 years. As a mechanic, as a carpenter, and for over 50 years as an electrical engineer.
Those statisticians and mathematicians that ignore measurement uncertainty ARE wrong. And it doesn’t matter what field they are working in. Be it immunology, virology, climate, thermodynamics, surveying, etc.
“The ocean moves. There are waves and currents. Any warming in the skin of the ocean will not stay there. It can shift down beneath the surface.”
It can also be lost to the atmosphere via evaporation. A factor you ignore. You can’t even explain how heat sent into the depths of the ocean can cause the ocean SURFACE to warm!
“If the surface or atmosphere warms, that will reduce the rate of energy leaving the oceans and hence they will warm. “
More proof that you aren’t a scientist. The ocean radiates based on its temperature, not on the temperature of the atmosphere. As the ocean surface warms its radiation will go up by some exponent of the temperature. If it was the surface of a perfect black body it would be T^4. That means the rate of energy leaving the ocean WOULD GO *UP*, not down. Since warm air rises (less density) and warmer air rises faster (go ride in a hot air balloon sometime), convection from the ocean warming will GO UP, not down. Conductive heat transfer might go down BUT conductive heat transfer is the smallest component of heat loss from the ocean. Waves and currents *increase* evaporation and thus convection losses of heat; they don’t decrease them.
“You haven’t supplied your hypothesis yet – how can I admit it? What do you think caused the oceans to warm up at the same time as the land?”
See what I mean? You assume to come on here and lecture us about thermodynamics and you haven’t a real clue about how the physical world works!
“It can also be lost to the atmosphere via evaporation. A factor you ignore.”
I’m not ignoring anything – I’m not the one describing any model. I’m simply pointing out reasons why your model might not be the whole picture.
“You can’t even explain how heat sent into the depths of the ocean can cause the ocean SURFACE to warm!”
Your argument is that only the very top of the surface is warmed by IR. I’m making the conjecture that the very top of the surface will get mixed into the rest of the ocean. This would include the rest of the surface.
“The ocean radiates based on its temperature, not on the temperature of the atmosphere.”
Could you provide a reference for that?
Granted, I don’t have your expertise on the subject, but I thought that the net radiation depended on the difference in temperature between the body and its environment.
If the only factor was the water temperature, why would a body of water cool down more in a cold atmosphere than a hot one?
“If it was the surface of a perfect black body it would be T^4.”
Isn’t that only when it’s in thermal equilibrium?
Here’s a reference I found with a quick search.
P = e ∙ σ ∙ A· (Tr – Tc)^4
Where Tc is the temperature of the surrounding area.
https://sciencenotes.org/heat-transfer-conduction-convection-radiation/
If this is wrong, please point me to a better reference.
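(Looking at that reference again, I may have garbled the transcription. The usual textbook form of the net-exchange relation raises each temperature to the fourth power separately,

P = e ∙ σ ∙ A ∙ (Tr^4 – Tc^4)

rather than taking the difference first and then raising it to the fourth power. Either way, the net radiation still depends on the temperature of the surroundings, Tc, which was the point I was trying to check.)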
“See what I mean? You assume to come on here and lecture us about thermodynamics and you haven’t a real clue about how the physical world works!”
This is what I get for asking Tim what his hypothesis is. That is what he thinks is warming the oceans. It’s bad enough that he just gives the usual insults, but he also fails to actually answer the question.
Copy-pasting stuff you obviously don’t understand, and combining with gibberish.
Not very clever. !!
Then whining about how he isn’t treated with great deference.
He doesn’t even understand what the word “net” means. He probably didn’t even bother reading it.
Stop whining.
“Could you provide a reference for that?”
Go look up the name Planck.
“Granted, I don’t have your expertise on the subject, but I thought that the net radiation depended on the difference in temperature between the body and its environment.”
Nope. That’s CONDUCTION, not radiation. I’ve been over the different types of heat transport with you at least twice before. You just never learn anything.
“If the only factor was the water temperature, why would a body of water cool down more in a cold atmosphere than a hot one?”
Why do you think? “cool down more” means what? A rate? A value?
“Isn’t that only when it’s in thermal equilibrium?”
Why do you think I used the term “a perfect black body”? Do you *ever* read anything for meaning? And that is describing the black body as being in thermal equilibrium, not the system that the black body is a part of. My guess is you have exactly no idea what the term “black body” actually implies.
“P = e ∙ σ ∙ A· (Tr – Tc)^4″
You have no idea what “P” is, do you? As usual you didn’t bother to study anything, you just cherry picked something you think confirms your misconceptions. Go look up the term “radiant exitance”.
You have no idea of how the physical world works. But you still think you can lecture others on it.
Well that was a load of anti-science gibberish..
Have you been studying Kamal-speak ?
“Could be. But it is evidence that CO2 could be a cause. And it’s a pretty big coincidence given this was predicted, and nobody comes up with a more plausible cause.”
I can come up with a plausible cause: The same thing that caused the similar warming in the 1880’s and the 1930’s, and it wasn’t CO2 back then.
Anyone who looked at the cyclical nature of the climate and saw the warming in the 1880’s and the warming in the 1930’s, could make an easy prediction that there would be a third phase of warming coming in the 1980’s or thereabouts.
Before the bogus, Hockey Stick chart was created, all climate scientists had to go on was the written record whose temperature profile looks completely different from the bogus, bastardized Hockey Stick. Yet, you want to ignore all the written records, and put your money on a computer-generated chart that doesn’t come close to representing reality.
What’s bad is you and others of your knowledge have seen these written records, and have seen they don’t match up with the bogus Hockey Stick chart, yet you go with the bogus Hockey Stick chart.
I assume you do so because you want to promote CO2 as the Bad Guy, and that doesn’t work if you refer to the written temperature record which debunks the Hockey Stick chart claim that today is the hottest time in human history.
It was just as hot in the recent past, and you have seen the charts, but you reject them for a BIG LIE Hockey Stick chart. The written records have a completely different temperature profile than the Hockey Stick chart.
Willfully blind, would be my judgement.
“The high point was in the 40s, there was about 30 years of slight cooling,”
I see you are fixated on the bogus Hockey Stick temperature profile.
You are basing your conclusions on a BIG LIE. The Hockey Stick chart, including the instrument era, does not represent reality.
Do you think alarms would be raised by scientists over a possible slip into another Ice Age in the late 1970’s, if the cooling was “slight”?
The only reason you think it was “slight” is because you are looking at a bogus, bastardized Hockey Stick chart whose creators deliberately erased the warming of the 1880’s and the 1930’s, thus making it appear that the cooling after the 1930’s was “slight”.
That was the purpose of bastardizing the Hockey Stick chart in the first place. The Temperature Data Mannipulators wanted it to look like today is the hottest time in human history and they couldn’t do that if they allowed the 1880’s and the 1930’s to be just as warm as today. That would blow up their whole CAGW narrative. If it was just as warm in the past with less CO2 in the air, then that must mean that CO2 has had little effect on the temperatures of today because although there is more CO2 in the air, it is no warmer today than in the past. Therefore, CO2 is a minor player in the Earth’s climate.
The “Ice Age Cometh” debunks your claims of “slight” cooling and debunks the bogus Hockey Stick chart as not representing reality.
You are living in a Hockey Stick fantasy world. Reality is represented by the written, historic temperature records which refute everything you say.
Dream on, dreamer.
Amen. This is the world of the trendologists.
“I can come up with a plausible cause: The same thing that caused the similar warming in the 1880’s and the 1930’s, and it wasn’t CO2 back then.”
And that cause is?
“Anyone who looked at the cyclical nature of the climate and saw the warming in the 1880’s and the warming in the 1930’s, could make an easy prediction that there would be a third phase of warming coming in the 1980’s or thereabouts.”
And when would they have predicted cooling, based on this cycle?
From the rest of the rant, I don’t think Abbott likes the current data sets.
You have to admit, he makes a valid point about the ice age scare contradicting the claim of ‘slight cooling.’
And it wasn’t just the media pushing this narrative – it was also put forward by scientists and experts at the time. Many here say they remember the scare vividly.
“An enduring popular myth suggests that in the 1970s the climate science community was predicting “global cooling” and an “imminent” ice age, an observation frequently used by those who would undermine what climate scientists say today about the prospect of global warming.

A review of the literature suggests that, to the contrary, greenhouse warming even then dominated scientists’ thinking about the most important forces shaping Earth’s climate on human time scales. More importantly than showing the falsehood of the myth, this review shows the important way scientists of the time built the foundation on which the cohesive enterprise of modern climate science now rests.”
https://ams.confex.com/ams/pdfpapers/131047.pdf
What the media focus on is sensationalism.
That is what sells.
Not the truth.
Quoting lies.
Denial of what was a major part of climate science in the 1970s.
Can’t get any funnier. !
Anthony, see bnice2000’s comment.
“But it is evidence that CO2 could be a cause”
WRONG.. It is not evidence of anything… just baseless anti-science speculation.
You have failed completely to provide any evidence that CO2 could be the cause.
You have shown nothing statistical that connects CO2 and temperature together in a relationship.
The fact that time series of different variables can be jiggered to appear correlated proves nothing.
You want to prove something, make CO2 the independent variable and temperature the dependent variable. Then show the mathematical relationship that ties them together.
Here you go.
Call:
lm(formula = CO2t ~ GISS, data = .)
Residuals:
Min 1Q Median 3Q Max
-0.127232 -0.030878 -0.001143 0.038156 0.092139
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 8.328168 0.004035 2063.89 <2e-16 ***
GISS 0.385607 0.010874 35.46 <2e-16 ***
—
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.04762 on 141 degrees of freedom
(1 observation deleted due to missingness)
Multiple R-squared: 0.8992, Adjusted R-squared: 0.8985
F-statistic: 1257 on 1 and 141 DF, p-value: < 2.2e-16
If this relation holds you would expect every degree of warming to increase log2 CO2 by 0.39. R^2 is 0.90, and it’s significant.
But I have doubts about the validity of this model. One being that inevitably the CO2 would be expected to jump about with temperature, when in fact it increases smoothly. It’s a lot easier to believe that annual temperature is determined by CO2 plus noise. More difficult to imagine CO2 being temperature plus noise, but the noise conspires to cause a smooth increase in CO2.
Remember in this chart the red line is the predicted CO2 levels for a year based on temperature, the black dots are the actual CO2 levels.
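One way anyone could probe that doubt is to compare the residuals of the regression in each direction. A minimal R sketch (m1, m2 and the data frame d are just placeholder names for the two fits above):

m1 <- lm(CO2t ~ GISS, data = d)   # CO2 as the response
m2 <- lm(GISS ~ CO2t, data = d)   # temperature as the response
acf(residuals(m1))                # if the leftover variation in CO2 were genuine noise, this should die away quickly
acf(residuals(m2))

If the leftover variation in the CO2-on-temperature model behaves like a smooth drift rather than independent noise, that is the implausibility I was describing.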
Oh look, a hockey stick.
How unusual.
It’s still just a correlation graph. You’d likely get the same thing between temperature and the US inflation rate.
Yes, that’s the point I was making. I think it’s unlikely that the rise in CO2 was primarily caused by the rise in temperature.
What you think is totally irrelevant.
It is based on scientific ignorance.
You mean the graph that Jim insisted I draw for him. Yes, it shows a rise in CO2. Are you saying CO2 hasn’t risen?
It is a totally contrived piece of anti-science. Well done !! 🙂
A juvenile, contrived, assumption driven load of meaningless garbage.
Using totally FAKE temperature series as well.
Cannot get any more RIDICULOUS.
Well done. You have reached peak stupidity!….
…….. or have you !
Just wait…
roflmao.
A contrived anti-science assumption driven load of bollocks
Delete before you cause yourself more embarrassment !!
I do not think it shows what you think it shows.
Jim asked for CO2 as the independent variable and temperature the dependant.
That doesn’t seem the right question to ask, either – ln(CO2) would be preferable as the independent variable.
“I do not think it shows what you think it shows.”
If it shows anything, it’s how easily triggered some people are. I’m still not sure if they know what they are arguing about.
“Jim asked for CO2 as the independent variable and temperature the dependant.”
Yes. Checking back that’s what he asked for, and I misread it. It’s the peril of trying to answer multiple threads. I read his comment after I’d already produced multiple graphs showing temperature as the dependent variable, and had just assumed he wanted one showing it the other way round in order to claim that temperature was causing the rise in CO2.
So I better do what he asked, and see how well bnice and karlo take it.
It does get a bit John Cleese and Michael Palin after a while, doesn’t it?
Sorry, misunderstood what you were asking, before. Here’s the second go.
Call:
lm(formula = GISS ~ CO2t, data = .)
Residuals:
Min 1Q Median 3Q Max
-0.255337 -0.088427 -0.005416 0.080194 0.298231
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -20.30858 0.55863 -36.35 <2e-16 ***
CO2t 2.44134 0.06692 36.48 <2e-16 ***
—
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1177 on 142 degrees of freedom
Multiple R-squared: 0.9036, Adjusted R-squared: 0.9029
F-statistic: 1331 on 1 and 142 DF, p-value: < 2.2e-16
This is the OLS linear model. GISS is the annual Gistemp anomaly and CO2t is the log2 of the average annual CO2 from the previous year.
The mathematical relationship would be
Anomaly = -20.3 + 2.44 * log2(CO2) + ε
The relationship suggests 2.44 ± 0.13°C warming for each doubling of CO2.
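The ± figure is just the 95% interval on the CO2t coefficient. If the fitted lm object above is called fit, then:

coef(fit)["CO2t"]       # about 2.44 °C per unit of log2(CO2), i.e. per doubling
confint(fit, "CO2t")    # roughly 2.31 to 2.57, i.e. 2.44 ± 0.13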
Here’s the direct Log CO2 Vs Anomaly comparison
The Asimovian “that’s funny” items on the chart are the trough and spike at the left side.
They are far more pronounced than the equivalent features in the time series.
Yes, that’s where temperatures were warming in the 40s then cooling whilst CO2 was rising. They don’t have as much impact on the overall trend as supposed because that was when CO2 was rising more slowly.
The model I used in another comment, when I include ENSO and the AMO, removes quite a lot of that variance.
I have to repeat that I am not claiming any of this on its own proves that CO2 was the cause, just that it demonstrates that nothing seen is incompatible with it being a cause.
The 312ppm spike was in the 1940s, and there is also the inverse correlation from 290 ppm to around 308ppm (1890 – 1930 give or take a bit). That’s quite a contrast to the rather high correlation above 320ppm (1960 on). What is the cause? Dunno, but it certainly falls into the “that’s funny” category.
It’s more that they are close together and so counteract each other. A spike will never have as much impact as a prolonged peak or trough, either.
Starting the data from 310 ppm (late 1930s) reduces the slope and worsens the fit.
The amplitude was also much higher up to and including the 1940s spike. That’s another “that’s funny”.
Is that the chart? The early amplitude is still quite a bit higher than later.
Yes, it’s quite a good fit above 320 ppm (1960-ish).
“What is the cause? Dunno, but it certainly falls into the “that’s funny” category.”
“The amplitude was also much higher up to and including the 1940s spike. That’s another “that’s funny”.”
What it *should* mean from a physical world viewpoint is that a linear regression is not appropriate for any kind of a long term projection. A wavelet or Fourier analysis of the data will reveal some low frequency and high frequency components that are not accounted for in the linear regression. A wavelet analysis that accurately locates them in time (although probably not in amplitude) would be quite revealing and would require “climate science” to address the causes.
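If anyone wants to actually try it, a bare-bones starting point in R would be a periodogram of the residuals from the linear fit (res here is just a placeholder for those annual residuals; a proper wavelet analysis would need a package such as WaveletComp):

res_ts <- ts(res, frequency = 1)               # annual series of residuals
sp <- spectrum(res_ts, plot = FALSE)           # raw periodogram
plot(1 / sp$freq, sp$spec, type = "h",
     xlab = "Period (years)", ylab = "Power")  # peaks mark cycles the straight-line fit misses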
This is asking climatology to give up their cherished magic average tool.
Never gonna happen.
It’s been quite some time since I’ve done any Fourier analysis, but isn’t it usually applied to the time domain?
It looks like you think so as well
That’s part of the reason for graphing temperature anomalies against the log of CO2 concentration, instead of the year, as is usually done.
The regression is a reasonably good fit in the later 2/3 of the range, but rather poor in the first 1/3. Well, it’s a reasonable fit in the early part, but extremely noisy.
“What it *should* mean from a physical world viewpoint is that a linear regression is not appropriate for any kind of a long term projection.”
There is no long term projection. It’s just covering the available data up to 2023. I doubt 2024 will be close to the projection.
I’m sure the model could be improved using different regressions, but I’d hardly suggest the result was not appropriate.
“A wavelet or Fourier analysis of the data will reveal some low frequency and high frequency components that are not accounted for in the linear regression. ”
You’ve been saying that for years, yet you still decline to test it. To me a Fourier analysis just suggests curve fitting. It will match the data perfectly, with zero explanatory power.
This was the earlier comment
https://wattsupwiththat.com/2024/12/20/ocean-temperatures-and-climate-hysteria-a-lesson-in-perspective/#comment-4010724
It’s the same model and data, just shown chronologically. It certainly doesn’t fully predict the 1940s spike and other parts of the earlier data. But it’s hardly intended to be a complete model. Also the data is going to be less accurate the further you go back, e.g. CO2 levels are based on ice core data before the start of the Keeling Curve.
Cool. Thanks.
Here is an interesting bit of trivia.
If you create a series comprising the differences in temperature anomalies from the previous year (mine is sourced from ERA5, 1851-2022), the y intercept is -0.25, and the slope is 0.00013.
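In R terms (with anom as a placeholder for the vector of annual anomalies in year order), that is just:

d   <- diff(anom)        # change from the previous year
yrs <- seq_along(d)
coef(lm(d ~ yrs))        # intercept and slope of the first-difference series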
Stoppit man, you sound like an idiot. Do you have any friends?
Obviously nobody who taps you on the shoulder.
You get a better perspective if you subtract out the seasonal variation, since the present is a seasonal low. I subtracted for each day of year, the mean for that day from 1991-2020:
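Something along these lines, for anyone who wants to reproduce it (sst here is a placeholder daily data frame with date and temp columns):

doy  <- format(sst$date, "%m-%d")                          # calendar day as "MM-DD"
base <- sst$date >= as.Date("1991-01-01") & sst$date <= as.Date("2020-12-31")
clim <- tapply(sst$temp[base], doy[base], mean)            # 1991-2020 mean for each calendar day
sst$anom <- sst$temp - clim[doy]                           # anomaly relative to that daily mean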
Thanks Nick.
You have shown exactly what Vinos was saying…. was that your intent ?
That the effect of the large 2023 El Nino + HT event is gradually subsiding.
The temperature anomaly went up.. now its coming back down. !
ps.. I have saved the image under the name “El Nino Subsiding” for future reference.
No uncertainty limits in your spaghetti, Stokes.
FAIL.
No uncertainty limits in Vinós’s chart either. Is his argument a FAIL also?
It shows what he needed it to show.
Weasel. You think averaging diverse air temperatures eliminates uncertainty.
And you completely ignored/missed the point of Kip’s comment, not a surprise.
“It shows what he needed it to show.”
Translation: when Karl agrees with the conclusion he doesn’t care about uncertainty; when he doesn’t agree, the lack of uncertainty in the graph is a FAIL. Yet in this case, Stokes’s and Vinós’s graphs are showing the same thing.
“You think averaging diverse air temperatures eliminates uncertainty.”
Stop lying. I’ve told you numerous times why that’s wrong.
No, you generate lots of weasel words when confronted about what you really believe.
I can’t help you any more. If you think I believe it’s possible to eliminate measurement uncertainty, and don’t accept my arguments for why that’s wrong, you will just have to continue to live in your own delusional world.
Poor Bellboy. FAILED again. !
Weasel — it is YOU who pushes this 10mK temperature uncertainty garbage. This is effectively zero, which you might understand if you had any real metrology experience.
“Weasel”
You’re like a 2 year old who’s just discovered a new word.
“it is YOU who pushes this 10mK temperature uncertainty garbage.”
When have I pushed that – be specific, and 0.01K of uncertainty, even if that were possible, is not zero uncertainty.
More weaseling — 10mK or thereabouts is COMMONLY quoted in climastrology literature, to include the UAH. You’ve never mounted a soap box and called out these ridiculous numbers.
And yes it is effectively zero, which you would instantly understand if you had any real experience.
What !!!.
another load of meaningless gibberish from bellboy, well .. I’m not stunned. !
Of what use is a global average temperature when global circulation in atmosphere and oceans (and weather) is driven by density differences which
It is therefore important to know how the averages change. Does the average temperature increase due to increases in the lower temperature regions while the warmer regions remain at the same temperature or do both regions increase in temperature equally? In the first case one would expect a reduction in intensity of weather events while in the second case the intensity would remain as it is at present.
The real problem is that averaging diverse temperatures tells you absolutely nothing.
Yeah, like claiming 2024 is the hottest time in human history, when in my neck of the woods, we have had no record heat, and it is certainly not the hottest time in our history, not even close.
Climate science as a whole will apparently never understand this.
It’s even worse than that. Averaging an intensive property like temperature is a physical nonsense.
Tellingly, the usual CAGW shills who infest this site only ever use average temperatures rather than actual, measured temperature series.
They cling to it like flies on honey.
COVID-19 demonstrated how politicians, of whatever discipline, love to control and set their agendas regardless of what the historic truth constantly tells us – ‘we just do not know’. Each of us barely lives long enough into old age to know what real climate change is or may be, let alone why it is that it never rains but pours in life (as a very old adage puts it).
Rather than trying to scare people into submission perhaps our politicians should concentrate on listening to what historic wisdom tells us about human nature before shouting their mouths off in panic at every opportunity.
When was the last ban the bomb march and all that was about?
I chuckle whenever I see ‘climate scientist’ in articles. Is that really a thing? Can you actually get such a degree?
If there is such a thing let me know. I don’t believe there is one. Climate is perhaps the singular phenomenon that encompasses EVERY scientific discipline, and climate models are (I suspect) written by coders with no more than one scientific credential, if any. Models are, of necessity, a mashup of all the various scientific disciplines.
More trainwreck than science.
While the media pounced on this as evidence of human-caused warming…
If human-induced climate change was responsible for the early 2024 warming, then we humans must also be responsible for the observed end of year cooling below 2023 levels.
Well done humans….
“Blustering doesn’t provide any evidence.. maybe try something else?”
Now, how about we look at the responses of by far and away the most “blustering” poster on WUWT.
And by inference the most hypocritical one ….
That ever so nice man’s “Blustering” responses to R Greene:
“roflmoa.. not science , then
You have yet to produce any real scientific evidence of CO2 causing any warming.”
“No evidence again, hey dickie?
Just yapping and doing the Walz arm flap”
“Poor consensus brain-washed dickie-boi… All you do is YAP.
Your reputation for avoiding presenting science of any sort is legendary.
You have a reputation below that of a dead mullet.
Do I have to paste those three question…
… so you can continue to RUN AWAY !!!”
“Nothing you have ever produced has shown even remotely that CO2 causes warming.”
“And every name on the Oregon Petition has more scientific knowledge and credibility than you will ever have, dickie…
. even the “Mickey Mouse” names that go snuck in by the AGW cultist you worship.
Blustering doesn’t provide any evidence.. maybe try something else?”
“Oh dearie me.. The El Nino DENIER is back at it again. !
….
Only a completely ignorance AGW-cultist doesn’t realise that…..
Everything you have said is based on your JUNK science.. or in fact ZERO science.
…….
Even you are not dumb enough to claim any human CO2 causation.. are you ???
Your task now is to show some warming , other than at El Nino events, in UAH ocean data.”
“Seems you are incapable of reading the graph presented in the main topic.
It clearly shows exactly what Vinos has said.
Try again, dickie. !”
“Like a dove… or like yourself in the mirror.. ??”
Hark #2 Oxy’s “Blustering” response to nyolci….
“Come on nikky, tell us what we “DENY” that you can provide solid scientific evidence for.
That means you have to actually provide that evidence, not just mindless bluster and Walz-like hand flapping, and gibbering with mindless Kamal-speak.
Tim just happens to be absolutely correct, it is you that is showing your brain-washed ignorance…. as always.
Hark #3 Mr nice’s response to Bellman ….
“Still zero mathematical understanding.
You need to finish Junior High, bellboy !”
“There’s that lack of mathematical understanding.. Well done. !”
“You truly are displaying your mathematical ignorance today , bellboy !!”
“Not my discovery,
Everyone knows about water currents in the oceans.
Sorry you choose to remain ignorant.”
“And DENIAL of the step change associated with the 1998 and 2015 El Nino..
Just deliberate ignorance.”
“So are in data DENIAL , ok !
Blinders off little child. !”
“Oh dear, still in DENIAL that the zero trend period from 2001-2015 is a step of about 0.3C higher than the zero trend period from 1980-1998.
……..
Sadly pathetic.”
“Do you want to put a whine with that whinge ??
Maybe if you stopped acting like one.”
“Denial of the step change that is right in front of your eyes… bizarre !!
l would have to go more into the workings of El Nino events… but for someone of your limited intelligence, it is not worth the effort.
……….
Or you could run away from posting evidence of human causation, like you always do.”
“And still averaging to yearly data when you know El Ninos effect different parts of the year each time… sad. !!”
“Because you seem totally unaware, of anything… the planet has been doing this for millions of years.
Wow, some red thumber doesn’t know how the planet’s oceans operates.
Bizarre, but not unexpected… very dumb person. !”
“I know I can explain the really simple things to you.
The harder things are way too much for you to grasp.”
“ROFLMAO. Your ignorance astounds even me. !!
At least you have shown that the effect of the 2023 El Nino was far greater than the 2015 El Nino. Well done, dopey !!”
“How did you manage such a load meaningless gibberish ??”
“Only in your enfeebled little mind.
Doesn’t sound like I’m losing the argument.”
“And wilfully blind and ignorant people cannot see all sorts of things that are there.”
“I doubt bellboy knows any “normal” people !”
“bellboy still thinks that monster under its bed is real.”
Note the ad-homs that don’t appear (if they did, then they would be banned) from the people the ever so nice man is responding to (without “blustering”), mind you (sarc)
Cont.
There are on this thread alone 28 responses from bnice2000 and only one that I can see that is NOT “blustering”
“Doesn’t sound like I’m losing the argument.”
Winning any “argument” isn’t scored on the basis of blustering/ad-homs/bolding and outright thread-bombing the argument.
But as that seems to be your measure of it then please carry on as it enables me to while away a few minutes in amusement and it really, really does win your argument Oxy (sarc).
And by inference that of WUWT.
And, you’re welcome, as I know that you are un-self-aware, to have enlightened you as to the character traits that you appear to hold, via the evidence presented above.
Then, bringing up the rear to mr niceguy, we have the yapping of Mr Karlomonte, giving us the benefit of his astounding intellect….
“He’s very impressed with … himself.”
“Same old nasti.”
“Weasel.”
“Yep!”
“Weasel.”
“Stop whining.”
“Weasel.”
“There is a reason he earned the moniker bellcurvewhinerman.”
“Indeed.”
“The Great Greene can’t even type an o-accent correctly.”
“All the ruler monkeys are making their presence known now.”
“The triple-A lights up.”
“No uncertainty limits in your spaghetti, Stokes. FAIL.”
It seems he likes “Weasel” .
Let me ask:
Do you think that that razor sharp invective has added to the discussion?
That, like themanwhowouldbenice, you reckon you have “won the argument”?
Is that what this place has become (Charles?).
To those without any ideological skin in the game that is of course rhetorical.
You are just venting some derangement upon those you cannot gainsay, as though it matters a jot, other than projection.
Are you done yet? Answer the question.
Are you done spewing yet?
Another one to add to the list…
“Are you done spewing yet?”
One of your best yet.
Brilliant!
Depends if you add more
Come on banton, maybe you can explain how averaging air temperatures is a valid procedure.
Your ruler monkey pals certainly can’t.
“Your ruler monkey pals certainly can’t.”
At least a question, rather than invective.
In the same way as it is a valid procedure in the averaging of any quantity.
It ends up being more accurate.
https://timharford.com/2019/08/the-strange-power-of-the-idea-of-average/
““While nothing is more uncertain than a single life, nothing is more certain than the average duration of a thousand lives.” The statement is often attributed to the 19th-century mathematician Elizur Wright, who not coincidentally was a life insurance geek. But buried in the aphorism is a humdrum word concealing a powerful idea: the “average”.”
Change “duration of a life” to an average temperature at a given location – and certainly the average of tens of thousands of temperatures will give a more certain estimate of the global temperature.
It’s just statistics 101.
That you don’t like it and keep carping on about it in tandem with Gorman et al does not a fallacy make.
To listen to you, it would seem that we are so uncertain that we may all have disappeared down the rabbit-hole, never to emerge again, as you have done.
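The statistics-101 point in a couple of lines of R, for the idealised case of independent random errors at least (the numbers are made up):

set.seed(42)
x <- rnorm(10000, mean = 15, sd = 2)   # 10,000 noisy readings around a 'true' value of 15
mean(x)                                # lands very close to 15
sd(x) / sqrt(length(x))                # standard error of the mean, about 0.02 – far smaller than the spread of any single reading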
“It’s just statistics 101.”
So is this:
Max temp 30.0ºC, min temp 0.0ºC, = average 15.0ºC
Max temp 29.9ºC, min temp 0.2ºC, = average 15.1ºC
Averages indicate nothing.
Didn’t think you could form a coherent answer, I was right: “stats 101” doesn’t cut it.
How is averaging diverse air temperatures ANYTHING like making random samples of a single fixed population?
“In the same way as it is a valid procedure in the averaging of any quantity.
It ends up being more accurate.”
I’ll never understand how so many people can get this so wrong!
If you fire four shots at a target and they all hit 3 feet away from the bullseye arrayed equally around the circumference, no amount of averaging is going to help the accuracy.
If I am laying out a drainage ditch that is 100 feet long, using a 6′ tape measure to measure my depth every 10′, and the tape has a systematic bias of +1/8″ per reading, no amount of averaging is going to help my accuracy of depth at the end of the ditch. After 10 depth measurements along the ditch I will be off 1.25″ in depth at the end of the run. No amount of averaging is going to help that accuracy.
The *only* time you might help with the accuracy is if you have taken multiple measurements of the same thing using the same instrument under the same condition. If you can then assume that all of the measurement uncertainty is random and Gaussian you might be able to justify using the average value as the most accurate one. But you must be able to justify those two assumptions. The measurement uncertainty of a liquid-in-glass thermometer is *not* guaranteed to be symmetrical because of friction in the glass tube and gravity. Assuming all the measurement uncertainty contribution from the physical design of the thermometer is totally random and Gaussian is, therefore, NOT justified. You will find the same thing for almost all instruments. Even a Stevenson screen whose paint has weathered will have a different measurement uncertainty in the daytime than it does at night. Therefore the assumption that all measurement uncertainty is random and Gaussian is not justified.
Yet climate science *always* assumes that all measurement uncertainty is *always* random and Gaussian and therefore cancels, even systematic uncertainty across multiple instruments.
You are a prime example of someone who believes that meme apparently.
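The distinction is easy to demonstrate with a couple of lines of R (purely made-up numbers):

set.seed(1)
true_val <- 20
reads <- true_val + rnorm(100000, 0, 0.5) + 0.3   # random noise plus a fixed +0.3 systematic bias
mean(reads) - true_val                            # stays near +0.3 – the bias never averages away
sd(reads) / sqrt(length(reads))                   # only the random component shrinks with n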
“The *only* time you might help with the accuracy…”
What if you take millions of measurements, over years, with methods having many, many, varied kinds of “systematic” errors? They can be big, or not. They can be constant, or not. As the number and variety of those measurements increases, would the weighted averages and trends of them become more or less accurate? Follow up question. What is an example of an evaluation that does this, and is discussed here?
Hint: Try not to deflect by putting terms like “all”, in my mouth.
This is your chance to crawl out into the sunlight. Take it.
Uncertainty is not error, blob, but then you already didn’t know this.
Then he doubles down on his absurd claim that averaging reduces non-random uncertainty (or all “error” in his strange world).
And km “doubles down” on his deflection. Remember Brave Heart? “Answer the fookin’ question”.
What ever are you ranting about now, blob?
Systematic bias ADDS to measurement uncertainty. Systematic bias is *not* usually +/-, it is usually either + or -.
We’ve been down this road before. You somehow think a specific design of instrument will *always* have a random systematic bias, sometimes negative, sometimes positive, and that they will cancel. That is VERY seldom the case. A major component of calibration drift is heat. Heat very seldom causes random drift in *ANY* COMPONENT. Resistors expand as they heat, they very seldom contract. Capacitors with dielectrics typically see the dielectric constant increase with temperature, which causes the capacitance to drift higher. And on and on and …..
These changes never return completely to zero even after being unpowered. Thus the systematic bias for a specific design usually goes in one direction, whether for intermittent operation or constant operation.
Take the paint on a Stevenson screen. That has an impact on the ambient temperature in the screen and thus is a systematic bias. As the sun bakes the paint it very seldom will become *more* reflective; over time it will absorb more and more heat and raise the ambient temperature. You simply can’t assume that paint changes will cancel out over multiple stations, it’s not a valid physical assumption.
Even the glass in an LIG thermometer is affected over time by heat. Very seldom does the measuring tube change randomly, usually over time heat will expand the tube permanently causing a systemic bias.
Metal rulers left in the heat will expand permanently. Cold doesn’t seem to cause permanent contraction however, that’s probably because materials like metals can get less dense (i.e. expand) under heat but have a hard time getting *more* dense under cold (i.e. shrink). I’ve seen metal yardsticks warp in the heat of the sun but I’ve never seen one warp from laying in the snow.
You have a blind spot in thinking that everything is random, Gaussian, and cancels. A lot of things just don’t work that way in the real world.
If you think human life expectancy (a real number) and meteorological temperature (dimension in kelvin or degrees Celsius) are remotely comparable you have nothing worthwhile to contribute.
Banton is the King of the Inappropriate Analogy.
“Change “duration of a life” to an average temperature at a given location – and certainly the average of tens of thousands of temperatures will give a more certain estimate of the global temperature.”
The duration of a life is an extensive value. You can add those and get something physically meaningful. Temperature is an intensive value, you can *NOT* add those and get a physically meaningful answer. And it doesn’t matter if you add 2 values or 10,000 values.
Take Las Vegas and Miami for instance. If each of those measures 100F on the 4th of July at noon does that indicate that the average of 100F tells you anything physically meaningful about the climate at each location? Does it tell you anything meaningful about the temperature and climate in New Orleans that is in between the two locations?
It’s not just statistics 101. It’s PHYSICAL SCIENCE 101. In statistical world it is assumed that numbers is just numbers. You can do anything you want with the numbers, even ignoring uncertainty intervals associated with the numbers. In Physical Science world, those numbers mean something about the physical world. They must be used in a manner that maintains and clarifies the properties of that physical world.
In the same way as it is a valid procedure in the averaging of any quantity.
You don’t even read what you reference.
“A forecasting model that is correct on average may be a very dangerous model indeed.”. WOW!
Read this article. https://www.buildingtheelite.com/average-fails-everyone/
Anyone trained in experimental science and/or engineering has this drilled into their brains. You don’t design experiments, buildings, bridges, electronics, etc. without serious attention being given to variances and uncertainties. These qualities define the operational profiles that can be expected.
You obviously have not learned or practiced proper measurement protocols.
Replying to yourself is a major kooksign.
You have definitely LOST the argument.. and the plot. !
Must have been hilarious watching you put your massive whinge together.
Yes, we know, Oxy.
Your definition of a “win”
Is to rant, ad-hom (in bold) and thread-bomb.
As I’ve told you, there is no “evidence” that you would ever accept.
You even post up stuff that proves the exact opposite of your contention.
(graph showing LWIR shifting to shorter wavelengths).
You wear it like a badge.
Oh, hang on …. I spotted some self-awareness.
I do hope you nurture it.
Indeed I did find it “hilarious”
(haven’t used that in a few posts)
Nice to see it return.
As expected, you FAILED the evidence test…. yet again. !
Well done. !
What a nutter.
roflmao.
What a monumental WHINGE !
Hilarious. !!
Totally evidence free .. as always. !!
An epic rant, to be sure.
And the expected response.
Thanks guys!
More whine from Banton. !
And the yapping terrier behind the attack-dog chimes in.
You poor little demented chihuahua
ok, an effeminate mincing poodle then !!
The evidence is what you typed.
That is the lack of yours and the hypocrisy in accusing others of what you do in spades.
As I said, you really are totally un-self aware.
And as you would say …
ROFLMAO
Not forgetting the bolding of course
ROFLMAO
Oh, forgot … hilarious
Why be so stupid as to paint a target on your forehead. !
Guess it helps hide the big “loser” sign . !
With the “L” that is already up there!
You mean the FACT that not one of you mindless AGW clowns can produce one piece of empirical scientific evidence to back up even the most basic tenet of your cult-religion?
Care to try, rather than cry like a little baby ??
“Please provide empirical scientific evidence of warming by atmospheric CO2.”
You will need to figure out what those three words in bold mean first, because it is obvious you don’t have a clue.
““Please provide empirical scientific evidence of warming by atmospheric CO2.””
7 hours later…… And Banton FAILS AGAIN !!
I’m in the UK, oh nice one.
It tends to have a different time zone.
Is that another thing you are ignorant of and are incapable of understanding?
You have still FAILED COMPLETELY !!
It is your only choice. !
And nikky called us “Deniers”, a horrendously ugly insult… deliberate and baseless
Perhaps you could answer
“Come on Banton, tell us what we “DENY” that you can provide solid scientific evidence for.
That means you have to actually provide that evidence,
..not just whinge and carry on like a little school-girl.
Oh and by the way
You have yet to produce any real scientific evidence of CO2 causing any warming.”
I followed the advice of michel, who got fed up with this stuff. There is an add-on called uBlock, which works in all browsers but Chrome, and is the basis of apps like AdBlock. You can nominate things to block; just add the filter
wattsupwiththat.com##.comment-author-bnice2000
and you just won’t see his comments anymore. I blocked karlomonte too. It greatly improves the threads.
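Presumably the same pattern works for any commenter name; for the second one it would be something like

wattsupwiththat.com##.comment-author-karlomonte

though the exact class name may differ.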
Poor Nick, unable to answer when checked.
He just RUNS AWAY.
Pathetic. !!
Thing is Nick, everyone else can see your lack of anything worthwhile to back up anything you say.
All you do is highlight your ignorance for everyone else to see.
Well done, and thank you. 🙂
Stokes sucking up to Banton is positively revolting, but hardly unexpected.
And your little nip at the heels to the rear of your leader is likewise expected.
Is there some sort of bromance going on?
Just asking.
Clown.
Poor little worm, Banton.. let Nick lick your feet.
Then you can lick his. !!
YES! I made the Nitpick Nick killfile list!
It means you get to destroy all his nit-picks and he has no come-back.
Not that he ever does anyway….
Great fun ! 🙂
Oh God. It’s pathetic.
It seriously is pathetic, isn’t it
These precious little petals can’t stand to be put in their place and shown what idiots they are
Diddums !!
Thanks Nick:
I’ll consider it but part of the fun is seeing those 2 idiots exhibit the psychopathy of denial to the extreme.
Bit of an amateur Psychologist you see.
The whole forum (as you know – bar a few brave/patient contributors) is beyond any sort of rational thinking and is mainly a means to vent anger and hug each other in their shared world-view, and exert some sort of perceived pushback due to their lack of control of it.
Thing is, I don’t have your self control, and eventually have to call a spade a bloody spade, so your suggestion would be the safest option.
If the mighty Watts gets involved I’ll be banned.
At least Charles is lax with the likes of me as well as being over lax with the attack-dogs behaviour.
I’ve often thought you could be a bit more acerbic, but it’s allowed you to be probably WUWT’s longest contributor, as a defender of science (and common sense) – hats off to you.
Liar.
Poor Banton, you have serious psychological problems.. all psychologists do.
Dunning-Kruger in your own mind.
You know you cannot support anything you say with anything except data that you KNOW is totally corrupted.
You are the equivalent of a little worm, incapable of providing rational scientific evidence.
You are a scientific non-entity and a little pathetic cry-baby.
Nick doesn’t defend science, he twists and tortures it.
“call a spade a bloody spade”
Do it while looking in the mirror, little child. !
Listening Bellman? Your posts are spot on. Your time management is piss poor. Engineers are supposed to be so lazy that they go to school for at least 4 years to learn how to do things the easy way. K State obviously failed the Gormans in this respect, but why are you spending your days so unproductively?
What are your engineering credentials, blob?
Thought so.
BS Petroleum Engineering, Missouri School of Mines. MS Drilling Engineering, University of Southern California. Registered, by examination and recommendations, Oklahoma, first try. Transferrable to every state, with extra training in earthquakes required in California
Now you…
How is it that you are completely ignorant of basic metrology then?
You think everything is random, Gaussian, and cancels.
Excellent, well-written article, Charles!
I recently read on another WUWT article that the energy captured by the CO2 molecules from earth’s radiated heat is transferred by collisions to other air molecules, mostly nitrogen and oxygen, long before they have the time to re-radiate the energy away. This warms the air, but I am unclear how warm air can warm the ocean. It warms the shore by convection, but how does it warm the water?
The warmed air is immediately dealt with by the energy balancing of the gravity-based thermal gradient, i.e. a tiny increase in convection.
Try lighting a match.. how much of the energy goes downwards. !
wow, a red thumb that thinks the heat from a match goes downwards.
And doesn’t understand the gravity based thermal gradient.
Must be an AGW-cultist.
Numbers in climate research often come from analytical chemistry labs. I spent a few years managing labs, which helped me to write earlier WUWT articles on uncertainty, like these three, starting with
Uncertainty Estimates for Routine Temperature Data Sets – Watts Up With That?
Errors and uncertainty consume a large portion of time and effort in labs. Some of my work was with geochemistry and chemical analysis of rocks. A piece of rock can, in theory, contain all 118 known elements of the periodic table, but many are in small amounts that can be safely ignored. Of the 20 or so main elements that are not ignored, there is a reasonable test that their concentrations should add up to 100%.
In climate research, we do not have a 100% test. If, for example, we wish to analyze those chemicals that affect our atmospheric temperature, we have nothing equivalent to the Periodic Table; indeed, we find new chemicals from time to time. For example, we now have methanethiol from a paper of 19th December 2024.
ACP – Quantifying the impacts of marine aerosols over the southeast Atlantic Ocean using a chemical transport model: implications for aerosol–cloud interactions
Whereas with rock analysis we can test by adding elements to 100%, in global atmospheric analysis we do not have this test. Without the test, we have greater uncertainty. This illustrates a difference between uncertainty and error.
I suspect that few people understand what uncertainty is in the scientific context that is a hallmark of WUWT. We saw a great deal of confusion when Dr Pat Frank estimated factors like cloud parameters affecting the propagation of errors in global climate models.
Frontiers | Propagation of Error and the Reliability of Global Air Temperature Projections
The hundreds of comments were well above the usual numbers for WUWT. One outcome is that commenters displayed a wide set of personal meanings of both “error” and “uncertainty”, often wrongly equating the two. I am not going to add my personal understanding here, since a better education comes from reading Pat’s articles and the reader comments to them. Geoff S