From the DAILY SCEPTIC
by David Craig
In an article I wrote for the Daily Sceptic on June 20th 2024 I predicted:
As we all shiver in the autumnal weather during what is meant to be summer and some of us have even turned our central heating back on or continued using our winter duvets, there is one certainty – in a few weeks’ time, the good folk at the Met Office and the BBC will tell us that we’ve just had the “warmest June on record”. After all, the Met Office and the BBC made the same claim about appalling April and miserable May.
In the article I proposed three possible tricks which the Met Office and the BBC could use to justify their claim of June being “the hottest ever”:
- Will they have the gall to say that June in the U.K. was the warmest on record even though everybody else knows it wasn’t?
- Or will the Met Office and the BBC choose somewhere which had a bit of decent weather – perhaps Greece or Spain or India – to justify their climate catastrophism?
- Or will they instead try to fob us off by claiming that, although June in the U.K. was a disaster weatherwise, global temperatures (if such a thing can even be measured) were at record levels?
Well, just as I predicted, we’ve been told that June was the hottest on record. From the Mail: ‘Last month was officially the hottest June on record’.
To justify this claim, the ‘experts’ used the third trick: “claiming that, although June in the U.K. was a disaster weatherwise, global temperatures (if such a thing can even be measured) were at record levels.”
The key words are “on record”. What the ‘scientists’ used as the start of records this time is the year 1980 – a few years after satellites began to be used to measure the Earth’s temperature. Before the late 1970s, there was no way of measuring the Earth’s temperature as temperatures were not recorded in many places.
But let’s remind ourselves of what happened to the Earth’s climate in the 1960s and 1970s. Temperatures were so low that even the climate-catastrophist Guardian newspaper predicted a new Ice Age:
Crop failures and mass starvation were expected:
The CIA was commissioned to write a report for the U.S. President about the consequences of the coming Ice Age:
And the experts worried that the global cooling would never stop:
Of course, the predicted Ice Age never happened and, quite naturally, the cooling 1960s and 1970s have been followed by a period of warming. The climate catastrophists have never got around to explaining to us how global temperatures could have cooled for around 20 years in the 1960s and 1970s while levels of atmospheric CO2 were increasing. I guess that’s a question we’re not supposed to ask, otherwise we might conclude that the climate king has no clothes.
Moreover, there are strong indications that the scorching hot 1920s and 1930s, the years of the U.S. “Dustbowl” featured in John Steinbeck’s novel The Grapes of Wrath, were much hotter than today’s supposedly “record” temperatures:
It was predicted that sea levels would rise 40 feet and half of England would disappear beneath the waves:
Because the glaciers and ice caps would melt:
Just to conclude, here’s one more of the many charts which suggest that the 1920s and 1930s, when atmospheric CO2 levels were the lowest they’ve been in the last 100 or so years, were much hotter than today’s supposedly “record” temperatures. That’s the chart of the acreage of forest fires in the U.S.:
Was June 2024 really the hottest since records began as our rulers claim? I’ll leave that up to you to decide.
David Craig is the author of There is No Climate Crisis, available as an e-book or paperback from Amazon.
“The key words are “on record”. What the ‘scientists’ used as the start of records this time is the year 1980 – a few years after satellites began to be used to measure the Earth’s temperature. Before the late 1970s, there was no way of measuring the Earth’s temperature as temperatures were not recorded in many places.”
That’s what happens when you get your “science” from the Daily Mail. Of course temperature records go back at least a century before 1980. It just happens that the Mail posted an ERA5 graph from 1980 onward.
Here is the GISS plot since 1880
Could you be any more deceitful? I haven’t read the Daily Fail, but it’s dollars to donuts that the statement is a quote from some government bureaucrat. From TFA:
“Officially”, see that word? What does that mean? It doesn’t mean that some Daily Fail reporter made an official statement on behalf of the Daily Fail. It means the reporter got some government bureaucrat to make an official statement on behalf of the government.
Get a new Thesaurus, Nick.
No. It means that a Daily Mail reporter, as ignorant as David Craig, googled and found a ERA5 plot starting 1980. Actually, the Mail didn’t say there were no earlier records. That is an invention of David Craig.
The “invention” is the JUNK graph from GISS (and its stablemates).
ERA5 graphs from 1980 also have a LOT of urban bias baked in.
“didn’t say there were no earlier records”
At least now you are admitting that there are plenty of records showing 1930s,40s were warmer.
You know that.. so why the continued deceit ??
You have never answered why you continue to support that which YOU KNOW is a scam, and wants to destroy western society.
What is in it for you ?
For those who don’t know just how sparse temperature sites were before 1920..
.. here are the historic temperature sites from that period.
NOBODY with even a single working brain cell would think you could create a meaningful “global” temperature from that.
The ocean data is even worse.
The GISS data shown by Nick is TOTALLY FAKE
Even worse before 1890
Thank you so much for this, bnice. It definitely casts doubt on the ~0.15°C uncertainty for this period. Where does Heller get his map, if you don’t mind me asking?
He has long records of all surface stations from GHCN.
He is also a whiz at computer stuff.. so compilation is easy for him.
MEANINGLESS JUNK and URBAN DATA. !!
And faked ocean non-data.
Show us where land temperatures were measured from 1880 to 1920.
Show us where ocean temperatures were measured from 1880 to even 2004.
Not only that, but most raw data from the NH and from many other parts of the world shows the 1930s,40s, warmer than the 2000-2010 period.
The GISS graph is as FAKE AS THEY COME !
I raise you Ed Hawkins stripes
It is noted that Nick is totally incapable of answering those two questions.
Nick, you are a liar and a base-level failure even as a con-man.
Nobody believes a single thing you post… not even you.
Everybody knows that YOU KNOW they are LIES.
It’s baffling how gullible people are to accept those incredibly narrow uncertainty intervals attributed to the late 19th and early 20th centuries.
Especially as the “adjustments™“ made to the raw data are often several times what is shown as the “error” margin…
… and always in a direction to fake a warming trend.
From the “Journal of Geophysical Research – Atmospheres”, have a look at the paper “Improvements in the GISTEMP Uncertainty Model”, 2019 (tpg note: James Hansen is listed as one of the authors)
——————————
“Station uncertainty encompasses the systematic and random uncertainties that occur in the record of a single station and include measurement uncertainties, transcription errors, and uncertainties introduced by station record adjustments and missed adjustments in postprocessing. The random uncertainties can be significant for a single station but comprise a very small amount of the global LSAT uncertainty to the extent that they are independent and randomly distributed. Their impact is reduced when looking at the average of thousands of stations.”
————————————————————
———————————————————-
“The major source of station uncertainty is due to systematic, artificial changes in the mean of station time series due to changes in observational methodologies. These station records need to be homogenized or corrected to better reflect the evolution of temperature. The homogenization process is a difficult, but necessary statistical problem that corrects for important issues albeit with significant uncertainty for both global and local temperature estimates.”
————————————————–
This is just the typical climate science meme of “all measurement uncertainty is random, Gaussian, and cancels”.
Note carefully that the article assumes all measurement uncertainty is random but never justifies that. Nor does the paper justify the unstated, but still necessary, assumption that all measurement uncertainty is Gaussian (or at least symmetric) across multiple stations.
This demonstrates a complete lack of knowledge of metrology concepts, and especially a lack of understanding of the physical attributes of real world components.
This leaves the primary source uncertainty being “sampling uncertainty”. The paper says:
————————————–
4.2.3. Sampling Uncertainty
Sampling uncertainty is an umbrella term for uncertainties introduced into global and regional annual means by incomplete spatial and temporal coverage. Whereas the station uncertainties are observed to mostly cancel out in modern-era global annual means, as many of the uncertainties are independent from station to station, the sampling uncertainties remain significant.”
—————————————-
Sampling uncertainties can be reduced by better sampling; measurement uncertainties cannot be reduced in the same manner. Measurement uncertainties from different measurands using different devices in different environments accumulate, especially when the measurements cannot be assumed to be random, independent, or symmetric.
The meme of “all measurement uncertainty is random, Gaussian, and cancels” does nothing but allow climate science to avoid having to say “we do not know” what is actually happening. No other reason. And it wouldn’t be allowed in any other physical science or engineering discipline that I have been involved in over the past 50 years.
Great find, Tim, its all the same stuff the trendologists push ad nauseam.
Yes. A great comment from Tim.
I’ll add that “Sampling uncertainties can be reduced by better sampling,..” hides something else: More frequent sampling allows for more “records” to be claimed.
The newer platinum resistance thermometers allow for just that. It is quite possible they are more accurate and reliable. But the smaller thermal inertia and more frequent sampling also means that more frequent “record” temperatures would be guaranteed, even in a stable climate.
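A toy simulation makes the point (this is my own sketch with invented numbers, not station data): sample a perfectly stable “climate” with short-lived noise once a day versus every five minutes, and the fast, low-inertia sensor reports systematically higher daily maxima.

```python
import random

random.seed(42)

# Invented numbers: the true temperature is a constant 20 C with short-lived
# fluctuations of sigma = 0.3 C. A sluggish instrument effectively reads once;
# a low-inertia PRT sampling every 5 minutes catches the noise peaks.
def recorded_daily_max(samples_per_day):
    return max(20.0 + random.gauss(0.0, 0.3) for _ in range(samples_per_day))

days = 10000
slow = [recorded_daily_max(1) for _ in range(days)]    # one effective reading/day
fast = [recorded_daily_max(288) for _ in range(days)]  # 5-minute sampling

print(f"mean daily max, slow sensor: {sum(slow)/days:.2f} C")  # ~20.0 C
print(f"mean daily max, fast sensor: {sum(fast)/days:.2f} C")  # ~21.0 C
# The fast sensor's daily maxima run about 1 C higher in this toy setup, so a
# network that switches instruments will log new "record highs" even though
# the underlying climate here is perfectly stable.
```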
Seek and you shall find. That pretty much sums up climate science.
The newer stations like those with PRTs also average temperature over a period of time. For example, ASOS stations report 5 minute averages. This effectively raises the instrument’s time constant and may be partly responsible for its low bias compared to older stations. It also presents a challenge to those, like Kip Hansen and his followers, who say you cannot average temperature yet seem to be OK with standard temperature observations.
Idiot. These measurements are by the same instrument in the same place over a very short period of time.
Multiple measurement of the same thing.
Sorry your grasp of mathematical ‘anything’ is so appallingly bad.
He simply can’t even grasp the difference between intensive and extensive properties.
Stop CHERRY PICKING. Why can’t you learn to read *everything* for context?
“These 5-minute averages are rounded to the nearest degree Fahrenheit, converted to the nearest 0.1 degree Celsius, and reported once each minute as the 5-minute average ambient and dew point temperatures”
When the average is rounded to the nearest degree, and is subsequently used in analysis, the measurement uncertainty becomes +/- 0.5F which is +/- 0.3C. This is endemic and can’t be reduced!
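A minimal sketch of that rounding step, with invented readings (my illustration of the quoted ASOS procedure, not real ASOS output):

```python
# Round a 5-minute average to the nearest whole degree F, then convert and
# report to the nearest 0.1 C, per the quoted ASOS procedure. Readings invented.
def asos_report(avg_f):
    rounded_f = round(avg_f)                  # nearest 1 F: +/- 0.5 F quantization
    return round((rounded_f - 32) / 1.8, 1)   # convert, report to nearest 0.1 C

for true_f in (71.6, 71.9, 72.4):
    reported_c = asos_report(true_f)
    true_c = (true_f - 32) / 1.8
    print(f"true {true_c:.3f} C -> reported {reported_c} C "
          f"(error {reported_c - true_c:+.3f} C)")
# All three true values collapse onto the same reported 22.2 C; the +/- 0.5 F
# (about +/- 0.28 C) quantization interval is attached to every single report.
```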
As KM has pointed out, this average is of THE SAME THING TAKEN BY THE SAME THING. It only lacks the requirement of repeatability.
You *can* measure intensive quantities OF THE SAME THING. That does *not* mean that it is valid to add that temperature to the temperature of something different. Intensive properties do not add.
You just continue to show that you have absolutely no understanding of physical science or metrology.
Kip Hansen did not mention any exceptions to his “But, but, but, but ….. no butts. One cannot average temperatures” rule. The rest of your response has nothing to do with my post.
You ran away from Tim’s main point. No surprise
The classic 1/sqrt(n) meme.
It’s an attempt to substitute sampling error for measurement error because it is a smaller value. Sampling error simply cannot tell you the accuracy of anything, it can only tell you how many digits of resolution your calculator has.
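For what it’s worth, the distinction is easy to show numerically (toy numbers of my own): the standard error of the mean shrinks as 1/sqrt(n), but a fixed instrument bias is untouched by it.

```python
import random, statistics

random.seed(1)
TRUE_VALUE = 20.0
BIAS = 0.4  # fixed systematic instrument error, invisible to the statistics

for n in (10, 100, 10000):
    readings = [TRUE_VALUE + BIAS + random.gauss(0.0, 0.5) for _ in range(n)]
    mean = statistics.fmean(readings)
    sem = statistics.stdev(readings) / n**0.5   # the 1/sqrt(n) statistic
    print(f"n={n:>6}: mean={mean:.3f}, SEM={sem:.4f}, "
          f"actual error={mean - TRUE_VALUE:+.3f}")
# SEM collapses toward zero while the actual error stays near +0.4: 1/sqrt(n)
# describes the scatter of the sampling, not the accuracy of the result.
```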
Tim,
Excellent work, thank you.
I have studied in detail the temperature records of the 45 official weather stations that best fit my loose definition of “pristine” for UHI studies. I have found NO properties of these sites that allow them to be formally defined as different to (say) urban stations. They have been defined as different by past authors on no more evidence than that they should be different.
Within those 45 stations, there is not a great deal of commonality. Temperature trends over the same time intervals differ; the timing of wriggles can differ, but comes closer as stations are chosen to be geographically close to each other; adjusted data, like the four different BOM ACORN-SAT series, differ; other adjustment such as from metadata records is not useful, because in general the more remote the location the lower the quality of data and metadata.
This leaves me with the subjective impression that we do not understand or even know about further variables that add to the uncertainty of these “pristine” data sets. It is almost as if the classic Stevenson screen/LiG thermometer combination was extremely sensitive to some factor(s) as yet unrecognised that can change at times for reasons not understood.
There are therefore dangers associated with assuming that there is a UHI that can always be calculated from the urban minus pristine calculation, because we cannot define either term and we cannot assume some form of constancy in their temperatures. Sometimes that subtraction can be of opposite expected sign, such is the uncertainty.
But then, it was IPCC that decided to thrash around in the soup of small number uncertainty, looking for numbers to frighten children.
Geoff S
You’ve pretty much summed up the entire panoply of uncertainty associated with climate science.
Temperature is a multi-component functional relationship. Some of the factors are humidity, pressure, altitude, cloudiness, wind, geography, and terrain. There are certainly others that are part of the microclimate, even including the type of grass or flora surrounding the measurement station (think evapotranspiration).
The measurement uncertainty in each and every one of these factors ADDS, either by direct addition or root-sum-square addition.
The GUM specifically states that in Equation 10. The GUM also says the measurand *must* be fully specified, if it isn’t then that *adds* to the measurement uncertainty. In other words if you can’t define all the factors associated with the measurand then it isn’t fully specified and the uncertainty budget must take that into consideration.
“Buh, buh, buh the error CAN’T be that big!” — trendology.
Yup! And when the bridge winds up 2″ short of reaching the footing on the other end it’s because you didn’t calculate the average length of the construction beams out to enough digits.
GISTEMP uncertainty is on the order of 0.15 C prior to 1900. That is about 1 part in 40 relative to the range of temperature change since the last glacial maximum. That is incredibly large. As a point of comparison the ARGO float profiles have an uncertainty on the order of 0.002 C which is about 1 part in 15,000 relative to the range of temperature changes they record.
Anyway, do you mind providing a link to an uncertainty analysis showing how unreasonable the GISTEMP analysis is?
https://www.mdpi.com/1424-8220/23/13/5976
Ah yes. Pat Frank. Perhaps you can answer these questions related to his handling of uncertainty.
1) Why does he dismiss the mistakes found by others?.
2) Why does he use the wrong formula for uncertainty in many of the equations (#5 for example) in [Frank 2023]?
3) Why does he use Bevington 4.22, which only computes the variance of the data, instead of Bevington 4.23, which computes the uncertainty of an average, in [Frank 2010]?
4) Why does he not confirm his answers using the NIST uncertainty machine?
bozo-x, the great expert on “uncertainty”, who still doesn’t understand that error is not uncertainty, thinks Pat Frank has it all wrong.
“He doesn’t ignore them, he refutes them. He has refuted each and every one.”
Not successfully. You seem to be confusing his relentless digging in deeper with “refutation”. In fact, he’s been rebuked for his boners at every turn.
The final call is in the fact that the entire scientific world has carefully stepped over the referenced paper, because they don’t want to get it on their collective shoes. If it had merit, left, right, or center, they would be fighting to apply its findings to their work. But not even one citation. And your standard rejoinder of “Well, all the scientists are in a Dr. Evil, secret cabal against him” started out incredible and has been leaking air ever since.
But I do agree that since then, he’s built an entire alt.world politically and culturally in his head to avoid admitting it. At least SLAC emeritted him away from his lab safety job, so they could stop the embarrassment.
A blob word salad does not constitute “refutation”.
Just like all the rest of the trendologists, you don’t even comprehend that error is not uncertainty.
And your usual slime job on Pat Frank still doesn’t hold water.
Big Oily Blob: Kindly explain in detail what is wrong with Frank’s work.
Already been done. Repeatedly. Would you like the links?
I agree that many of the fora refutations gave Dr. Frank the (repetitive, repeatedly debunked) last word, and he uses that to claim that he “prevailed”. But he didn’t, as evidenced by the fact that his Bizarro world statistical evaluations have no superterranean acceptance.
Still want the links? I suspect not, since most here have been exposed to them, but either ignored them or went full Dan Kahan System 2 hysterically blind to them.
I’ve read critiques of Pat Frank’s work. One common error they commit is conflating large uncertainty intervals with physical impossibility. For instance, interpreting a +/- 1°C uncertainty range as indicating global average temperatures oscillating in and out of Little Ice Age conditions.
In reality, such intervals simply indicate the measurements are hopelessly corrupted and provide little to no utility.
Bingo. Just like the trendologists, they don’t understand that uncertainty is not error.
100%
bigoilbob simply doesn’t understand metrology at all. It’s a good thing he isn’t a machinist, he would be unemployed.
What a load of ignorant gibberish.
Pat has several magnitudes more comprehension of the issues than you or any of your fake mates will ever have.
All they do is expose their own ignorance.. and your ignorance along with them.
“Not successfully.”
Malarky! You may as well have stated that “Trump has never successfully refuted that he is a Russian agent”. Accusation is not proof!
Hubbard and Lin PROVED that homogenization spreads measurement uncertainty because of microclimate differences. Yet their paper gets stepped over continuously by climate science. That is *NOT* refutation of their study!
What a load of fanciful meaningless GIBBERISH… pertaining to absolutely nothing.
The slimy blob in action.
1) He conflates W.m-2 with W.m-2.year-1 without justification. [Lauer & Hamilton 2013]
2) law of propagation of uncertainty. [NIST TN 1297]
3) He used Bevington 4.22. [link]
Clown.
What is the right formula for uncertainty?
The law of propagation of uncertainty. [NIST TN 1297]
The cited text does not discuss systematic errors and pertains to repeatable measurements, which are unattainable in the context of atmospheric air temperature.
With systemic errors, you have two possible outcomes. If they are persistent enough through the evaluative period, they reduce the standard error of the trend, compared to what it would be if they were considered to be random. But if there are enough different ones, they tend towards normality, for evaluative purposes.
IOW, assuming that those errors are randomly distributed is your worst case. To imagine that a magic set of such systemic errors significantly influences either the trend we are trying to find, or its statistical durability, is WUWTBS.
The usual blob gar-bage — “everything magically transmogrifies into random then cancels.”
Have you ever considered reading first, responding after?
Have you ever considering taking the time to actually learn something about the subject instead of just high-fiving the B&B clowns?
“But if there are enough different ones, they tend towards normality, for evaluative purposes.”
Malarky!
Systematic uncertainty in measurement devices tend to all be in the same direction. Springs lose tension, they don’t gain tension. Ratchets wear and their accuracy decreases, it never increases. Electronic components expand under heating conditions (i.e. current flow) and the expansion is accumulative because of hysteresis in the material. If you want more examples I can provide them!
Measurement uncertainty NEVER trends toward normality. The absolute value of the uncertainty interval may be different for different instruments but it ALWAYS grows.
“standard error of the trend”
The standard error of a trend is based on assuming the stated values of the measurements are 100% accurate. THAT is the assumption that is totally wrong.
All you have done here is repeat the common climate science meme that all measurement uncertainty is random, Gaussian, and cancels. An actual physical impossibility.
Tell us – do you believe that if you have two iron rods that you heat to the same temperature that one will expand and one will contract?
If you *DON’T* believe that then your claim about systematic uncertainty always cancelling is either based on lying or ignorance. Only you can know which.
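The one-sided-drift point is easy to put in numbers (a sketch with invented drift values, not real instrument data): if every instrument’s systematic error has the same sign, averaging more instruments converges on the mean drift, not on zero.

```python
import random, statistics

random.seed(7)

# Invented one-sided drifts: each instrument reads high by somewhere in
# [0, 0.6] C (e.g. a mechanism that can only wear in one direction).
def network_average_bias(n_instruments):
    return statistics.fmean(random.uniform(0.0, 0.6) for _ in range(n_instruments))

for n in (10, 100, 10000):
    print(f"{n:>6} instruments: average systematic error = "
          f"{network_average_bias(n):+.3f} C")
# The network average settles near +0.3 C however many instruments are added;
# only zero-mean, two-sided errors shrink under averaging.
```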
No, I don’t believe in that “always” or “totally”. Please read what I really wrote.
As I said, systemic errors can move trends up or down. And to the extent that they are truly “systemic” thru the evaluative period, they can reduce trend uncertainty, compared to what it would be if the errors were evaluated as random. Of course, as the number of different “systemic” errors increases, their effect would be to have the evaluation trend towards what it would be if those errors were considered to be random. We know this. We’ve known it for over a century…
“As I said, systemic errors can move trends up or down.”
No! Systematic errors mean you DO NOT KNOW if the trends are moving up or down because you don’t know a “true value”.
You have a *real* problem with understanding the concept of UNCERTAINTY.
“their effect would be to have the evaluation trend towards what it would be if those errors were considered to be random.”
NO! They won’t cancel because they are *NOT* random!
Systematic bias in measurements *always* tend to be in the same direction because of design. Springs *never* gain tension, they always lose tension. Thus the bias *always* moves in one direction. Electronic components expand with heating from current flow. Thus the bias ALWAYS MOVES in one direction. Measurement heads on micrometers *always* wear from losing metal, they *never* grow additional metal. Thus the bias *always* moves in one direction.
When systematic bias moves in one direction possible cancellation gets SMALLER. The total bias *always* gets larger, not less.
Same c/p comment. Same willful avoidance of reading what I actually wrote.
Interesting the difference in post quality and thoughtfulness, between those that post early, late, and do shit the rest of the time, and their polar opposites.
Stop whining.
I get this all the time. Here is a summary of the extensive vs intensive discussion I’m painfully trying to have with TG.
Me: W.m-2 is intensive.
TG: Wrong. Lumens is extensive.
Me. I know, but lumens is not the same thing as W.m-2.
TG: Wrong. Energy is extensive.
Me. I know, but energy is not the same thing as W.m-2.
TG: Wrong. Watts is extensive.
Me: I know, but watts is not the same thing as W.m-2.
Half of the chatter is him completely ignoring what I said and making up a strawman. And even when he does acknowledge what I say, he defends his position by making yet more factually incorrect claims.
BTW…the flash flooding today was insane.
Your attempts to gaslight that you actually know anything about radiometry are pathetic.
Your links finally made me see how silly G’s “intensive”, “extensive” whines are. He’s totally, deflectively, circular, and I should map it. Then, I could predict when he will rerun his “Well, I’ve got a pickup truck full of 8′ boards” irrelevance, to convince us that he’s a man of the people.
As for relevance:
“BTW…the flash flooding today was insane.”
Any water in your basement? I’m guessing that up there, it’s built to keep it out.
My 113 year old brick, south of the Hill, used to get lots of trickles. They would drain fine, but bugged me. We put in Richie Rich guttering a few years ago, and I got only one trickle today, at the only section we couldn’t adequately divert.
The rain also messes with my wife’s all girl bike rides on Thursday evenings, from South Cyclery to wherever and back. And it interferes with masters swim training, because every Y pool – indoors or out – closes whenever there’s lightning within a 6 mile radius. Bummer.
bdgwx got caught. Flux is extensive. If you don’t believe that then you are as bad at physics as he is – which is likely true.
Neither of you have *any* understanding of basic metrology.
You got caught and you know it! You had to change from talking about flux to exitance!
Be brave and admit it!
Flux is Flux. Exitance is Exitance. They are not the same.
If you screwed up and started talking about flux when you meant exitance then ADMIT IT.
STOP WHINING!
WTF is a “systemic error”?
Just like the B&B clowns, you don’t understand that error is not uncertainty.
And who are “we”?
Note: blob ran away from these questions.
First…NIST does discuss systematic uncertainty in TN 1297.
Second…the law of propagation of uncertainty has a covariance term that handles correlated errors.
Third…nowhere does it say that the law of propagation of uncertainty is for repeatable measurements of the same thing. In fact, it is a general equation that is used not only to combine the uncertainty of measurements of different things, but it can do so even when those measurements are of a completely different type with different units.
Fourth…it is irrelevant to my point. The point being that Pat Frank used the wrong formula.
Occam sez: “more likely it is bg-whatever doesn’t know WTF he yaps about.”
I remember people in undergrad days who thought that engineering was just a matter of “plugging into” the right formula.
Those people didn’t last very long.
The bottom line: Pat Frank understands uncertainty and uncertainty analysis — YOU DON’T.
“Third…nowhere does it say that the law of propagation of uncertainty is for repeatable measurements of the same thing.”
Of course it does! You continue to refuse to read the GUM for meaning and comprehension!
from the GUM:
——————————————–
2.2.3 The formal definition of the term “uncertainty of measurement” developed for use in this Guide and in the VIM [6] (VIM:1993, definition 3.9) is as follows: uncertainty (of measurement) parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand” (bolding mine, tpg)
——————————————-
Note carefully the use of the term “the measurand”. That is *NOT* the same thing as “multiple measurands”.
——————————————–
NOTE 3 It is understood that the result of the measurement is the best estimate of the value of the measurand, and that all components of uncertainty, including those arising from systematic effects, such as components associated with corrections and reference standards, contribute to the dispersion.
——————————————–
———————————————–
3.1 Measurement
3.1.1 The objective of a measurement (B.2.5) is to determine the value (B.2.2) of the measurand (B.2.9), that is, the value of the particular quantity (B.2.1, Note 1) to be measured.
——————————————————-
—————————————–
3.1.4 In many cases, the result of a measurement is determined on the basis of series of observations obtained under repeatability conditions (B.2.15, Note 1).”
———————————————————–
———————————————————-
B.2.15
repeatability (of results of measurements)
closeness of the agreement between the results of successive measurements of the same measurand carried out under the same conditions of measurement” (bolding mine, tpg)
———————————————————
This is the text that goes with GUM equation 10, the propagation of uncertainty.
——————————————————
5.1.1 The standard uncertainty of y, where y is the estimate of the measurand Y and thus the result of the measurement, is obtained by appropriately combining the standard uncertainties of the input estimates x1, x2, …, xN (see 4.1). This combined standard uncertainty of the estimate y is denoted by uc(y).”
———————————————–
x1, x2, etc are measurements taken of the SAME MEASURAND. E.g. x1 = height of a barrel. x2 is the diameter of the same barrel. The combined standard uncertainty is the propagated uncertainty of those two measurements OF THE SAME MEASURAND, i.e. the SAME BARREL.
Why do you persist in coming on here and lecturing people about things you know nothing about and refuse to learn about?
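For concreteness, here is the barrel example worked numerically (a sketch with invented numbers, using V = πr²h and the GUM Eq. 10 propagation described in the quotes above, inputs assumed uncorrelated):

```python
import math

# Invented inputs: radius and height of the barrel with their standard
# uncertainties, assumed uncorrelated.
r, u_r = 0.30, 0.002   # m
h, u_h = 0.90, 0.003   # m

V = math.pi * r**2 * h
dV_dr = 2 * math.pi * r * h    # sensitivity coefficient dV/dr
dV_dh = math.pi * r**2         # sensitivity coefficient dV/dh

# GUM Eq. 10: combined standard uncertainty from the partial derivatives
u_V = math.sqrt((dV_dr * u_r)**2 + (dV_dh * u_h)**2)
print(f"V = {V:.4f} m^3, u(V) = {u_V:.5f} m^3")
# Equivalently, since V is a pure product:
# (u_V/V)^2 = (2*u_r/r)^2 + (u_h/h)^2
```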
He’s a formula plugger, unable to comprehend what he’s trying to use.
Tim Gorman sums it up nicely.
We can only measure atmospheric air temperature once per day (one MAX and one MIN), which means we can never verify the accuracy of our samples.
No he doesn’t. He posts content from the GUM that is irrelevant to the topic of uncertainty propagation and does not in any way invalidate the fact that the procedure for propagating uncertainty does NOT require the inputs to be of the same thing. Even the example given in section 5 is of inputs for different measurands, even of completely different types with different units. Going further, the NIST uncertainty machine manual has several examples, none of which propagate uncertainty from the same measurand. Don’t take my word for it. Read JCGM 100:2008 and play around with the NIST uncertainty machine yourself.
Did you not read:
He only sees what he wants to see.
bdgwx hates Tim Gorman.
Heh. This made me laugh.
Oh yes. I read the GUM. That’s how I know the second paragraph does not appear in it. That is something TG made up all on his own. It’s also how I know that section 4.1 referenced in the first paragraph (which does appear in the GUM) has an example where x1, x2, …, xN are of different measurands. Literally it is right there at the top of page 9.
I encourage you to read the GUM as well and verify this for yourself. Don’t take my word for it. And certainly don’t take TG’s word for it.
You’re insane, all you know is formula plugging.
4.1 is “Modeling the measurement”, where separate measurements are used to calculate a final measurand (singular) with a defined function Y = f(X1,…Xn).
The average formula does NOT qualify, as you’ve been told countless times, 4.1.3 should make this clear (impossible for Olmec heads, I realize).
He can’t even read a simple sentence for meaning!
the power P (the measurand)
“Oh yes. I read the GUM. That’s how know the second paragraph does not appear in it. That is something TG made up all on his own.”
I did *NOT* make it up. It’s what the GUM says about x1, x2, etc! They are measurements of different factors in a functional relationship. E.g. the height and radius of a barrel.
Your reading comprehension is atrocious.
from the GUM
—————————–
4.1.1 In most cases, a measurand Y is not measured directly, but is determined from N other quantities X1, X2, …, XN through a functional relationship f:
Y = f (X1, X2, …, XN)
—————————————————-
You have *NOT* read the GUM. You and bellman cherry pick from it without understanding either the context or the meaning of what it says!
“t section 4.1 referenced in the first paragraph (which does appear in the GUM) has an example where x1, x2, …, xN are of different measurands.”
You can’t even read the top of page 9 correctly!
——————————————-
EXAMPLE If a potential difference V is applied to the terminals of a temperature-dependent resistor that has a resistance R0 at the defined temperature t0 and a linear temperature coefficient of resistance α, the power P (the measurand) dissipated by the resistor at the temperature t depends on V, R0, α, and t according to
———————————— (bolding mine, tpg)
P is the measurand. The temperature t is not the measurand. V is not the measurand. R0 is not the measurand. α is not the measurand.
P, the power, is the measurand and its value is based on the input factors V, R0, α, and t via a functional relationship. x1 is V, x2 is R0, x3 is α, and x4 is t.
And the measurement uncertainty of the measurand, P, is the propagated measurement uncertainty of the factors in the functional relationship!
Does the word/term “estimate” not mean anything?
Same old crap.
Occam is laughing at you.
“the procedure for propagating uncertainty does NOT require the inputs to be of the same thing.”
I gave you the quotes from the GUM that put the lie to this. And yet you persist in trying to gaslight everyone?
“where y is the estimate of the measurand Y”
THE MEASURAND!
Singular.
Period.
Full stop.
You still don’t get what the word “combined” means in a combined standard uncertainty. You have a function that determines a singular measurand Y, based on multiple measurands, X_1, X_2 etc.
The inputs, the Xs, can be different things. You accept this when you measure the height and radius of a water tank to get a singular measurand – the volume. I’d have thought this was obvious, but looking back you say
How on earth do you think the diameter of a barrel is the same measurand as the height of a barrel? Are you really trying to pretend you think “the barrel” is a measurand?
💀
The inputs X1, X2, …, Xn determine Y. The height and diameter of the barrel are separate coefficients. He understands this clearly. Nobody is being fooled by your tricks.
“The height and diameter of the barrel are separate coefficients.”
Hence different measurands.
No, these are various measurements of the same measurand/quantity.
Absurd. The GUM specifically says that they are treating all measurands as scalars. That’s the context of equation (10) where each input is considered a separate measurand.
You can extend the concept to treat measurands as vectors – but then your entire argument is kaput. You can treat the global temperature as a single measurand, consisting of a vector of all individual measurements . The claim that you are averaging different measurands becomes irrelevant as you are just averaging one single measurand.
“The GUM specifically says that they are treating all measurands as scalars. That’s the context of equation (10) where each input is considered a separate measurand.”
NO! The measurements of a measurand are not in themselves measurands. They are measurements of a measurand!
from the GUM:
—————————————–
EXAMPLE If a potential difference V is applied to the terminals of a temperature-dependent resistor that has a resistance R0 at the defined temperature t0 and a linear temperature coefficient of resistance α, the power P (the measurand) dissipated by the resistor at the temperature t depends on V, R0, α, and t according to” (bolding mine, tpg)
————————————–
V, R0, α, and t are *NOT* measurands; they are measurements of the measurand and are factors used in the functional relationship defining the measurand.
You are still cherry picking crap to throw against the wall instead of actually studying the GUM for meaning and context.
“NO! The measurements of a measurand are not in themselves measurands.”
The GUM says they can be.
But your original argument wasn’t that they are not measurands, but that they all had to be measurements of the same measurand.
“You are still cherry picking crap to throw against the wall instead of actually studying the GUM for meaning and context.”
How many more times are you going to make that excuse? All you are saying is “ignore anything the GUM actually says, just let me interpret it.” You are setting yourself up as the only authority figure who can correctly interpret the holy scripture.
Read the GUM again!
————————————–
4.1.4 An estimate of the measurand Y, denoted by y, is obtained from Equation (1) using input estimates x1, x2, …, xN for the values of the N quantities X1, X2, …, XN. Thus the output estimate y, which is the result of the measurement, is given by
y = f (x1, x2, …, xN)
—————————————-
Tim: Don’t cherry pick the GUM.
Also Tim: I’m ignoring the part you quoted which explicitly stated that the inputs could be considered measurands, but want you to read a part that says nothing about whether they are measurands or not.
The global average temperature is not a single measurand. Rather, it is an average derived from thousands of individual measurands.
From the GUM 2008:
Measurand:
particular quantity subject to measurement
EXAMPLE Vapour pressure of a given sample of water at 20 °C.
NOTE The specification of a measurand may require statements about quantities such as time, temperature and pressure.
Measurement:
set of operations having the object of determining a value of a quantity
NOTE The operations may be performed automatically.
Not a single word you typed says that an average of multiple measurands cannot be a measurand.
Would you say the sum of multiple measurands cannot be a measurand?
And if you insist that the average cannot be a measurand we are back to the same paradox that the Gormans et al. are oblivious to – if it isn’t a measurand it cannot have a measurement uncertainty.
“Not a single word you typed says that an average of multiple measurands cannot be a measurand.”
Sure it does. You are still trying to cherry pick instead of actually studying and understanding.
An average is not a measurement. An average is not a measurand. The average of multiple measurands is not itself a measurand. It is a statistical descriptor of a distribution of measurements or measurands.
What do you think the root “measure” implies? How do you measure something that doesn’t exist?
Exactly.
“set of operations having the object of determining a value of a quantity”
yep. Not the value of a measurand but the value of a measurement of the measurand.
Yet most (all?) examples in the GUM or the NIST uncertainty machine user manual are of the x1, x2, …, xN being different measurands.
Read 4.1.3, clownpants.
They won’t read it. All they can do is cherry pick things they think confirm their misconceptions!
Yep!
“Yet most (all?) examples in the GUM or the NIST uncertainty machine user manual is of the x1, x2, …, xN being of different measurands.”
No, they are *NOT*.
A measurement is *not* the measurand! The measurand is something like a board. The length, width, and height of that board are measurements of the board. They are not, in and of themselves, measurands!
You and bellman can’t seem to even get this simple concept correct!
They still can’t make it past understanding that error is not uncertainty.
Did you even read your own source?
He never reads anything for meaning or context. It’s all cherry picking.
Can you measure an average? Where do you go to measure it? If you can’t measure it then is it a measurand?
Yes. That’s how I know that nowhere in the GUM does it say x1, x2, …, xN have to be the same measurand. The quoted block you selected makes no stipulation that the Xi have to be of the same measurand.
“No, these are various measurements of the same measurand/quantity.”
Hurrah! Someone that gets it! You’ve nailed it!
Neither bellman or bdgwx have ever actually studied anything involving metrology. They just cherry pick things they think confirm their misconceptions.
They must not like metrology.
I’ve also noticed Bellman really likes ordinary least squares.
Because ordinary least squares is the method Monckton insists on using for all his pauses. I don’t suggest it’s the only method.
It’s the only tool in his toolbox.
He is an expert at gaslighting.
The VOLUME of a barrel, THE MEASURAND, *is* the result of a functional relationship based on the factors of the height and radius of a barrel.
The measurement uncertainty of the MEASURAND, i.e. the barrel, is the combined measurement uncertainty of the factors in the functional relationship.
You *still* haven’t figured out measurement uncertainty!
“The VOLUME of a barrel, THE MEASURAND, *is* the result of a functional relationship based on the factors of the height and radius of a barrel.”
No need to shout. You are not disagreeing with anything I said. The volume of the barrel is the measurand you are determining using a function. The inputs to that function are height and diameter, two different measurands.
Ambiguity strikes again 🙂
If the volume of the barrel is what is of interest, then it is the measurand.
That then requires the height and radius, so they have to be measured. Each of those in turn becomes the measurement of interest, hence the measurand (at the time).
The radius may well be determined by the diameter or circumference, so that then becomes the measurand (at the time).
It’s a hierarchy of measurands. Measurands all the way down, or at least until you reach a leaf node.
“Subsidiary measurands” or “component measurands” may be more accurate terminology.
Precisely, the uncertainties of the subordinate measurements become inputs for the final measurand. This is the entire point of GUM 4.1 “Modeling the measurement”, as 4.1.3 makes clear.
This seems to be the crux of the argument about what the meaning of is is.
Are they subsidiary measurements, or are they subsidiary measurands? And is it a distinction without a difference?
I think it is a matter of partitioning—the main point of the way the GUM is constructed is that a measurement result and its associated uncertainty can be used directly in subsequent measurements.
A good example is the calibration chain, which starts at national lab-level calibrations. If I am using a photodiode that has been calibrated by NIST for a measurement I am making, I use the calibration value to get my final result, and include the uncertainty reported by NIST when I calculate the uncertainty of my measurement.
Notice there is no way I can reduce my uncertainty below that of the NIST calibration, unless I get a better calibration from NIST (or elsewhere).
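A sketch in numbers (my own toy values, not a real NIST certificate): combining your own random uncertainty with the calibration uncertainty in quadrature shows the calibration term acting as a floor.

```python
import math

U_NIST = 0.20  # % -- toy value standing in for a reported calibration uncertainty

for u_mine in (1.0, 0.5, 0.1, 0.01):
    u_total = math.sqrt(U_NIST**2 + u_mine**2)  # quadrature combination
    print(f"own u = {u_mine:5.2f}% -> combined u = {u_total:.3f}%")
# As your own contribution shrinks, the combined uncertainty approaches
# 0.200% and can never drop below it without a better calibration.
```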
If length can be a measurand then I should be able to give you a tape measure and tell you to go measure a length. Where do you find that length? What does it look like? Does someone have to hold the tape measure at one end of it while you walk to the other end?
If radius can be a measurand then I should be able to do the same. Where do you find a radius to measure?
I’ve never seen a “length” or a “radius” anywhere in the wilds of my backyard. Have you?
The same applies for an “average”. I’ve looked all over the 160 acres behind my house and I’ve never been able to find an “average” to measure. Have you?
Touché!
Yes, I should have written “length of the cylinder” and “radius of the cylinder”.
None of us are immune to sloppy language usage.
Maybe; maybe not. JCGM 200:2012 defines a measurand as a quantity to be measured. It defines quantity as a property of a phenomenon, body, or substance. It lists “length” as one such property. So as long as the property is to be measured it is a measurand, at least according to JCGM. I think you can probably get into the philosophical weeds here, though. To decide to measure a length you have to identify what length it is to measure. So in that regard a statement like “length of the cylinder” does seem more appropriate.
The JCGM says nowhere that a property of a measurand is a measurand.
Here is what the JCGM says:
——————————————–
D.1 The measurand
D.1.1 The first step in making a measurement is to specify the measurand — the quantity to be measured;
——————————————-(bolding mine)
Even the JCGM doesn’t consider a property of a measurand to be a measurand in and of itself. It is a PROPERTY of a measurand. And the measurand is the quantity to be measured. E.g. voltage, pressure, etc. The *value* of that measurement is not a measurand.
“If the volume of the barrel is what is of interest, then it is the masurand.”
I’m not disagreeing with that. I’m saying that if the volume is determined from the height and radius of the barrel, then height and radius are also measurands, and they are not the same measurand as the volume.
“That then requires the height and radius, so they have to be measured. Each of those in turn becomes the measurement of interest, hence the measurand (at the time).”
Yes, that’s what I’m saying. Tim is saying they are measurements of the same measurand, namely the volume of the barrel.
“It’s a hierarchy of measurands, Measurands all the way down, or at least until you reach a leaf node.”
Which is what 4.1.2 says
““Subsidiary measurands” or “component measurands” may be more accurate terminology.”
I don’t care what you call them – just that they are not the same measurand.
No, but they are required to determine the output quantity (volume).
Precision of language matters. “Different” implies “not related”, as in “that’s different”.
This still feels like the usual over interpretation of meaning in order to avoid admitting that the uncertainty from measurement will be less in an average than in the sum.
You need height and radius to determine the volume – but that doesn’t mean height and radius are describing the same thing. You need various measurements to determine an average, but that doesn’t mean the measurements have to be of the same thing.
I still don’t know if Tim thinks that it’s possible to use the general equation to determine the uncertainty of a sum of different things, but it’s difficult to see why it would be OK to add different things but not average them.
The average seems to me to be just a scaling, so that seems correct. The question should be more a matter of how the uncertainties are added.
If your measurand is the volume of a particular cylinder, you had better be measuring the height and radius (or diameter or circumference) of the same cylinder 🙂
Height and radius are leaf nodes in that they can’t be broken down into smaller components. While they can stand alone, they are required (subordinate measurands?) to determine the volume of a cylinder. Look on the bright side – Tim is sticking with regular shapes 🙂
That’s where the ambiguity comes into play, and everybody winds up arguing past each other.
My interpretation of what Tim meant by “measurements” was that they go into the calculation of the volume. That doesn’t seem strictly correct, just like “different measurands” doesn’t seem strictly correct.
Sometimes I think the definitions in the GUM need to be restated in Backus-Naur Form so that everybody can parse them the same way.
“My interpretation of what Tim meant by “measurements” was that they go into the calculation of the volume.”
But again, how would that be different from measurements going into the calculation of the average?
“Sometimes I think the definitions in the GUM need to be restated in Backus-Naur Form so that everybody can parse them the same way.”
I suspect ambiguity is a feature of the GUM. They put a lot of effort into avoiding upsetting multiple opposing camps, and so people can read into it anything they want.
Let me count the ways 🙂
The volume is a multiplicative relationship, so as I understand it the relative uncertainties need to be used.
An average is a scaling of a sum, so uses the absolute uncertainties.
Smart-alecness aside, I think it’s a matter of which additive approach is appropriate to calculate the sum of the uncertainties, and that’s above my pay grade.
You are basically correct: it is easier to use relative uncertainties when possible. They also have the advantage of making it easier to compare the magnitude the various uncertainty components have on the uncertainty of the final measurand, regardless of their units. When I was writing UA software, I had routines that converted back-and-forth from absolute and relative.***
Unless you can stick solely to Kelvin, temperature has to be handled with absolute uncertainties, though; u(T) = 0.6% of 10°C is meaningless.
Another problem is if the units of the measurand itself are relative, such as percent efficiency — stating that u(eta) = 1.4% is ambiguous without more information.
Formal uncertainty analysis can be quite difficult and involved, but has to be done if you are to give an honest appraisal to your customers who will have to use your numbers.
***With object-oriented programming, these could be done as methods of a class that encapsulates both the value and its uncertainty.
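Something like the following is what I mean (a rough sketch; the names are my own invention, not the actual UA software):

```python
from dataclasses import dataclass

@dataclass
class UValue:
    """A value bundled with its absolute standard uncertainty."""
    value: float
    u_abs: float

    @property
    def u_rel(self):
        # Relative form -- only meaningful on ratio-scale quantities,
        # not on interval scales like Celsius, as noted above.
        return self.u_abs / abs(self.value)

    @classmethod
    def from_relative(cls, value, u_rel):
        return cls(value, u_rel * abs(value))

p = UValue(101325.0, 15.0)                        # pressure, Pa
print(f"{p.u_rel:.2e}")                           # ~1.48e-04
print(UValue.from_relative(5.000, 0.002).u_abs)   # 0.01
```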
“-1”
HEH, the trendologists can’t handle the truth!
“The volume is a multiplicative relationship, so as I understand it the relative uncertainties need to be used.”
Not really true in this case. We are talking about the general law of uncertainty propagation. It’s the same equation, using absolute uncertainties, regardless of the function. What changes is the coefficients from the partial derivatives. It’s just that when the function is multiplication or division, the result can be simplified into an equation involving relative uncertainties.
That’s where the specific rule about adding relative uncertainties for multiplication comes from.
An average is a multiplication by a fraction. So you have to use relative uncertainties.
if q = Bx and x = x1 + x2 + … + xn and B = 1/n (a constant), then ẟq/q = ẟx/x + ẟB/B (where ẟ is the uncertainty symbol)
Since ẟB = 0 (the uncertainty of a constant = 0) then ẟB/B = 0 and
ẟq/q = ẟx/x
The relative uncertainty of q is the relative uncertainty of x. And the uncertainty of x is the additive value of the uncertainties of the members of x, e.g. ẟx = ẟx1 + ẟx2 + … + ẟxn (or the root-sum-square, still an addition).
The uncertainty in q is related directly to the uncertainty in x, *NOT* to the uncertainty in x divided by n or sqrt(n).
Eq 10 of the GUM is for factors in a functional relationship. An average is *NOT* a functional relationship, it is a STATISTICAL relationship.
You still don’t see the problem. The relative uncertainty of the average is the same as the relative uncertainty of the sum. Correct.
But you keep treating it as if that means the absolute uncertainty is also the same.
Let’s go through your original claim. That uncertainty of the average increases with sample size. You give a specific example, the average of 100 thermometers each with an uncertainty of ±0.5°C. You claim the uncertainty of the mean is the uncertainty of the sum, which is ±5°C. All of those are absolute not relative uncertainties.
You get to the uncertainty of the sum by adding the absolute uncertainties in quadrature, which simplifies to √100 * 0.5 = 5. You have to use absolute uncertainties here because the sum is adding.
Now if you want to use relative uncertainties for the division part, you first have to work out what the relative uncertainty of the sum is. Let’s say the sum came to 30000K, so the relative uncertainty is 5/30000, around 0.02%.
So now you can say the uncertainty of your average is the uncertainty of your sum. That is 300K ± 0.02%. Or you can convert this back to an absolute uncertainty, which is (5/30000) * 300 = 0.05K.
The point though, as Taylor is trying to tell you, is you don’t need to go through the relative uncertainty stage in this case. The result of multiplying any quantity by an exact value will result in multiplying the absolute uncertainty by that exact value. In this case we can go from the uncertainty of the sum, ±5°C, to the absolute uncertainty of the average by dividing by 100.
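Running the numbers from this exchange (a sketch using the example’s own figures, just to show the arithmetic):

```python
import math

# The example's own figures: 100 readings, each with u = 0.5 C.
n, u_each = 100, 0.5
u_sum = math.sqrt(n) * u_each   # quadrature: sqrt(100) * 0.5 = 5.0

# Direct route: scale the absolute uncertainty by the exact constant 1/n.
u_mean_direct = u_sum / n       # 0.05

# Detour via relative uncertainty, per the comment (sum ~30000, mean ~300):
total, mean = 30000.0, 300.0
u_mean_relative = (u_sum / total) * mean  # also 0.05

print(f"u(sum) = {u_sum:.2f}, u(mean) direct = {u_mean_direct:.2f}, "
      f"via relative = {u_mean_relative:.2f}")
```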
“But you keep treating it as if that means the absolute uncertainty is also the same.”
NO, I don’t. Again, your reading comprehension is atrocious. Just non-existent.
The uncertainty of x in a multiplicative functional relationship is a RELATIVE UNCERTAINTY. The relative uncertainty of q and x is the same.
“So now you can say the uncertainty of your average is the uncertainty of your sum. That is 300K ± 0.02%. Or you can convert this back to an absolute uncertainty, which is (5/30000) * 300 = 0.05K.”
You are *still* trying to make the average into a functional relationship.
It isn’t. All you have done here is try to find a value that when multiplied by “n” gives the same value as the measurement uncertainty of the sum. What happens if all the component measurement uncertainties are *not* the same? The average becomes a useless value because it doesn’t describe the population. Same for an average value of measurement uncertainty.
“The result of multiplying any quantity by an exact value will result in multiplying the absolute uncertainty by that exact value.”
That is *ONLY* true if all of the component measurement uncertainties are the same! When you are measuring different things with different devices in different environments the component measurement uncertainties will not be the same.
You *still* haven’t grasped the concept of measurement uncertainty.
“NO, I don’t”
So oblivious to your own faults, you don’t even realize you are doing it.
“The relative uncertainty of q and x is the same.”
But your claim is the absolute uncertainties are the same, remember. You say if the uncertainty of the sum is ±5°C, then the uncertainty of the average is ±5°C. That’s your entire justification for claiming that measurement uncertainty increases as sample size increases.
“You are *still* trying to make the average into a functional relationship.”
You’re dividing a number by 100 – how much more functional do you want the relationship to be?
“All you have done here is try to find a value that when multiplied by “n” gives the same value as the measurement uncertainty of the sum”
You don’t understand the most basic maths, do you? At no point did I multiply any number by 100.
“What happens if all the component measurement uncertainties are *not* the same?”
Talk about trying to throw down a red herring. It does not matter. All the uncertainties being the same size was your example. If they are different it’s still the same operation – just add all the uncertainties in quadrature to get the measurement uncertainty of the sum. Divide by N to get the measurement uncertainty of the average.
“The average becomes a useless value because it doesn’t describe the population.”
And the deflections continue. We were not talking about a sample, and having different measurement uncertainties says nothing about how well a sample describes the population. Please try to focus on the point at hand, rather than all these pathetic deflections.
“That is *ONLY* true if all of the component measurement uncertainties are the same!”
You really don’t understand what you are talking about, do you? Taylor (3.9) has nothing whatsoever to do with the measurement uncertainties being the same. It simply says that if you have a value with a (single) measurement uncertainty and multiply the value by an exact value, then the measurement uncertainty will also be multiplied by that exact value.
“But your claim is the absolute uncertainties are the same, remember. You say if the uncertainty of the sum is ±5°C, then the uncertainty of the average is ±5°C. That’s your entire justification for claiming that measurement uncertainty increases as sample size increases.”
If you don’t believe that measurement uncertainty grows with more and more items being added into the set then I can only hope to God that you are never involved in any endeavor that has ramifications for human life and prosperity.
It is a simple matter of the additional items increasing the variance of the distribution. The larger the variance the smaller the hump around the average meaning the actual average becomes more and more uncertain.
As an example, with a sharp peak at the value of 1 for an average, the probability of the average actually being 1 is pretty high. The probability of some other value, say .9 or 1.1 actually being the average tails off quickly. With a low, broad hump at 1 the probability of the average actually being 1 gets much closer to the probability of it being .9 or 1.1. I.e. the uncertainty of the average goes up. As the variance gets wider and wider from adding different items into the data set the uncertainty of that average grows – IT DOES NOT DECREASE! There is no division by “n” or the “sqrt(n). The only question is how fast does the uncertainty grow.
You claim to be a statistician as does bdgwx. But neither of you are actually willing to learn and understand what the statistical descriptors of a distribution are telling you. You want to believe that as you add different items into the distribution the *certainty* of the average grows when the statistical descriptors tell you otherwise.
If you would bother to actually calculate the variance as you add different temperatures (and this applies to climate science as a whole) into your distribution maybe this would become more clear. But I doubt you would 1. do the work and 2. understand what it is telling you. You are just too ingrained with the meme that all measurement uncertainty is random, Gaussian, and cancels.
“It is a simple matter of the additional items increasing the variance of the distribution. ”
There’s the customary Gorman deflection. Rather than address the problem with his confusing relative and absolute uncertainties, change to an entirely different, but equally wrong, argument. Ignore the fact that if he’s right on this all his sources, Taylor, the GUM, and NIST, must all be getting it wrong.
There are so many ways this argument about variance is wrong, and contradicts Tim’s other arguments, that it’s difficult to list them all.
1. He’s using variance, yet by all his other arguments variance doesn’t exist. It relies on the average of temperatures, which he insists can’t be calculated. It has no physical reality, being based on square degrees. You can’t put it in your fridge, or take a photo of it, so it doesn’t exist. And of course, it’s a statistical descriptor relying on numbers being numbers, so has no physical meaning.
2. He’s just plain wrong. Adding items to a distribution does not increase its variance. In general sample variance tends to a constant value as sample size increases (see the sketch after this list).
3. The uncertainty of the sample variance, or standard deviation, is not the uncertainty of the mean. Something he should accept if he wants to use TN1900 example 2 as the correct model for determining uncertainty of a monthly temperature average.
4. Having attacked me numerous times for not distinguishing between uncertainty in the measurements and uncertainty from sampling, he now switches from a discussion about measurement uncertainty to sampling uncertainty without mentioning it.
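Points 2 and 3 are easy to check with a minimal simulation, assuming a normal population purely for illustration:

```python
# Minimal sketch: as sample size grows, the sample variance settles toward
# the population variance, while the standard error of the mean shrinks.
import numpy as np

rng = np.random.default_rng(42)
for n in (10, 100, 1_000, 10_000):
    sample = rng.normal(loc=15.0, scale=2.0, size=n)  # hypothetical temps
    var = sample.var(ddof=1)                          # sample variance
    sem = sample.std(ddof=1) / np.sqrt(n)             # standard error of mean
    print(f"n={n:>6}: variance={var:.3f}, SEM={sem:.4f}")
# The variance hovers near 4.0 (= 2.0**2) at every n; the SEM keeps falling.
```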
“There’s the customary Gorman deflection. Rather than address the problem with his confusing relative and absolute uncertainties, change to an entirely different, but equally wrong, argument. Ignore the fact that if he’s right on this all his sources, Taylor, the GUM, and NIST, must all be getting it wrong.”
This isn’t an issue of relative vs absolute uncertainty. That is *YOUR* deflection.
You refuse to admit that variance is a measure of uncertainty of a distribution and that measurement uncertainty smears the variance to make it wider. How you relate the size of the smearing is based on the relationship of the dependent and independent variables – by either a relative or absolute relationship.
You can’t even admit, or more likely can’t understand, that variance tells you the relationship between the mean and its adjacent values. As the probabilities of the surrounding values approach that of the mean the uncertainty of the mean grows.
I don’t know how to explain it more simply. I’ve given you pictures of it in the past but you couldn’t grasp those either.
“This isn’t an issue of relative vs absolute uncertainty. That is *YOUR* deflection. ”
Incredible. This whole discussion starts because Tim claims that the measurement uncertainty of an average is the same as the measurement uncertainty of the sum, because he doesn’t understand that he’s switching between relative and absolute values.
Now, I’d like to think he’s finally realised his mistake, so he changes the argument to the variance of the temperatures, switching in the process between measurement and sampling uncertainty. Then claims I’m deflecting by even mentioning the difference between absolute and relative uncertainties. Tim has so many cognitive defenses it’s no wonder he’s incapable of learning anything.
“You refuse to admit that variance is a measure of uncertainty of a distribution and that measurement uncertainty smears the variance to make it wider.”
He still ignores the obvious point that if he’s correct that temperature cannot be averaged, then you cannot have a variance of temperature. Variance being defined in terms of the average difference from the average temperature.
He also fails to see that his own insistence that anything that can not be put in a fridge does not exist in the physical world, means that variances do not exist. Especially considering his insistence on using variance rather than standard deviation means he is talking about square degrees of temperature, something that does not exist in the real world.
As to his assertion, however: standard deviation is a measure of individual uncertainty, not uncertainty of the average. It tells you how close an individual measurement is likely to be to the true mean. It does not tell you how close the sample mean is likely to be to the true mean – that requires the standard error of the mean.
“You can’t even admit, or more likely can’t understand, that variance tells you the relationship between the mean and its adjacent values”
No idea what you mean by “adjacent values”. Variance tells you the average square distance between all values and the mean. But since you also claim the mean has no meaning, I’m not sure why you think the variance has any.
“As the probabilities of the surrounding values approach that of the mean the uncertainty of the mean grows. ”
Wut??
“I don’t know how to explain it more simply.”
That doesn’t surprise me.
“Now, I’d like to think he’s finally realised his mistake, so he changes the argument to the variance of the temperatures, switching in the process between measurement and sampling uncertainty.”
The variance of a distribution *IS* a metric for the uncertainty of the average. And measurement uncertainty does nothing but expand the variance of a distribution. If the max data entry is 100 and the uncertainty is +/-1 then the variance using the +1 uncertainty is greater than that using just the stated value.
You’ll never understand this because you are too tied into the meme that all measurement uncertainty is random, Gaussian, and cancels.
“He still ignores the obvious point that if he’s correct that temperature cannot be averaged, then you cannot have a variance of temperature. Variance being defined in terms of the average difference from the average temperature.”
You got this one right. Temperature is an intensive property. I’ve said from the word go that using temperature as an extensive property is not valid physical science.
“He also fails to see that his own insistence that anything that can not be put in a fridge does not exist in the physical world,”
I already answered this. You can’t see voltage but *I* know that it exists because I can feel it! Put a 9v battery on your tongue and see if you *feel* anything. “Length” is not a physical “thing”, it is an attribute, a property, of a physical thing. The fact that you can’t understand that is *your* problem, not mine.
“It tells you how close an individual measurement is likely to be to the true mean”
The operative word here is “likely”. If the values 0.9, 1.0, and 1.1 have almost the same “likelihood” because of a wide variance then which one is correct? Ans: You don’t know. It’s part of the Great Unknown.
“It tells you how close an individual measurement is likely to be to the true mean. It does not tell you how close the sample mean is likely to be to the true mean – that requires the standard error of the mean.”
Nope. You *still* don’t have a handle on uncertainty. The standard error of the mean is a metric for sampling error; it tells you nothing about whether the mean of the population is accurate or not.
We’ve been down this road before. You can have a standard error of the mean be zero while the population mean is WILDLY INACCURATE because the data itself is inaccurate. The accuracy of the mean can only be determined by propagating the uncertainty of the individual data elements onto the mean.
This is just more proof that the meme “all measurement uncertainty is random, Gaussian, and cancels” is so ingrained in your mind that you simply can’t get away from it. In your statistical world the mean of a distribution can always be accurately determined by using only the stated values of the measurements while ignoring the measurement uncertainty of the measurements.
Sorry – your distraction techniques won’t work here. I’ll keep asking about your original claim even if you just keep ignoring it.
You keep trying to pull this nonsense – just as you are getting close to realizing you’ve been wrong all these years, you throw a dead cat on the table. Start talking about how variance is the one true uncertainty, make your usual childish jibes that I’m the one who doesn’t understand statistics, and just keep lying about me. All in the hope that we’ll argue about that and forget what we were originally talking about.
So.
Do you now accept you were wrong to claim that the measurement uncertainty of an average is the same as the measurement uncertainty of the sum?
Do you accept that Taylor’s ẟq/q = ẟx/x means the sum and the average will have the same relative uncertainty, but not the same absolute uncertainty?
Do you accept that the equation leads to the special case of 3.9, which results in the absolute measurement uncertainty of the mean being equal to the absolute measurement uncertainty of the sum divided by sample size?
Do you accept that it is not true that measurement uncertainty will increase with sample size?
I’ve told you why measurement uncertainty *will* increase as you add multiple different things into the data set. The variance increases!
It’s just that simple. But you won’t accept that variance is a metric for uncertainty. As variance goes up the certainty of the average value goes down, it’s just an irrefutable fact of statistics. You claim to be enough of a statistician that you can lecture people on it but you have no basic, fundamental understanding of what statistical descriptors tell you about the distribution.
Add measurement uncertainty of the data elements into the soup and it smears the measurement uncertainty of the mean even wider.
Range is a metric for variance. Measurement uncertainty of the data elements causes the range to increase. If the range of the stated values goes from 5 to 20 and the measurement uncertainty smears that to 4 to 21 (i.e. an uncertainty of +/- 1) then the range goes up and so does the variance! As the variance of the distribution goes up, the peak at the average gets lower, making it more uncertain.
Think of peakedness this way. With a very peaked distribution with a small variance the values close to the mean fall off quickly. If the peak is 1, then the values one unit away from the mean might be .7 or 1.3 for a very peaked distribution. You are pretty sure that the mean *is* the mean. For a less peaked distribution with wider variance those values one unit away from the mean might be .99 and 1.1. So is the mean actually .99? 1.1? 1? If you have calculated that mean using only the stated values it might appear that you *know* the true value of the mean regardless of the peakedness. But when you add in the measurement uncertainty of the measurements you find that you *don’t* know the true value of the mean if the uncertainty is greater than the difference one unit away from the mean.
Mathematically, the uncertainty of the mean could be said to be related to the percentage of the mass of the distribution that is surrounding the mean. The lower the percentage, i.e. the flatter the curve, the less certain the mean becomes.
You are a blackboard genius who can apply formulas to calculate statistical descriptors but has no real understanding of what they are telling you. That’s because it is obvious that you have no real-world experience in actually applying those descriptors to metrology in the real world.
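The smearing itself is easy to demonstrate: for independent noise, the observed variance is the underlying variance plus the noise variance. A minimal simulation sketch, all numbers hypothetical:

```python
# Minimal sketch: independent measurement noise adds its own variance to
# the variance of the underlying values (Var_obs ~ Var_true + Var_noise).
import numpy as np

rng = np.random.default_rng(0)
true_values = rng.uniform(5.0, 20.0, size=100_000)   # stated-value spread
noise = rng.normal(0.0, 1.0, size=true_values.size)  # ~+/-1 instrument noise
observed = true_values + noise

print(f"Var(true)     = {true_values.var():.2f}")    # ~18.75 for U(5, 20)
print(f"Var(observed) = {observed.var():.2f}")       # ~19.75 = 18.75 + 1.0
```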
“Do you now accept you were wrong to claim that the measurement uncertainty of an average is the same as the measurement uncertainty of the sum?”
Nope. I am right. I’ve done too much real world stuff that confirms it.
“Do you accept that Taylor’s ẟq/q = ẟx/x means the sum and the average will have the same relative uncertainty, but not the same absolute uncertainty?”
Who has ever argued that isn’t true? Asserting that I have claimed otherwise is a strawman you made up. I’ve given you Taylor’s example of the paper stack MULTIPLE TIMES. Yet you refuse to actually try and understand it.
If you have a stack of 100 pieces of paper with a measurement uncertainty of ẟx then the uncertainty of q, ẟq, is 100ẟx. ẟq is *NOT* the same as ẟx.
You are so mathematically challenged that you can’t even figure out what that means. If x = 1, then you have ẟq/100 = ẟx/1. The relative uncertainties are the SAME but the absolute values are not!
As Taylor points out, which you have never bothered to actually read, when combining uncertainties of values that have different scales you *must* use relative uncertainties to compare them. The fact that he develops the following rule for multiplicative relationships is actually an outgrowth of that fact! You don’t even seem to understand *that* simple relationship in metrology.
You *really* need to stop cherry picking stuff and actually learn it. And that includes statistics. It’s simply not enough to know the formulas by rote with no understanding of their meaning.
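A minimal numeric sketch of that paper-stack arithmetic, with hypothetical sheet dimensions:

```python
# Minimal sketch: for a sum q of n identical sheets, the uncertainty of
# the stack is n*dx, yet the relative uncertainties of the sheet and the
# stack are identical. Values are hypothetical.
x, dx = 0.1, 0.001     # one sheet: 0.1 mm +/- 0.001 mm
n = 100
q, dq = n * x, n * dx  # stack thickness and its uncertainty

print(dq)              # 0.1  -> dq is 100*dx, NOT dx
print(dx / x, dq / q)  # 0.01 0.01 -> equal relative uncertainties
```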
Let’s stick to the subject.
Yet you claim that you accept u(average) / average = u(sum) / sum. So how exactly does that math work out for you?
As to your real world stuff – could you provide some examples of where you were able to demonstrate that the absolute uncertainty of the sum was the same as the uncertainty of the mean, and explain how you were able to test it.
You just did in the previous answer. It’s been your claim throughout that if the absolute uncertainty of the sum is ±5°C then the uncertainty of the mean is also ±5°C, yet you also accept that ẟq/q = ẟx/x is correct. It’s just not possible for ẟq/q = ẟx/x and ẟq = ẟx to both be true if q ≠ x.
You claim to understand algebra, you should be able to understand that.
“If you have a stack of 100 pieces of paper with a measurement uncertainty of ẟx then the uncertainty of q, ẟq, is 100ẟx. ẟq is *NOT* the same as ẟx.”
So again, do you accept that this means ẟAverage is *NOT* the same as ẟSum?
And why do you always insist on turning that example on its head? The example is demonstrating that if you measure a stack of paper and divide by 200 to get the thickness of a single piece of paper, you should also divide the uncertainty of the stack by 200 to get the uncertainty in the single sheet.
Is the reason you can never acknowledge that point because it conflicts with your claim that you never ever reduce uncertainty?
“You are so mathematically challenged that you can’t even figure out what that means. If x = 1, then you have ẟq/100 = ẟx/1. The relative uncertainties are the SAME but the absolute values are not!”
I do love it when you throw out some pathetic ad hominem before going on to explain to me exactly what I’ve been trying to tell you for years. They are not the same. But again, why are you incapable of getting the descriptions correct? In this example q is the average and x is the sum. q = x / 100. If x = 100 then q = 1. The equation would be ẟq/1 = ẟx/100 => ẟq = ẟx/100.
It’s almost as if you have a mental block on accepting that the uncertainty of the average is smaller than the uncertainty of the sum – which might have something to do with your answer to the first question where you are asserting that the uncertainty of the mean is the same as the uncertainty of the sum.
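A minimal numeric sketch of that algebra, run in the direction of the average rather than the sum (values hypothetical, mirroring the example above):

```python
# Minimal sketch: if q = x / n (an average of a sum x), then dq = dx / n,
# and the relative uncertainties still agree.
x, dx = 100.0, 5.0     # the sum: 100 +/- 5
n = 100
q = x / n              # the average: 1.0
dq = dx / n            # its uncertainty: 0.05, not 5

print(dq)              # 0.05
print(dx / x, dq / q)  # 0.05 0.05 -> same relative uncertainty
```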
“As Taylor points out, which you have never bothered to actually read…”
And when you get to that claim, it’s clear you know you have lost the argument.
“when combining uncertainties of values that have different scales you *must* use relative uncertainties to compare them.”
The average and the sum have the same scale – °C in this case.
“The fact that he develops the following rule for multiplicative relationships”
You do realize that Taylor didn’t invent these rules?
“is actually an outgrowth of that fact!”
It’s actually derived from the general equation for propagating errors, which Possolo suggests dates back to Gauss.
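For reference, that general first-order propagation formula (GUM equation (10) for uncorrelated inputs; Taylor’s general rule) can be written as:

$$u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^2 u^2(x_i)$$

Applied to the one-input model $y = x/n$ with $n$ an exact constant, the sensitivity coefficient is $\partial f/\partial x = 1/n$, giving $u(y) = u(x)/n$ – the special case both sides are arguing over.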
You simply can’t tell when absolute measurement uncertainty is applicable and when relative uncertainty is applicable.
Taylor tells you directly that when you have q = Bx that the relative uncertainty is ẟq/q = ẟx/x. The sum of the uncertainty in x determines the uncertainty in q. You do *NOT* MULTIPLY the MEASUREMENT UNCERTAINTY IN X BY 1/n. “n” is a constant and does not contribute *anything* to the uncertainty in q. NOTHING.
If q is 100 times as large as x then the uncertainty in q is 100 times as large as the uncertainty in x. It is *NOT* 100 times *less*. If x is one sheet of paper and q is 100 pieces of paper then the uncertainty in q is 100 times that in x.
You are *still* stuck in trying to convince everyone that an average is a functional relationship. It isn’t. It is a statistical descriptor that does not give one value out for one value in. It is a *probability* of what is most likely, not what *is*. Being the “most likely” also implies that the true value can be anything within the bounds of the distribution. That is *NOT* one value in and one value out.
Until you can understand *what* a statistical descriptor is you’ll never understand measurement uncertainty. A statistical descriptor tells you what might be, what “is” is part of the Great Unknown. A measurement uncertainty is exactly the same – it tells you what might be, what the true value *is* is part of the Great Unknown.
Error is not uncertainty. Maybe someday you’ll grok the difference but I’m not going to hold my breath waiting.
“Range is a metric for variance.”
Not a very good one – especially if you don’t know the distribution. Or are you just assuming it’s Gaussian?
If you have the data, just calculate the variance from that, or are you just using the range to avoid the problem caused by you rejecting the possibility of getting the mean of temperature?
“If the range of the stated values goes from 5 to 20 and the measurement uncertainty smears that to 4 to 21”
Depends on the direction of the differences caused by measurement uncertainty (trying to avoid using the triggering word there). You could just as easily get measurements of 6 and 19.
But your general point is correct, and what I’ve been trying to tell you all these years – measurement uncertainty will be reflected in the variance of your sample, the more uncertainty the larger the expected variance.
“As the variance of the distribution goes up, the peak at the average gets lower, making it more uncertain.”
Why do you always assume all distributions are Gaussian? What happens to the peak if it’s a uniform distribution?
“With a very peaked distribution with a small variance the values close to the mean fall off quickly.”
Again assuming this is a Gaussian distribution.
“If the peak is 1, then the values one unit away from the mean might be .7 or 1.3 for a very peaked distribution.”
Nope – you’ve lost me. Values one unit from a peak at 1 will be 0 or 2. Maybe you meant to say that if the variance is 0.09, and the distribution is normal, then around 2/3 of the values will be between 0.7 and 1.3.
“You are pretty sure that the mean *is* the mean.”
The mean is the mean, again there’s a clue in the fact that they are the same word. If you are saying you are pretty sure your sample mean is close to the population mean, then that will depend on sample size as well as the variance.
“For a less peaked distribution with wider variance those values one unit away from the mean might be .99 and 1.1.”
Did you mean to say that? 0.99 and 1.1 are closer to the peak, suggesting the variance is less.
“So is the mean actually .99? 1.1? 1?”
You don’t know, that’s why there is uncertainty. But I’ve still no idea here if you are taking the mean of a sample or just a single value.
“If you have calculated that mean using only the stated values it might appear that you *know* the true value of the mean regardless of the peakedness.”
Only if every value you measure is identical. The larger the sample the less likely that is unless there is no variance in the sample – in which case it’s just like measuring the same thing multiple times.
“But when you add in the measurement uncertainty of the measurements you find that you *don’t* know the true value of the mean if the uncertainty is greater than the difference one unit away from the mean.”
I assume here you mean you want to include a Type B measurement uncertainty. And this only makes sense if you are talking about an uncertainty or a systematic error.
But you are starting from the very unlikely premise that you have taken a reasonable sized sample and got identical results each time.
“Mathematically, the uncertainty of the mean could be said to be related to the percentage of the mass of the distribution that is surrounding the mean.”
Yes, if by that you mean it’s related by the standard deviation divided by the square root of the sample size.
“You are a blackboard genius”
Thank you. But I’m really not.
“Not a very good one – especially if you don’t know the distribution. Or are you just assuming it’s Gaussian?”
It’s a far better metric for variance than temperature is for enthalpy.
And, as usual, you aren’t even aware of how to calculate skewness. You are a cherry picking troll, a blackboard genius using formulas by rote.
“Depends on the direction of the differences caused by measurement uncertainty (trying to avoid using the triggering word there). You could just as easily get measurements of 6 and 19.”
So you finally learned about systematic bias did you? Now extend that to asymmetric uncertainty intervals! It could be 6 to 22!
“But your general point is correct, and what I’ve been trying to tell you all these years – measurement uncertainty will be reflected in the variance of your sample, the more uncertainty the larger the expected variance.”
That is *NOT* what you’ve been saying at all! The variance is calculated from the STATED VALUES, not the measurement uncertainty. The measurement uncertainty is a totally separate thing.
The only time the variance becomes the measurement uncertainty is if you use your typical meme of all measurement uncertainty is random, Gaussian, and cancels.
If you are wanting the combined measurement uncertainty then add the uncertainty from the variance to the measurement uncertainty, e.g. adding the Type A uncertainty from the variance of the data and the Type B uncertainty estimated for systematic biases.
You *still* don’t understand measurement uncertainty, the GUM, Taylor, or anything. Again, you are a cherry picking troll.
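A minimal sketch of that combination, assuming the two components are independent and are combined in quadrature; all values hypothetical:

```python
# Minimal sketch: combine a Type A (statistical) component with a Type B
# (estimated) component in quadrature to get a combined uncertainty.
import math

n = 22                       # hypothetical number of readings
s = 1.8                      # hypothetical sample standard deviation
u_type_a = s / math.sqrt(n)  # Type A: from the spread of the data
u_type_b = 0.5               # Type B: e.g. an estimated systematic bound

u_combined = math.sqrt(u_type_a**2 + u_type_b**2)
print(f"u_c = {u_combined:.2f}")  # -> 0.63 with these numbers
```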
“Why do you always assume all distributions are Gaussian? What happens to the peak if it’s a uniform distribution?”
It’s not just for Gaussian! Do you think skewed distributions don’t have means and medians?
Again, you have no real concept of what statistical descriptors tell you. You are a cherry picking troll.
OK, let’s get on to this claim that variance is uncertainty of the average, and that it gets bigger with sample size.
Questions:
Given that you think it’s impossible to have an average temperature, how are you calculating the variance?
Given you insist that each stage of a calculation has to represent something that exists in the real world, why are you talking about variance that cannot exist in the real world, let alone something you can put in your fridge?
What do you think a square degree of temperature means?
Why, when you were so insistent I had to distinguish between measurement and sampling uncertainty, do you think variance of the sample represents measurement uncertainty?
What makes you think variance increases with sample size?
(I know from your past claims it was because you don’t understand the difference between adding two random variables, and mixing two random populations. But do you still not understand the difference?)
And the main one, why do you think that the variance, or standard deviation of a sample represents the uncertainty of the mean?
“Given that you think it’s impossible to have an average temperature, how are you calculating the variance?”
I’m merely using the data they use. Showing they ignore the ramifications of what they use does *NOT* mean I believe what they use is valid!
“Given you insist that each stage of a calculation has to represent something that exists in the real world, why are you talking about variance that cannot exist in the real world, let alone something you can put in your fridge?”
Because that’s what climate science uses. Showing the internal inconsistencies of what they use is in no way confirming that what they are using is valid.
“What do you think a square degree of temperature means?”
It’s meaningless. That doesn’t mean climate science should ignore it since it is part of what they use.
Look, all you are doing is trying to tell me to shut up about what climate science does if I don’t agree with it. That’s censorship. It’s eliminating the very basis of the scientific method – “don’t rock the boat, don’t question consensus”. That’s Galileo and the Church. I’m not surprised to see you taking that position.
“I already answered this. You can’t see voltage but *I* know that it exists because I can feel it! Put a 9v battery on your tongue and see if you *feel* anything. “Length” is not a physical “thing”, it is an attribute, a property, of a physical thing. The fact that you can’t understand that is *your* problem, not mine.”
The question was about whether you regarded variances as existing in the real world. How do you put a variance in the fridge? What does a variance of 100 °C² actually mean in the real world? And how do you calculate it if you can’t average temperatures?
You are arguing at this point that the length of a rod is not a physical thing, but the average of the squares of the difference between the average temperature and individual temperatures – is a thing. You won’t accept numbers are numbers, but are still happy to use these abstract numbers to determine measurement uncertainty of a mean.
“If the values 0.9, 1.0, and 1.1 have almost the same “likelihood” because of a wide variance then which one is correct?”
They can all be correct. You are talking about a probability distribution representing the spread of all possible temperatures. Each temperature reading is a random selection from that distribution. They are all the correct value for each reading (aside from actual measurement uncertainty).
If you want to know about the uncertainty of the mean you have to work out the sampling distribution of the mean.
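A minimal simulation sketch of working out that sampling distribution (population parameters hypothetical): draw many samples, take each sample’s mean, and compare the spread of those means with sigma/sqrt(n):

```python
# Minimal sketch: the standard deviation of many sample means matches
# sigma / sqrt(n), the standard error of the mean.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 30, 20_000
means = rng.normal(15.0, 2.0, size=(trials, n)).mean(axis=1)

print(f"SD of sample means: {means.std(ddof=1):.4f}")  # ~0.365
print(f"sigma/sqrt(n):      {2.0 / np.sqrt(n):.4f}")   # 0.3651
```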
“The standard error of the mean is a metric for sampling error; it tells you nothing about whether the mean of the population is accurate or not.”
Again, you just keep throwing words together in the vain hope they make sense. The standard error is just another name for the standard deviation of the sample mean: the spread of the distribution that the mean from any random sample comes from. As such it’s an indication of how close the sample mean is likely to be to the population mean. Again, I’ve no idea what you think you mean by the accuracy of the population mean. The population mean is what you are estimating by the sample. It cannot be inaccurate any more than the length of the rod you are trying to measure can be inaccurate. It’s the measurement that may be inaccurate, not the measurand.
“You can have a standard error of the mean be zero while the population mean is WILDLY INACCURATE because the data itself is inaccurate.”
Your measurements being inaccurate has nothing to do with the population mean being inaccurate. That’s absurd. Of course if all your measurements have systematic error then your sample mean will have that same error. That’s true of any type of measurement.
Your own claim is that the variance represents the measurement uncertainty of the mean – yet if the SEM is by some bizarre coincidence zero, then the variance will also be zero. If there’s a systematic error in all your measurements, it will make no difference to the variance. Nothing you are saying in any way explains why you think the variance should be used as the measure of the uncertainty of the mean, and not the SEM.
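The systematic-error point is easy to demonstrate: a constant bias shifts every reading, and hence the sample mean, while leaving the variance and the SEM untouched. A minimal sketch, values hypothetical:

```python
# Minimal sketch: a fixed bias moves the mean but not the variance or SEM.
import numpy as np

rng = np.random.default_rng(7)
true_temps = rng.normal(15.0, 2.0, size=1_000)
measured = true_temps + 0.8  # hypothetical 0.8-degree systematic offset

for label, data in (("unbiased", true_temps), ("biased", measured)):
    sem = data.std(ddof=1) / np.sqrt(data.size)
    print(f"{label:>8}: mean={data.mean():.2f}, "
          f"var={data.var(ddof=1):.2f}, SEM={sem:.4f}")
# The mean shifts by exactly 0.8; variance and SEM are identical.
```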
“The accuracy of the mean can only be determined by propagating the uncertainty of the individual data elements onto the mean.”
And so, having claimed variance is the true measure of uncertainty to deflect from his mistakes about propagating measurement uncertainties, he now circles back to insisting that propagating measurement uncertainties is the only way to determine uncertainty, in order to deflect from his incorrect claims about variance.
“The question was about whether you regarded variances as existing in the real world.”
No, the question was does the variance of an intensive property exist in the real world. You are right back to making up strawmen to argue with because you are a troll.
“He’s a troll, Tim.” — Pat Frank
From someone who doesn’t know what a standard deviation is, that’s a complement.
Can’t even choose the right spelling or meaning of ‘compliment’.
In belly’s terms that means he made twice the error.
yep.
Now he claims I don’t know what a standard deviation is, what a nutter.
Desperation time sets in.
Idiot. It’s Pat who doesn’t know what a standard deviation is.
He’s a blackboard genius that has cherry picked some formulas that he can work out on the blackboard but he has no understanding of what they actually say. “Numbers is numbers” and “measurement uncertainty is random, Gaussian, and cancels”.
In bellman’s world you can average the heights of Watusis and pygmies and get something that means something physically in the real world. You could then order T-shirts sized to fit the “average” and expect them to fit a large percentage of each population. Or you could average all the boards in a lumberyard and get something useful in the real world. Or you could average the entire mass of the earth and use it to calculate the force of gravity on satellites no matter where they orbit around the earth.
Numbers is numbers.
Claiming that Pat F doesn’t know what a standard deviation is – much like blob condemning his work on the basis of web pages he found on Stanford.
I am reminded (again) of the average USAF pilot, who they discovered does not exist. To their chagrin (and waste of defense dollars), they also discovered the uncertainty of the average pilot was greater than the population of USAF pilots.
“Claiming that Pat F doesn’t know what a standard deviation is”
Something you should remember. It was only a year ago. He claimed uncertainties can be negative, standard deviations are both positive and negative, and finally claimed that standard deviation actually means an interval. The fact that you three have to follow him in that idiocy just shows how much you are into argument by authority.
You seem to think it’s outrageous that anyone should point out where somebody with a PhD has simply misunderstood a basic piece of mathematics.
Noise skipped, unread.
That’s a relief, was worried you might have learnt something in the last year.
A$$hole.
With bellman it’s always an argumentative fallacy used as “proof”.
As usual you simply cannot accept the truth. The standard deviation is the square root of the variance. That means it has both a negative and a positive root. If you didn’t have the negative root then no distribution could have values less than the mean. You would only have half of the curve if the distribution is Gaussian.
The standard deviation covers an interval of Mean – SD to Mean + SD. If the mean is normalized to 0 (zero) then you will have values running from -1 SD to 0 SD on the left side of the mean and from 0 to 1 on the right side of the mean.
You are, as usual, depending on the argumentative fallacy of Equivocation. You are saying that since the standard deviation is typically given as a positive number, the standard deviation has to be positive. In other words you are using different definitions for the word “standard deviation” depending on your need at the time.
You aren’t fooling anyone. How do you get an interval of -1SD to 0 SD if there isn’t a negative standard deviation?
“The standard deviation is the square root of the variance. That means it has both a negative and a positive root.”
I’m not going through all this again. You cannot accept the truth when it’s staring you in the face. The standard deviation is the positive square root of the variance. A negative standard deviation makes no sense. If you want to keep arguing this point – do so with the GUM:
See also 3.3.5
In other words you have no answer as to how a Gaussian normalized to zero can have a standard deviation of -1.
You are a cherry picking troll.
“In other words you have no answer as to how a Gaussian normalized to zero can have a standard deviation of -1.”
What bit of “positive” don’t you understand. You cannot have a standard deviation of -1. That’s the answer to your question. What you are asking for is something that doesn’t exist. You might just as well ask how you can have a negative height, or how a barrel can have a negative volume.
“You are a cherry picking troll.”
Cherry picking to you means quoting 4 times where the GUM uses the correct definition of standard deviation. As I said, if you think the GUM is wrong take it up with them, and every other book that says that standard deviation is always positive.
Or invent your own statistical notation which allows negative standard deviations and demonstrate how that’s consistent and useful.
Oh look – graph showing that the standard deviation is positive.
I see Tim is intent on ignoring the 4 quotes from the GUM all stating that the standard deviation is the positive square root of the variance. Maybe this is more his level
https://www.dummies.com/article/academics-the-arts/math/statistics/how-to-interpret-standard-deviation-in-a-statistical-data-set-169772/
https://www.jmp.com/en_is/statistics-knowledge-portal/measures-of-central-tendency-and-variability/standard-deviation.html
https://quizlet.com/explanations/questions/he-standard-deviation-assumes-a-negative-value-when-all-the-values-are-negative-when-at-least-half-t-a075a348-cd61-4a0f-a53e-0b0ae3d226c2
I’ll never understand why you have such a bent on ignoring reality.
Only 34% of the values in a Gaussian distribution are in the positive interval of the standard deviation. The other 34% are in the negative portion of the interval. The SD interval is from -SD to +SD. That means that SD *has* to have a negative component. To you SD is always an absolute value, |SD|.
Nor does it matter if the SD can go to zero. You *can* have a -0 and a +0. Again you have no basic understanding of calculus.
You come on here and lecture people about math but you have no “feel” for what the numbers mean or what they tell you. You have no real understanding that range and variance *are* metrics for uncertainty. You have no “feel” that the standard deviation describes values on *both* sides of the mean. You have no “feel” for the fact that error is not uncertainty. You don’t even understand that distances *can* be negative. It’s how vector math works!
To you it’s just “numbers is numbers”.
It’s not a surprise that the GUM gets this wrong. It’s a problem with statisticians in general. Too many are trained in the “numbers is numbers” meme. Physical scientists and engineers that have to have a “feel” for what the numbers are telling them understand that range and variance tell you about uncertainty, that the *NEGATIVE* and positive values of the SD tell you about the tolerances of *things*, that they can be too small just as easily as they can be too large; they understand that distance is a VECTOR and not a scalar and can certainly have a negative value.
And they understand that measurement uncertainty is not measurement error.
“I’ll never understand why you have such a bent on ignoring reality.”
And this is why I said I don’t want to be dragged down this rabbit hole again. Tim is undebatable. I’ve given him multiple references saying the standard deviation is never negative – 4 of them from the GUM.
“Only 34% of the values in a Gaussian distribution are in the positive interval of the standard deviation.”
Please at least try to understand that a standard deviation is a real number – it does not have a positive interval. You can define an interval in terms of a standard deviation, or any multiple of a standard deviation. But it is not in itself an interval.
“You come on here and lecture people about math but you have no “feel” for what the numbers mean or what they tell you”
And we are off on yet more patronizing insults. My “feel” for the standard deviation is that it’s a value that describes the average distance of a distribution from its mean (otherwise known as deviation). It’s not an exact average as it’s based on variance, which is the average square difference. More exact, but less useful, measures are AAD and MAD, both of which use “absolute” deviation. AAD, MAD, SD, and variance can all be seen as measures of deviation of a distribution. All of them describe this as a positive value, because it makes no sense to talk of a negative distance from the mean.
“You have no “feel” that the standard deviation describes values on *both* sides of the mean.”
It describes values on both sides of the mean. That doesn’t mean it can be negative. The radius of a circle describes all points on the circumference. That doesn’t mean the radius is all points on the circumference. The radius is always a positive scalar value. It doesn’t become negative in order to describe half the circle.
“It’s not a surprise that the GUM gets this wrong.”
Ah, so now the GUM is wrong. Strange, it was only a few weeks ago I was being attacked for not “believing” in the GUM.
As I say, if you think the GUM doesn’t even understand what a standard deviation is, take it out on them, not on me.
“they understand that distance is a VECTOR”
Sorry to break into this long list of things you think you are right about because you “feel” them. Could you provide a single reference that says distance is a VECTOR, using any definition of distance you like.
“You aren’t fooling anyone. How do you get an interval of -1SD to 0 SD if there isn’t a negative standard deviation?”
As I said, not getting into all this nonsense again. But the answer is trivial. If you want an interval from -1SD to 0, you just write [-SD, 0]. You see that little “-” symbol. That’s a function indicating the additive inverse of the value after it. That is, -SD is the value that, when added to SD, will give you zero. As SD is positive you know that -SD is negative.
If you were correct that SD can be negative, then -SD could be positive, and you would have no idea if -SD was more or less than 0.
Subtraction is just adding a negative number.
I said nothing about subtraction, just about putting a “-” in front of a positive value.
You asked how you could get an interval starting at -SD. You gave the answer yourself: just write -SD. -SD is negative because SD is positive. If SD was negative then -SD would be positive, but fortunately that can never be, because standard deviations are never negative.
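The notational point is easy to check: any standard statistics routine returns a non-negative standard deviation, and the interval below the mean is written with the additive inverse, m - s. A minimal sketch with hypothetical readings:

```python
# Minimal sketch: s is the positive square root of the variance; the
# one-sigma interval is [m - s, m + s], with -s meaning -1 * s.
import numpy as np

x = np.array([12.1, 14.8, 15.3, 16.0, 17.9])  # hypothetical readings
m, s = x.mean(), x.std(ddof=1)

assert s >= 0  # standard deviations are never negative
print(f"s = {s:.3f}")
print(f"one-sigma interval: [{m - s:.3f}, {m + s:.3f}]")
```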
Putting a “-” in front of a positive number makes it a NEGATIVE NUMBER. It’s a basic concept in vector calculus. You ADD vectors. Some vectors are positive and some are negative. Saying distance is always positive is just ignoring the fact that distance is a vector and not a scalar.
As I said, you have no real feel for what numbers are telling you.
“Putting a “-” in front of a positive number makes it a NEGATIVE NUMBER”
No it does not. Objects in mathematics, including numbers and vectors, are immutable – they cannot change. Applying a function to a value does not change that value – it just maps it onto a different value. If you write -π, you are not saying π is now a negative number. You are saying that -π is the value that, if you add it to π, gives you the additive identity. If you write 2π, it does not mean that you have made π twice its original size.
“Saying distance is always positive is just ignoring the fact that distance is a vector and not a scalar.”
You’ve still learned nothing in the last year. Even Pat Frank warned you not to suggest distance can be negative. Distance is not a vector. Certainly not in any metric space. Distance is defined as a real, non-negative number, zero if and only if it’s the distance from a point to itself.
If you are talking about distance as in distance traveled, then that’s also a scalar quantity. What I suspect you are talking about is displacement, not distance.
Unless there has been a recent (and I mean very recent) change in position he has been adamant that the Xi inputs into the measurement model y = f(x1, x2, …, xN) have to all be for the same measurand. This position was maintained even after it was pointed out (multiple times) that the example given in the section explaining the measurement model concept had 4 inputs each of completely different things even to the extent that they had different units. It’s hard to conceive of a credible argument that the GUM intended Xi to be of the same measurand considering 1) they never say so and 2) the examples given are of inputs of different things.
That’s certainly possible in the case of measurands which need to be combined into a higher level measurand.
These discussions get way too complicated and intertwined to even think about unravelling it back to the original section 🙁
That’s the whole point of a measurement model. The JCGM group even has a separate document, 6:2020, dedicated to furthering the documentation of measurement models. It’s no surprise that there are many examples in it where the Xi are of different measurands.
I’ve looked all over the 160 acres behind my house. I can’t find an average to measure. What does one look like?
I’ve never been able to find a “length” or a “width” either. Have you? How did you measure it? What did it look like?
No answer I see.
How do you measure something that doesn’t exist?
The average area density of biomass in kg·acre⁻¹ would be an example of a measurand that exists behind your house.
Add an irradiance source, and the irradiance doubles.
OTOH:
Add a temperature “source”, the temperature doubles?
Who are you trying kid with this nonsense?
blob?
bellcurvewhinerman?
Stokes?
Kilograms are extensive. Area is extensive. I can find dirt (i.e. kg) and area (the size of my backyard) in my backyard.
I’ve looked among a LOT of blades of grass in the backyard and can’t find an average however! How can I MEASURE something I can’t find?
Still think the detector inside a Fluke 62 is a “thermopile”?
If you are not measuring the same measurand then exactly what do you think the uncertainty is telling you?
This would be like measuring the heights of Shetland ponies using a yardstick and the heights of mushrooms using a micrometer and then combining the measurements, finding their average, and assuming that will tell you something.
It would be like going to the lumber yard, measuring every single board in the place using a different measuring device for each, finding the average length, adding up the measurement uncertainties and thinking the average length and measurement uncertainty is telling you something.
If you aren’t measuring the *same* thing then exactly what do you think you are learning from the average? From the measurement uncertainty?
Those “inputs” should be measurements of the *same* thing, or at least identical things.
You and bellman are blackboard statisticians whose world view is that numbers are just numbers and don’t have to actually mean anything in the real world. I’m sorry but measurements come from the REAL world. They are meant to tell you about the real world. If you aren’t measuring the same thing then you aren’t learning anything about the real world.
Sorry, I disagree with you. The length of a board is a property of the board that can be measured. The length is *NOT* a measurand. The measurand is defined by its properties but those properties are not the measurand or even “a” measurand.
From the GUM: “To meet the needs of some industrial and commercial applications, as well as requirements in the areas of health and safety, an expanded uncertainty U is obtained by multiplying the combined standard uncertainty uc by a coverage factor k. The intended purpose of U is to provide an interval about the result of a measurement that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand.”
The measurements are of the measurand.
A functional relationship *can* have several measurands as the factors in the relationship. E.g. voltage, current, and resistance can all be measurands but their properties are what is measured, the measurements themselves are not measurands.
If I tell you: “here is a tape measure, go out and measure a length”, what do you come back with as a value?
Also, the uncertainty combination will contain factors that aren’t direct measurements, such as the uncertainty of the tape measure.
Here’s a bit of trivia: what do you get if you send a linear tape measure to a calibration service for calibration?
So now the length of a board is not a measurand??
That’s right. The length of a board is *NOT* a measurand. It is a measurement.
Do you have a “length” somewhere on a shelf in your house? What do you use to measure it?
Yes. My bookshelf has a length. Just now I used my Fluke 417 to measure it.
“Yes. My bookshelf has a length. Just now I used my Fluke 417 to measure it.”
I didn’t ask if you had a bookshelf. I asked if you have a “length”, a physical object. If you don’t have something that is physical then how can it be a measurand?
The length of your bookshelf is a MEASUREMENT, it is not a measurand. It is a property of your bookshelf, it is the *bookshelf* that is the measurand.
Not according to the GUM. JCGM 200:2012 says a measurand is the “quantity intended to be measured”. In the notes it specifically mentions that the length of a steel rod is the measurand and that names of substances are not measurands. Then JCGM 100:2008 gives the example “Vapour pressure of a given sample of water at 20 °C”
And if there is any confusion about what “quantity” means then both JCGM 100:2008 and JCGM 200:2012 define that as well as the attribute or property of a phenomenon, body, or substance. Examples given are length, electric charge, etc. So it is indisputable and unequivocal. A measurand is the attribute or property being measured; not the body or substance itself.
The length of my bookshelf is a MEASURAND according to JCGM.
My admiration that you keep up your attempts at reasoning with them. But from my perspective, as a son with a mom in the best Memory Care in Missouri, you have to step back and let them believe what they believe.
WUWT actually shares much with mom’s facility. The “guests” are well isolated from reality, and will have NO measurable impact on the world outside. WUWT is also ~$10,000/month cheaper, so you can argue that it actually serves a purpose…
Why are you wasting your time reading WUWT then?
You’re a real piece of work, blob…
“Why are you wasting your time reading WUWT then?”
No good excuse. Kramer effect is about the best I can come up with. I don’t tell any of my relatives I do so, to avoid the embarrassment. But you’ve snapped me out of it. Bu-bye….
Don’t let the door hit you in the A$$ on the way out.
Clown.
You are leaving out Note 1 from your reference:
————————————-
NOTE 1 The specification of a measurand requires knowledge of the kind of quantity, description of the state of the phenomenon, body, or substance carrying the quantity, including any relevant component, and the chemical entities involved
——————————————–
The QUANTITY is associated with the measurand. Not with the measurement of the measurand.
You also left out:
————————————
EXAMPLE 2 The length of a steel rod in equilibrium with the ambient Celsius temperature of 23 °C will be different from the length at the specified temperature of 20 °C, which is the measurand. In this case, a correction is necessary.
————————————–
It does NOT say the length of the steel rod is the measurand. It says “the length of a steel rod”
The steel rod is the measurand. It is the quantity to be measured. The length is the MEASUREMENT.
Again, your reading comprehension is just atrocious.
Look at more of the document:
——————————————–
1.1 (1.1) quantity
property of a phenomenon, body, or substance, where the property has a magnitude that can be expressed as a number and a reference
——————————————-(bolding mine, tpg)
It is the phenomenon, body, or substance that is the measurand. The “quantity” is a property – i.e. a MEASUREMENT.
You just got caught, once again, cherry picking crap you think bolsters your idiotic assertions. You and bellman need to learn to STUDY the subject and understand all the nuances and contexts before trying to lecture knowledgeable people about something you know nothing about.
More gems from the person who accuses everyone else of cherry-picking and not reading for meaning.
“The QUANTITY is associated with the measurand.”
Where do you think it says that? It literally says the measurand has to be specified in terms of the kind of quantity you are measuring.
Somehow you interpret that as meaning the steel rod is the measurand. How on earth do you read that into it? It’s saying the measurand is the length of the steel rod at 20°C, and a measurement at 23°C will be different and hence needs to be corrected. In no sane way can you attach the phrase “which is the measurand” to “the steel rod”.
“It is the phenomenon, body, or substance that is the measurand. The “quantity” is a property – i.e. a MEASUREMENT.”
This is in response to the definition of quantity as the property of a body. Again, the definition of measurand is “particular quantity subject to measurement”.
Really, just join the dots. The measurand is a quantity subject to measurement. Quantity is the property of a phenomenon, body, or substance.
Yet Tim thinks the measurand is the phenomenon, body, or substance and not the property being measured.
“As meaning the steel rod is the mesuarand.”
ROFL! “The length of a steel rod” is *NOT* the measurand?
It is the steel rod that is the measurand. The length will be different at 23C than at 20C – THE MEASUREMENTS, meaning the property of the steel rod at each temp!!
“This is in response to the definition of quantity as the property of a body. Again, the definition of measurand is “particular quantity subject to measurement”.”
Until you show me the picture of a length in your refrigerator I’ll continue to use the common knowledge that a measurand is something that exists physically and a measurement is the value of the property of that measurand. It *is* what the GUM says.
Is the hole you keep digging the measurand, or is the measurand the depth of that hole?
I gave you the benefit of the doubt by pointing out that sometimes the word had been used to describe the object being measured. But the VIM and GUM make it clear that the current usage is for it to mean the property being measured. Claiming that the GUM agrees with you shows a complete inability to accept the meaning of words.
If you want more evidence, just look at the examples used in the GUM.
H1 says the measurand is the length of a copper wire at a specific temperature. No suggestion that the measurand is really the wire, or that the length doesn’t exist.
H3 has measurands that are the statistical parameters of a linear regression. Oh dear. They didn’t get the memo saying that statistical parameters can not be measured.
It’s not my field, so I may well have the wrong end of the stick.
For clarification, is the measurand the “thing” (the cylinder in the example) or the property of interest of the thing (the volume of the cylinder)?
The measurand is the thing, not the property of the thing.
That’s why I asked if you have a “length” somewhere, a thing that exists that can be measured. If it doesn’t exist then it is a measurement and not a measurand.
The voltage applied to a thing is a measurand. Its value is the measurement. The current flowing in the object is a measurand, its value is the measurement. The power (a measurand) being used by the thing is the current (a measurand) times the voltage (a measurand).
The power, voltage, and current represent a functional relationship. They are all measurands with values that are measurements. The relationship describes “physical* things, things that exist.
An average is not a functional relationship, it’s a statistical relationship. It doesn’t describe a physical thing that exists. That’s why I can’t find one in my backyard. It’s a *STATISTICAL DESCRIPTOR*. It’s an “expectation” of a probable value of a thing but it is *not* the thing itself. There is no guarantee that the average is THE value of *anything*. A function defines one y value for one x value. A statistical description can’t do that, it can only assign a probability for what y might be. That’s not a functional relationship, it’s a statistical relationship.
The map is not the territory. An average is a statistical descriptor that is a map of a territory; it is not the territory. And it’s not even a complete map of the territory, because you don’t know the variance, kurtosis, and skewness of the territory. An AAA roadmap isn’t as good a map of the territory as a topographical map, which is a more complete description of the territory. But neither map *is* the territory.
I know it’s a hard concept to grasp, especially for statisticians (who seem to be the main defenders of the GAT on here). But when you ask a statistician if they have an “average” in their pocket they just look at you blankly. It’s the same difficulty that statisticians have with “true value +/- error” vs “estimated value +/- measurement uncertainty”. They equate “error” with “measurement uncertainty” because they can’t grasp the concept of the “GREAT UNKNOWN”. They want to calculate “error” as a statistic so it becomes a measurand, instead of understanding that the “uncertainty interval” is a cloudy crystal ball. That leads them to ALWAYS assume that all measurement uncertainty is random, Gaussian, and cancels so they don’t have to deal with the GREAT UNKNOWN.
The GUM has a lot of verbiage that may be subject to interpretation. This is not one of them. It is unequivocal. According to the GUM a measurand is a property or attribute of a phenomenon, body, or substance that is intended to be measured.
Length is a property or attribute of a body. If there is an intent to measure it then it (not the body) is the measurand.
You are lecturing about something you don’t understand anything about. You have no understanding of the GUM at all.
A measurand is *NOT* a property or attribute. A measurand is something you measure. The measure is not the measurand.
Again, to determine power you measure voltage, resistance, and current.
Voltage is a measurand. Its value is a measurement, not a measurand.
Current is a measurand. Its value is a measurement, not a measurand.
Resistance is a measurand; its value is a measurement, not a measurand.
An average is a statistical descriptor of a set of measurements. It is *NOT* a measurand.
When you can post an image of an average in your backyard, or in your pocket, *then* I’ll believe an average is a measurand.
Trust me, you’ll never be able to do it!
“For clarification, is the measurand the “thing” (the cylinder in the example) or the property of interest of the thing (the volume of the cylinder)”
I think there’s some confusion here because in some older definitions it can be used for either. But in all the modern sources I’ve seen it is made clear that measurand is only used to mean the property being measured, not the thing that has the property.
Here’s the definition in the GUM, taken from the VIM
Pressure *IS* a thing! Jump in the ocean and swim down 20 feet. I’ll guarantee you that you will find out that the pressure is a physical thing!
You left off the note following your quote:
“NOTE The specification of a measurand may require statements about quantities such as time, temperature and pressure.”
Time, temperature, and pressure are *things*. Their value is a measurement.
As usual, your reading comprehension is atrocious.
“quantity subject to measurement” – The measurement is *NOT* the quantity! It is a property of the quantity!
I’ll ask again (even though I know you won’t answer):
Do you have an “average” stored away somewhere that you can measure? I can’t find one in my house or backyard. Is there one in yours?
Pure deflection. The question isn’t about whether the measurand is a “physical thing”. It’s about whether it’s an object such as a barrel, or a property of the barrel such as its height.
The GUM clearly says the measurand is a “quantity” subject to measurement. Length is a quantity you can measure. Let’s see what the GUM has to say about quantity. I’ll include all the notes before you accuse me of cherry picking again.
————–
B.2.1
quantity
attribute of a phenomenon, body or substance that may be distinguished qualitatively and determined quantitatively
NOTE 1 The term quantity may refer to a quantity in a general sense (see Example 1) or to a particular quantity (see Example 2).
EXAMPLE 1 Quantities in a general sense: length, time, mass, temperature, electrical resistance, amount‑of‑substance concentration.
EXAMPLE 2 Particular quantities: length of a given rod, electrical resistance of a given specimen of wire, amount-of-substance concentration of ethanol in a given sample of wine.
NOTE 2 Quantities that can be placed in order of magnitude relative to one another are called quantities of the same kind.
NOTE 3 Quantities of the same kind may be grouped together into categories of quantities, for example: work, heat, energy; thickness, circumference, wavelength.
NOTE 4 Symbols for quantities are given in ISO 31.
==================
But the point is they specifically give length, and length of a given rod, as examples of quantities. Length is a quantity that can be measured. Length of a barrel is a quantity that can be measured. It is a measurand.
Your attempts to distinguish between things you consider to exist and things you consider not to exist are just philosophical onanism.
“Do you have an “average” stored away somewhere that you can measure? “
Yes. Lots of them.
Take a picture of one and post it here for all of us to see.
You do realize that things can exist without being visible?
So what does that have to do with anything? I can’t see the current flowing into a capacitor but it *is* a physical thing in the real world that gets stored in the capacitor. I can’t see the voltage being applied to something but it exists in the real world and I can store it in an inductor.
I can measure the properties of that current and voltage but that doesn’t make the measurements into a measurand, they are just properties with a value.
“So what does that have to do with anything?”
You insist that things don’t exist unless you can take a photo of them. Now you admit you know current flows whilst being invisible. Your obsession with trying to divide the world into things you think exist and things you reject the existence of is getting beyond a joke. Claiming that things only exist if you can fit them in your fridge or pocket sounds like the ravings of a madman.
You are so dense that it is unbelievable.
How do you measure a length if you can’t see it?
How do you measure a width if you can’t see it?
Can you just close your eyes and *guess* at where something begins and ends? And then average those “guesses” and get a 100% accurate value? I’m pretty sure that you probably believe that since you think you can measure a crankshaft journal to the thousandths of an inch using a yardstick if you can just make enough measurements!
Voltage exists. I can FEEL it. I’ve been knocked on my ass twice by it, once from working inside a TV and once working inside a high power RF transmitter, and am probably lucky to still be alive. Current exists, I once saw it melt a wrench an installer dropped on a 24V bus in a telephone central office. Heat exists, I’ve been burned by it from my acetylene torch and my soldering irons. EM waves exist, I’ve had RF burns from them and knew a Western Electric installer who got cataracts from looking in the end of a microwave waveguide while it was carrying power. Magnetism exists, I’ve built both small and large electromagnets using the magnetism.
Numbers are not just numbers. Measurements are not merely “numbers are numbers”. Measurements don’t have some metaphysical existence without being related to something that exists.
From example 2 of TN1900.
It’s not funny and I shouldn’t laugh, but I literally busted out laughing while watching TV with the family when I read that. I’m sure TG will find some way to rationalize (in his mind) his way out of this one. After all, that’s what he did with division by sqrt(m) in the same example.
You don’t get it! Since when is a probability distribution a real thing in the real world?
Possolo uses the term loosely. It doesn’t comport with any real world definition of what a measurand is.
The mean is something that “might” be, like a fairy godmother “might” be. It isn’t something that *is*.
And the mean of an intensive property doesn’t exist. Do you *truly* believe that a statistical description of a distribution exists in the real world so you can put it in your pocket?
Do you *truly* believe you can add colors and get an average color?
“The GUM clearly says the measurand is a “quantity” subject to measurement.”
HOW DO YOU MEASURE AN AVERAGE?
Again, do you have one in your pocket?
How do you measure a length? Do you have one in your pocket? You measure the length of an OBJECT! It is the OBJECT that is being measured, not the measurement itself!
How do you measure a radius? Do you have one in your pocket?
You keep confusing measuring the value of a property with being the property itself. It isn’t. It’s a measurement!
You don’t understand metrology, the science of measuring, at all!
“HOW DO YOU MEASURE AN AVERAGE?”
Stop shouting – you are really sounding hysterical.
You measure an average the same way you measure any complex measurand – by taking all the input measurements and applying the appropriate function – in this case (X1 + X2 + … + XN) / N.
“Again, do you have one in your pocket?“
A 5 year old child would have got board with this joke by now.
“How do you measure a length?”
I don’t know – maybe try a tape measure.
Your favorite example goes into some detail as to how they measured the height and radius of a water tank.
“Do you have one in your pocket?”
No little boy. I do not have a water tank in my pocket.
“You measure the length of an OBJECT!”
Yes. If you want to know the length of an OBJECT. Are you finally realizing it is possible to measure a length?
“It is the OBJECT that is being measured,”
No. It’s the length of the OBJECT that is being measured. Or anything else you want to know the length of. Do you have this OBJECT in your pocket? Measuring its length won’t tell you the quantity of the object, just its length. Hint, the OBJECT’s quantity is probably one.
“not the measurement itsefl!”
Duh!
“How do you measure a radius? Do you have one in your pocket?”
Could somebody give Tim a shove, his needle’s stuck.
“You keep confusing measuring the value of a property with being the property itself.”
You keep lying about what I’ve said to avoid admitting you were wrong. Measuring a quantity does not give you the quantity – that’s why you have uncertainty. That’s why there is a distinction between a measurand and a measurement.
Am I going to have to go through every definition from the GUM, just for you to ignore it?
And you still haven’t internalized the fact that if you insist an average is not a measurand, then by definition it cannot have a measurement uncertainty.
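For what it’s worth, here’s that recipe as an actual propagation-of-uncertainty calculation – a minimal Python sketch with invented numbers, assuming independent inputs, applying the GUM’s law of propagation to y = (X1 + X2 + … + XN) / N:
————–
import math

# Hypothetical input measurements and their standard uncertainties
x = [20.1, 19.8, 20.3, 20.0]
u = [0.5, 0.5, 0.5, 0.5]
n = len(x)

mean = sum(x) / n                          # the measurement function

# Independent inputs: uncertainties combine in quadrature.
u_sum = math.sqrt(sum(ui**2 for ui in u))  # uncertainty of the sum
u_mean = u_sum / n                         # each sensitivity coefficient is 1/n

print(mean, u_sum, u_mean)                 # 20.05 1.0 0.25
————–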
“You measure an average the same way you measure any complex measurand – by taking all the input measurements and applying the appropriate function – in this case (X1 + X2 + … + XN) / N.”
Again with the “numbers is numbers” meme. They don’t have to mean anything in the real world.
If you can’t store it then how do you measure it? You may be bored with that question (and I notice your spelling), but you can’t answer it either.
The fact that you don’t have an answer other than “numbers is numbers” should be a signpost telling you to think again about what you are saying. We don’t live in your statistical world, we live in the real world.
“And you still haven’t internalized the fact that if you insist an average is not a mesurand, then by definition it cannot have a measurement uncertainty.”
What in God’s name do you think I have been telling you? And you *still* can’t use the terms “sampling uncertainty” and “measurement uncertainty”, can you?
“Again with the “numbers is numbers” meme.”
Numbers are numbers – they have been for millennia. It’s a shame your education never caught up with that.
“Numbers are numbers – they have been for millennia. It’;s a shame your education never caught up with that.”
from Wikipedia:
“Mathematics is a field of study that discovers and organizes methods, theories and theorems that are developed and proved for the needs of empirical sciences and mathematics itself. “
from Britannica:
“Mathematics, the science of structure, order, and relation that has evolved from elemental practices of counting, measuring, and describing the shapes of objects.”
from Mario Livio, a leading astrophysicist:
“We accept the view, initially espoused by Galileo, that mathematics is the language of science and expect that its grammar explains experimental results and even predicts novel phenomena. ”
Einstein said:
“How is it possible that mathematics, a product of human thought that is independent of experience, fits so excellently the objects of physical reality?”
Numbers are not just numbers. They are used to describe and understand PHYSICAL REALITY. You and so many others want to imbue numbers, like statistical descriptors, with some kind of meaning all of their own while having no relation to physical reality.
A statistical descriptor is a *tool* useful in understanding the distribution of something that exists in physical reality. It is not a measurand. It is not an object that physically exists. You can’t keep one in your pocket or in your garage. Its only *meaning* is what it can tell you about the distribution which physically exists. It has no meaning on its own.
In fact, the “average” can’t even tell you about a distribution on its own, *other* statistical descriptors are needed to fully understand a distribution. Yet you, bdgwx, and climate science refuse to provide even the most basic couplet of average and variance. I’ve just perused three different papers on climate science that I have on my hard disk and not a single one ever bothers to calculate the variance of the temperatures in any of the temperature data sets they use. NOT ONE.
Everyone in climate science and those who support the claims of climate science have been brainwashed with the meme that all measurement uncertainty is random, Gaussian, and cancels leaving only sampling uncertainty defining how accurate the average is! Not a single one believes that variance is a measure of uncertainty of the average, only the standard deviation of the sample means is important for judging accuracy.
And this includes *YOU* no matter how much you deny it.
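(For what it’s worth, the point that a mean alone underdescribes a distribution is easy to demonstrate – a quick Python sketch with invented numbers, where two datasets share a mean but have wildly different spreads:)
————–
from statistics import mean, pstdev

a = [19.9, 20.0, 20.1, 20.0]   # tight cluster
b = [10.0, 30.0, 15.0, 25.0]   # same mean, wide spread

for data in (a, b):
    print(mean(data), pstdev(data))
# Both means are 20.0, but the standard deviations (~0.07 vs ~7.9)
# differ by two orders of magnitude -- the mean alone cannot show that.
————–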
He (and they) still can’t make it past the basic fact that uncertainty is not error.
Not a single one of those quotes claims that numbers are not numbers.
“Numbers are not just numbers.”
Moving the goal posts again. Numbers can represent lots of different things. That doesn’t mean they are not numbers.
“They are used to describe and understand PHYSICAL REALITY.”
That’s one use. But it’s only when you understand that numbers are abstractions that you can really make use of them. So much progress was made whenever people stopped obsessing over what numbers meant, and just accepted them for the abstractions they are. Zero, negative numbers, and imaginary numbers, for instance, can all be dismissed as not real because they don’t represent the physical world.
“You and so many others want to imbue numbers, like statistical descriptors, with some kind of meaning all of their own while having no relation to physical reality.”
Most children progress from the idea of two apples plus three apples make five apples, and learn that 2 + 3 = 5, is a useful abstraction.
“A statistical descriptor is a *tool* useful in understanding the distribution of something that exists in physical reality.”
Exactly. Really, I don’t care if you want to think of it as existing in reality or just as a useful model of reality. You still use it, and measure it, in the same way as any of the wood you keep in your pockets.
“You can’t keep one in your pocket or in your garage.”
I can’t keep you in my pockets, but I still think there’s a chance you exist. Do you keep the diameter of the Earth, or the distance to Alpha Centauri, in your pockets? Does that mean you can’t measure them?
“Its only *meaning* is what it can tell you about the distribution which physically exists.”
And that’s a bad thing, why exactly?
“In fact, the “average” can’t even tell you about a distribution on its own, *other* statistical descriptors are needed to fully understand a distribution.”
Using the definition in TN1900, the “thing” you are measuring is the distribution, and the measurand is the property µ of the distribution. That’s the property you are interested in if you want to measure the average maximum temperature in that month at that station. If you personally want to measure other things, nobody’s stopping you.
“Yet you, bdgwx, and climate science refuse to provide even the most basic couplet of average and variance.”
The variance you also claim does not exist? The variance that is just numbers and does not describe any thing in the real world? What is a square temperature? How do you measure it? Or do you now accept that numbers are numbers, and it’s possible to calculate useful information even if it is not something you could put in your pocket?
“Everyone in climate science and those who support the claims of climate science have been brainwashed with the meme that all measurement uncertainty is random, Gaussian, and cancels leaving only sampling uncertainty defining how accurate the average is!”
Nurse! He’s off again.
Think he has ever read GUM 4.1.3?
Nope! They haven’t actually read *anything* for context and meaning. They are genius cherry pickers!
I want to know how they store their lengths, widths, and averages. Must they be refrigerated? What do you use to measure them?
I see no one has supplied an answer as to where they store their lengths, widths, and averages.
Could it be that those things don’t physically exist?
How about this:
Add an irradiance source, and the irradiance doubles.
Add a temperature “source”, and the temperature doubles?
I don’t think so — only in the clown-world that is trendology.
“I see no one has supplied an answer as to where they store their lengths, widths, and averages.”
The same place you store pressure, time and current.
This has to be the craziest argument in the history of crazy arguments. Try running a 200km marathon and then tell me length doesn’t exist.
I *can* store pressure. It happens every time I turn on my air compressor.
I *can* store current, it’s what a capacitor does.
Time? Time is constantly changing, hard to store something that is in constant motion. Ever try to “store” a 2 year old? That doesn’t mean that time isn’t a measurand. I can point to my white hair to prove that I’ve stored time there.
I can’t store a length as a measurand because it doesn’t physically exist. Neither can I store a width. It doesn’t physically exist. What *does* exist is a 2″x4″ board with properties of length and width. I *can* store that board.
Could somebody check up on Tim. I fear he’s having some sort of breakdown
How about annex D?
———————————-
D.1 The measurand
D.1.1 The first step in making a measurement is to specify the measurand — the quantity to be measured;
———————————————————-
Which is in effect the first step in an uncertainty analysis.
More total and utter malarkey! 0.002C is the RESOLUTION of the ARGO sensors. It is *NOT* the measurement uncertainty of the ARGO float. The measurement uncertainty of the ARGO float is in the neighborhood of +/- 0.5C.
Manually read LIG thermometers prior to 1900 have at least a +/- 0.5C measurement uncertainty merely because the readings were all recorded to the nearest units digit. That’s even the case today in ASOS measurement stations: the recorded temp is in Fahrenheit and is in the units digit. Thus any use of those temps will have an uncertainty of at least +/- 0.5C.
You simply don’t understand even the basic concepts of metrology yet you continue to come on here and try to gaslight everyone that climate science can calculate average temps down to the hundredths of a degree. It’s simply sad.
Accuracy and Stability of Argo SBE 41 and SBE 41CP CTD Conductivity and Temperature Sensors
[Wong et al. 2020]
[Oka 2005]
bozo-x doubles-down on his nonsense.
“GISTEMP uncertainty is on the order of 0.15 C prior to 1900”
ROFLMAO… If you really believe that then you really are a totally gullible idiot !
It is total GARBAGE.
Show us where GISS measurements before 1900 came from.
Very few countries had any measurements at all.
Oceans weren’t measured in any systematic way whatsoever and only on narrow shipping lanes.
Who ever made that guess of the uncertainty is just making up a fantasy number with zero relevance to the real world.
Here is a chart of where surface data came from for 1880-1920.
Ocean data is even more sparse.
NO-ONE in their right mind could possibly think that a realistic “global” temperature could be constructed from that.
GISS et al are TOTALLY FAKE.
bdgwx,
GISSTEMP uncertainty is more like +/- 1.5 deg C two sigma if you use the non-applicable statistics and terms of the normal distribution.
I have written this up as many times as it has been ignored.
Geoff S
“GISSTEMP uncertainty is more like +/- 1.5 deg C two sigma”
How can it be that small? According to all the leading experts here, uncertainties always add. That means if you are using say 10000 measurements in a month to determine the average, and each instrument has a random uncertainty of ±1°C, then the uncertainty should be ±100°C.
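(To spell out the arithmetic behind that figure – a throwaway Python sketch, numbers purely illustrative: ±100°C is what you get if you root-sum-square 10000 independent ±1°C uncertainties into the *sum*; dividing by N then gives the mean:)
————–
import math

n = 10000
u_each = 1.0                   # per-instrument uncertainty, deg C

u_sum = math.sqrt(n) * u_each  # quadrature sum of the uncertainties: 100.0
u_mean = u_sum / n             # the same rule applied to the average: 0.01

print(u_sum, u_mean)
————–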
Clown #1
Well done on your counting skills.
But if you disagree with what I said you’ll have to take that up with Tim and Jim. And remember Tim’s done all the exercises in Taylor, so he can’t be wrong.
Reread what he said. This time for comprehension.
OK. He said “Clown #1”. That’s it. The entire content of his wisdom. What interpretation do you think I’ve missed. He doesn’t spell out who he thinks is a clown – but given his predictable hatred to me, I’m assuming he means me.
Let me read it again,
Yep – still the same vacuous insult. Still content free. Maybe it seems profound to you. You seem put out that I replied sarcastically rather than whining about how many down votes I keep getting.
Thanks also for the compliment. I do like to add some humour to my comments. But I can’t compete with the comic genii who come up with gems such as Bellend, Bellcurveman and Bellhop. Now that’s satire.
The “clown of monte karlo.” Has a certain ring don’t you think?
This is not a Reddit forum, child.
At least put forth something of substance alongside your silly insult.
You are kidding right? You clearly do not visit here often. Childish putdowns are mandatory. Try reading bnice2000’s posts. At least mine didn’t say he was “dumb.”
Gag, Simon the Marxist checks in …
Bellman,
Please avoid making up stuff then attributing it to me indirectly. Geoff S
“Please avoid making up stuff then attributing it to me indirectly. Geoff S”
I wasn’t intending to attribute it to you, and I’m sorry if you got that impression.
I was attributing it to Tim and Jim Gorman, and anyone who accepts their logic. It’s been something I’ve been arguing about with them for years. The main reason why so much of my time has ended up trying to explain why they are wrong.
But I am not making it up – it is what they claim to believe, and have tried to justify repeatedly.
This from the guy that thinks calculating the average of a bunch of 2″x4″ boards down to the thousandths of an inch will mean that a beam built with them will allow you to span a foundation with a thousandths of an inch accuracy.
Lie about me all you want. Just try to deny that you think the uncertainty of the average is the same as the uncertainty of the sum.
Yes or no, do you think the uncertainty of the average of 100 thermometers each with a random measurement uncertainty of ±0.5°C, will have an uncertainty of ±5.0°C? Do you extend that same logic to the average of 10000 thermometers?
“This from the guy that thinks calculating the average of a bunch of 2″x4″ boards down to the thousandths of an inch will mean that a beam built with them will allow you to span a foundation with a thousandths of an inch accuracy.”
If you say I’m that guy then you are producing the worst straw man argument of all time.
The span is going to be a sum, not an average. If I somehow know the average length of a board to 0.001″ (and for some reason am using inches), then I would expect the uncertainty of the span to be equal to 0.001″ times the number of boards. This, by a strange coincidence, is the uncertainty of the sum.
Then why do you continue to beat on the standard deviation of the sample means as being a measurement uncertainty? It isn’t.
It is the measurement uncertainty that is useful in the real world. For components in a critical mission I might require that most of the component population fall within 3, 4, 5, or even six sigmas of the average. For ultra-critical missions I might not even accept a statistical description of the population. I might require that *EVERY* single component be measured and affirmed to be within tolerance with *NO* outliers. How precisely you calculate the average using samples is useless.
No one is missing the fact that you REFUSE to use the terms “sampling uncertainty” and “measurement uncertainty”. You do so in order to continue to use the argumentative fallacy of Equivocation in order to hide what you are actually talking about. If you want to be taken seriously by those in metrology then use the appropriate terms!
“Then why do you continue to beat on the standard deviation of the sample means as being a measurement uncertainty? It isn’t.”
Are you talking about a sample or an actual set of boards being used to span a foundation?
“It is the measurement uncertainty that is useful in the real world.”
Do you want to know the uncertainty of the sum or of an average?
I’m assuming that in your example you have a number of boards and you want to know if they will be long enough in total to span the foundation. You measure each in turn with an individual measurement uncertainty and then put them through the propagation formula to get the uncertainty of the required measurand. If you are only interested in the total length then the measurand you need is the sum of the lengths. That’s if you accept the sum is a measurand. If for some reason you want to know the average length you can divide the sum by the number of boards, and divide the uncertainty of the sum by the number of boards.
But the only use I can think of for the average in this case is so you can multiply it by the number of boards again to get the total length, in which case you can multiply the uncertainty of the average by the number of boards to get back to the uncertainty of the sum.
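A rough sketch of that reasoning in Python, with invented numbers – N boards, each measured with its own uncertainty, propagated to the sum and then to the average (independent measurements assumed):
————–
import math

lengths = [96.02, 95.98, 96.05, 95.95, 96.00]  # hypothetical, in inches
u_board = 0.03                                 # per-board standard uncertainty

n = len(lengths)
total = sum(lengths)
avg = total / n

u_total = math.sqrt(n) * u_board  # independent uncertainties add in quadrature
u_avg = u_total / n               # dividing by the exact constant n scales
                                  # the uncertainty by the same factor

# Multiplying the average back up recovers the sum and its uncertainty:
print(total, u_total)             # 480.00 ~0.067
print(avg, u_avg)                 # 96.00  ~0.013
————–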
“For components in a critical mission I might require that most of the component population fall within 3, 4, 5, or even six sigma’s from the average.”
That doesn’t seem very safe. What are you using for sigma here? The standard deviation of the components, or of the uncertainty?
“I might require that *EVERY* single component be measured and affirmed to be within tolerance with *NO* outliers.”
Seems sensible.
“How precisely you calculate the average using samples is useless.”
Then why are you doing it?
Once again, you seem to be assuming that because you can think of examples where an average isn’t useful, that means an average can never be useful. It’s just bad logic.
“No one is missing the fact that you REFUSE to use the terms “sampling uncertainty” and “measurement uncertainty”.”
Unbelievable. I’ve been using those very terms from the start. I was attacked by one of you for “inventing” the term “sampling uncertainty”.
“Do you want to know the uncertainty of the sum or of an average?”
The uncertainty of the average *IS* the uncertainty of the sum.
Again, if q = Bx then the uncertainty of q is:
ẟq/q = ẟx/x + ẟB/B ==> ẟq/q = ẟx/x because ẟB/B = 0
It doesn’t matter if x is itself a sum and B is the constant 1/n. The uncertainty of q is the uncertainty of x.
Until you start using the terms “sampling uncertainty” and “measurement uncertainty” associated with a mean there is no use even discussing the topic with you. You hide behind the term “uncertainty” by itself and change what it applies to as needed.
They are *NOT* the same thing. Sampling uncertainty is *NOT* measurement uncertainty.
“The uncertainty of the average *IS* the uncertainty of the sum.”
Still blind to the type of uncertainty. The correct answer is:
The relative uncertainty of the average *IS* the same as the relative uncertainty of the sum.
“Until you start using the terms “sampling uncertainty” and “measurement uncertainty” associated with a mean there is no use even discussing the topic with you.”
We are only talking about measurement uncertainty here. If you want to treat this as a sample, then use the SEM – as I keep telling you.
“They are *NOT* the same thing”
As I’ve been telling you from day one.
The average is the sum divided by a constant. The uncertainty of the average is the uncertainty of the sum plus the uncertainty of the constant. The uncertainty of a constant is zero so the uncertainty of the average is the uncertainty of the sum. Taylor describes this in detail in his books if you would ever bother to study it instead of cherry picking from it.
Eq 10 from the GUM *ONLY* applies when you are propagating the uncertainty of factors in a functional relationship describing a measurand. Since the average is *NOT* a measurand but a statistical descriptor, Eq 10 simply doesn’t apply.
You and bdgwx simply can’t grasp the concept that a measurement is determining the property of a measurand. A measurand is something you can MEASURE. The measurement is not the measurand, it is the value of a property of the measurand. The height of a barrel is *NOT* a measurand, it is a measurement of a measurand.
The average is a statistical descriptor. That is all it is. And it is *NOT* a complete statistical descriptor of a distribution. At least the standard deviation, the kurtosis, and the skewness have to also be provided in order to have a complete statistical description.
Got that? The average is not a functional relationship. It is a statistical descriptor of a distribution. A measurement is *not* a measurand, it is the property of a measurand.
Argument by assertion as usual. Still no reference to back it up. Still no argument beyond – “because I say so”.
And still no understanding that if the global anomaly is not a measurand it cannot have a measurement uncertainty – as defined in the GUM.
“The uncertainty of the average *is* the uncertainty of the sum”
And we are back to square one. Everything I’m about to say has been told to Tim hundreds of times – yet he’s incapable of even considering it. He has to be either very dense, or a troll, or both.
“The average is the sum divided by a constant.”
Do you consider a sum to be a measurand, but an average not? How can dividing a measurand by a constant stop it from being a measurand?
And how are you talking about a sum of temperatures when your other claim is that, as an intensive property, temperatures cannot be added?
“The uncertainty of the average is the uncertainty of the sum plus the uncertainty of the constant.”
Correction – The relative uncertainty of the average is the relative uncertainty of the sum plus the relative uncertainty of the constant.
You know this. I keep telling you this. You’ve read the book – you claim to understand it, and you’ve even quoted the formula from Taylor. Yet your cognitive dissonance stops you from hearing the word, let alone accepting the consequence.
Let me spell it out yet again. The uncertainty of the average divided by the average is the uncertainty of the sum divided by the sum, plus zero. If the average is smaller than the sum, then the uncertainty of the average has to be smaller than the uncertainty of the sum, and it has to be smaller in the same proportion as the average is to the sum.
You know what the ratio is, because you just said it: the average is the sum divided by a constant. You have to divide the uncertainty of the sum by that constant to get the correct uncertainty of the average. That’s the only way your equation can work.
And you should know this is correct, because, as I keep having to remind you, Taylor spells it out in the first of his special cases resulting from the rule for multiplying and dividing – box (3.9).
“Taylor describes this in detail in his books”
If only you were capable of understanding what he describes.
“Eq 10 from the GUM *ONLY* applies when you are propagating the uncertainty of factors in a functional relationship describing a measurand. Since the average is *NOT* a measurand but a statistical descriptor, Eq 10 simply doesn’t apply.”
Funny how you only decided that when you finally realized it didn’t give you the result you wanted. Before then you kept insisting that equation 10 was the one that had to be followed.
But as I say, if you don’t accept an average is a measurand then we can ignore all the GUM as it only applies to measurement uncertainties of measurands.
I take it from the rest of your rant that you now regard the global mean as a statistical descriptor of the average global anomaly, and are happy to accept statistical theory to estimate its uncertainty.
I’ve told you from the beginning that the uncertainty of the average was the uncertainty of the sum. You tried using Taylor to prove me wrong by cherry picking from his example 3.9 without reading the entire example where he says: “the fractional uncertainty in q = Bx is the sum of the fractional uncertainties in B and x. Because ẟB = 0 this implies that ẟq/q = ẟx/x.”
It doesn’t matter if B = 1/n or B = 1,000,000; its uncertainty is zero. And the uncertainty of the average is the uncertainty of the sum.
You turned yourself into a pretzel trying to argue otherwise.
AND YOU STILL CAN’T ACCEPT THAT FACT!
“I’ve told you from the beginning that the uncertainty of the average was the uncertainty of the sum. “
And I’ve told you from the start that you are wrong. That’s why these arguments just keep going round in circles, getting nowhere. You think simply asserting something makes it true.
“You tried using Taylor to prove me wrong by cherry picking from his example 3.9…”
It is not an example, it’s a rule. It works in all cases where you are multiplying a quantity by an exact value. You don’t get to pick and choose when you use it, and pointing it out to you is not cherry-picking.
“without reading”
you then go on to quote the exact words I’ve repeatedly quoted to you. Claiming I haven’t read them when they are the entire point I’m making is rich, even by your standards.
“And the uncertainty of the average is the uncertainty of the sum.”
Literally the opposite of what the rule implies. You still don’t understand that ẟq/q is a relative uncertainty. It’s the uncertainty of q divided by q. If q and x are different then ẟq and ẟx must be different. And if q = x / 100, then ẟq must equal ẟx / 100.
That should be obvious to anyone who’s progressed to basic algebra or proportions. And it should be obvious to anyone who can visualise what the equation means. I suspect at some level it’s obvious to you, but your egotism won’t allow you to admit you are wrong.
“And I’ve told you from the start that you are wrong. “
That’s because you simply can’t accept that an average is a STATISTICAL relationship and not a functional relationship. Measurement uncertainty applies to FUNCTIONAL relationships, not statistical relationship determining a statistical descriptor.
Again, Taylor describes this perfectly in his book!
If q = Bx then the uncertainty of q is related ONLY to the uncertainty in x.
ẟq/q = ẟx/x
The uncertainty of B is zero. If B = 1/n then it is a CONSTANT and its contribution to uncertainty is zero.
It is truly that simple.
“It is not an example, it’s a rule. It works in all cases”
And it says ẟq/q = ẟx/x. That works in ALL cases no matter whether you accept it or not.
“Literally the opposite of what the rule implies. You still don’t understand that ẟq/q is a relative uncertai Ty. It’s the uncertainty of q divided by q. If q and x are different then ẟq and ẟx must be different. And if q = X / 100, then ẟq must equal ẟx / 100.”
Unfreakingbelievable.
The uncertainty of 100 is ZERO. If q = X/100 then B = 1/100 and the uncertainty is ẟq/q = ẟX/X. 100 doesn’t come in there anywhere because ẟB = 0.
ẟ is small delta and is meant to symbolize uncertainty in Taylor. IT IS NOT THE PARTIAL DERIVATIVE SYMBOL. Remember that.
You *still* haven’t figured out Taylor because you refuse to read it for meaning!
If you measure q and estimate its uncertainty, ẟq, and it has a functional relationship to x, e.g. q/100 = x, then the uncertainty of x is ẟq/100 which leads to ẟq = 100ẟx. THERE IS NO DIVISION OF THE UNCERTAINTY OF X BY 100.
Do you ever stop to consider how many contradictions you make in a single rant? First you say that the measurement uncertainty of an average is the measurement uncertainty of the sum. Then you say that the average cannot have a measurement uncertainty. Then you say it’s because you think the average isn’t a functional relationship, which is wrong, but by whatever logic you use the sum would also not be a functional relationship. So it couldn’t have a measurement uncertainty.
Then of course, you insist that as temperature is intensive it cannot have a sum in any case, but you still want to use the measurement uncertainty of that meaningless sum to determine the measurement uncertainty of the average.
“ẟq/q = ẟx/x”
And you still don’t understand that the “/q” and “/x” mean these are relative uncertainties.
“That works in ALL cases no matter whether you accept it or not.”
You still can’t understand that I’ve been accepting it from the start – I’ve just trying to get you to understand what it means.
“The uncertainty of 100 is ZERO.”
Why do you keep having to go through this deflection every time? We all know that an exact number has no uncertainty; it’s the basis of the rule given in 3.9 by Taylor. The issue is not whether we accept 3.9 is correct. It’s that you fail, over and over, to understand that if the relative uncertainty of the average is equal to the relative uncertainty of the sum it MUST mean that the absolute uncertainty of the average is smaller than the absolute uncertainty of the sum.
“100 doesn’t come in there anywhere”
You’ve just pointed out that q = x/100. 100 very much does come into it. You do remember how algebra works I hope. Substitute x/100 for q.
ẟq/(x/100) = ẟx/x
=> ẟq = (x/100)ẟx/x
=> ẟq = ẟx/100
Which is how you get to the special case of 3.9.
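A numeric check of that algebra, with made-up values:
————–
x = 2000.0      # the sum
dx = 50.0       # its uncertainty
q = x / 100     # the average
dq = dx / 100   # what rule (3.9) gives

print(dx / x)   # 0.025 -- relative uncertainty of the sum
print(dq / q)   # 0.025 -- relative uncertainty of the average: equal
print(dx, dq)   # 50.0 vs 0.5 -- absolute uncertainties differ by the factor 100
————–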
“ẟ is small delta and is meant to symbolize uncertainty in Taylor, IT IS NOT THE PARTIAL DERIVATIVE SYMBOL”
Complete and utter strawman. Nobody is suggesting the ẟ is anything other than a symbol meaning uncertainty. I usually translate it into u(x) to avoid the confusion with calculus.
“You *still* haven’t figured out Taylor because you refuse to read it for meaning!”
Cling to that mantra if it helps you cope. But try to give a hint that you actually understand what is being said. That ẟq/q = ẟx/x means the relative uncertainties are equal, and that 3.9 shows what that means for absolute uncertainties.
“If you measure q”
q is the average, you keep insisting you can’t measure the average. What you are actually doing is measuring N things, adding them together to get x and calculating the uncertainty ẟx using the propagation rules for addition. Then you are dividing x by 100 to get q and using (3.9) to see what ẟq is. I can’t see why you keep getting these the wrong way round.
“then the uncertainty of x is ẟq/100 which leads to ẟq = 100ẟx”
Ah, I get it. More cognitive dissonance. Keep mixing things up until you get the answer you want.
“THERE IS NO DIVISION OF THE UNCERTAINTY OF X BY 100.”
And when all else fails, start writing everything in capitals.
If q/100 = x is the equation you want, then you are saying q is the sum and x is the average. So yes, in that case the uncertainty of q is 100 times the uncertainty of x, or in other words the uncertainty of the sum is 100 times the uncertainty of the average. Same thing as I’m saying, just expressed in a deliberately confusing manner.
A measurand is something you can measure a property of.
Tell us what tool we can buy at the hardware store to measure an “average”.
“A measurand is something you can measure a property of.”
Not for the first time, it would help these arguments if you provided your own private definition first.
The GUM and VIM definition:
It’s the quantity that is the measurand, not the object. It’s vapour pressure in the example not the sample of water.
“Tell us what tool we can buy at the hardware store to measure an “average”.”
Pencil and paper, a calculator, or a computer. The same as you would use to “measure” the sum of a load of wooden boards.
VAPOR PRESSURE IS A PHYSICAL THING! It is a measurand. The *value* of that measurand is a MEASUREMENT and not a measurand.
“It’s the quantity that is the measurand”
You are *still* trying to equate the measurement and the measurand. The VALUE is the measurement and not the measurand.
Learn to READ!
I’ll ask again: Where do you store your “averages”? In a refrigerator?
My Fluke 62 does it.
My ExTech RH350 does it.
My Greenlee DM-820A does it.
All three of them MEASURE THE TOTAL RECEIVED. They then *calculate* an average. They do not *MEASURE* the average.
The average is not a measurand. It is a statistical descriptor.
Whether you call an average a measurand or not (The GUM says it is) the funny thing is that all 3 do so with intensive properties.
And, uncertainty is still not error.
Another F
Where did I use the word error?
Heh, another indication you don’t know WTF you are doing.
And another indication that karlo is just trolling. He’s been like this for years. Just asserting others are wrong, never actually explaining why in his opinion they are wrong.
If you ask for an explanation he’ll just fall back on variations of “you wouldn’t understand the answer”.
Stop whining.
As always trying to educate you is a fool’s errand.
Much easier to have a laugh at your expense.
And you still don’t understand that error is not uncertainty.
Me: “If you ask for an explanation he’ll just fall back on variations of “you wouldn’t understand the answer”.”
km’s response: “As always trying to educate you is a fool’s errand.”
Should have added that he’ll probably start going on about me whining, as that’s his stock insult. He seems to think it’s a telling insult, as anything you say can be described as “whining” if karlo doesn’t like it. Even pointing out how he refuses to answer any question can be regarded as whining.
All the 1000s of explanations before now went right over your Olmec head.
Get back to me once you figure out that error is not uncertainty.
“you wouldn’t understand the answer” #2.
A skeptic might wonder if he doesn’t know the answer.
Ask me if I care what you think.
No need. The volume of your comments directed at me demonstrates you do.
You wish.
Back to the bozo bin for you.
It’s not been ignored. It’s been shown to be wrong. Don’t take my word for it. Prove it out for yourself using the law of propagation of uncertainty or the NIST uncertainty machine.
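For anyone who wants to try it, here’s a bare-bones Monte Carlo in the same spirit as the NIST Uncertainty Machine – all numbers invented for illustration:
————–
import random
import statistics

random.seed(1)
N_INPUTS = 100   # hypothetical number of measurements
U_INPUT = 0.5    # standard uncertainty of each, deg C
TRIALS = 20_000

outputs = []
for _ in range(TRIALS):
    # Perturb each input by a draw from its uncertainty distribution,
    # then evaluate the measurement function (here, the mean).
    inputs = [20.0 + random.gauss(0, U_INPUT) for _ in range(N_INPUTS)]
    outputs.append(statistics.fmean(inputs))

# The spread of the simulated outputs is the propagated uncertainty:
print(statistics.pstdev(outputs))   # ~0.05, i.e. 0.5 / sqrt(100)
————–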
Clown #2
bdgwx,
Here’s a different slant re Aussie data. Our BOM now sends homogenised temperatures to those who compile global averages. The BOM have produced 4 versions of these ACORN-SAT adjustments. The versions differ, so that for a given station, the envelope enclosing all values is of the order of +/- 1.5 deg C wide.
It is hard to work out which ACORN-SAT version was used from time to time. Therefore, we assume that the width of the individual station data is adequate to represent the envelope around the final global temperature estimate a la GISSTEMP etc.
What is “wrong” with this reality? Geoff S
The uncertainty of a given station is not the same as the uncertainty of a spatial average. You have to propagate all of those uncertainties from individual stations all the way through to the final spatial average in accordance with the law of propagation of uncertainty.
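A minimal sketch of what that propagation looks like for a weighted spatial average – station values, uncertainties and weights all invented for illustration, stations assumed independent:
————–
import math

anomalies = [0.8, 1.1, 0.5, 0.9]  # hypothetical station anomalies, deg C
u_station = [0.3, 0.5, 0.4, 0.3]  # per-station standard uncertainties
weights = [0.4, 0.3, 0.2, 0.1]    # area weights, summing to 1

avg = sum(w * a for w, a in zip(weights, anomalies))

# Law of propagation of uncertainty for y = sum(w_i * x_i),
# independent inputs: u(y)^2 = sum((w_i * u_i)^2)
u_avg = math.sqrt(sum((w * u) ** 2 for w, u in zip(weights, u_station)))

print(avg, u_avg)   # 0.84 ~0.21
————–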
No, you first have to understand what you are doing, instead of blindly plugging into the NIST machine.
Spatial uncertainty is a SAMPLING ERROR, not a measurement error. They are *NOT* the same. You also have to propagate the measurement uncertainty!
It’s a heck of a lot easier to claim it just cancels using a load of word-salad hand waving.
From the Mail:
I’m (not) surprised you’re not criticizing the Mail for using the deceitful Climate Reanalyzer website as a source of “data”
Do I sound like a fan of the Mail?
In fact they are quoting Copernicus because they are the first with data. But GISS and others say the same thing.
Are you a fan of Climate Reanalyzer?
Yes, if you want a whole-atmosphere picture of multiple variables. But if you want the history of surface temperature, best look at an average of just surface observations like GISS (or TempLS). It’s more homogeneous over time.
Surface temperature from GISS et al are TOTALLY UNFIT-FOR-PURPOSE of comparing global temperature over time.
1… They are not remotely “global”.
2… They are massively affected by non-climate factors like URBAN warming and agenda-driven mal-adjustments.
3… It is not remotely “homogeneous over time”… exactly the opposite.
And I know you are well aware of that fact.
That means you are being a deliberate LIAR and attempting to CON people with deliberate mis-information.
Nick knows that CR is a totally fabricated analysis.
That is why he likes it. !
The GAT is meaningless, regardless of source.
I want to know where the warming took place. It certainly wasn’t at my location! Was it in the SH since it is winter there? Maybe in the tropics? The GAT tells us nothing other than it is a made up number that is meaningless to anyone at any given place.
“I want to know where the warming took place.”
Here’s the map from GISSTemp.
https://data.giss.nasa.gov/gistemp/maps/
And here’s my own map using UAH data.
Oh dearie me.
Doesn’t look anything like the UAH map does it.
It’s exactly the same, just a different projection and a different anomaly scale. My scale goes from -5 to +5, where as Spencer prefers to use one going from -9 to +9. This should be clear just from looking at the legend.
But if you prefer to remove some of the detail here’s mine using the same scale.
Well done highlighting the El Nino effect.
Now.. where is the human causation. ?
We could also look at just before the now subsiding El Nino.
Again, we know where the warming is coming from…
… and it is NOTHING TO DO WITH HUMANS
Why? The question was where was it hot this June, not in April 2023.
Proof , yet again, that you are well aware the June 2024 UAH temp was from the persistent El Nino.
And proof yet again that you have zero evidence of human causation.
Well done.
Nor in my location. Freezing my bollocks off since end May. The warming is always where no one lives or no one cares.
And GISS is telling me it has been a +1 degree anomaly where I live – Blatant lie!
It’s cold where you are so it must be cold all over the world.
Sure.
In June? Here is the map. Yes, the UK wasn’t especially hot. Most of the US was pretty warm
Again the use of data YOU KNOW IS TOTALLY CORRUPTED by Urban heat and agenda-driven fabrications and mal-adjustments.
And using the base period during the COLDEST period in the 20th Century.
Is there no end to your deliberate LIES and MISINFORMATION. !
There is no way they knew the full ocean temperature in the base period to even 3-4 degrees. The data just did not exist.
We had about 1.5 weeks of typical warm temperatures in June in the UK, otherwise it was distinctly cool.
Or as it used to be called, a typical British summer.
Nick,
“More homogeneous” is not synonymous with “better quality” or “closer to the best estimate.”
If you disagree, proof is required. Geoff S
Gaudete!
Ed Miliband to lead UK negotiations at Cop29 climate summit
Negotiations suggests there is another side, an opposing party. At COP get togethers the most remarkable feature other than the cost is the unanimity of those attending.
They all want the same thing.
I believe that same thing, is an invite to the next COP get together…
Miliband is quite mad.
“Quite”
Typical English understatement 😉
Gaudete!
Best rendition ( IMHO ) is Steeleye Span with Maddy Prior
Gaudete
Rejoice….
Thanks for that! I hadn’t listened to Steeleye Span in decades. I really liked Maddy Prior’s voice. I am going to break out the vinyl and give it a listen.
Probably God save us all would be better. He’s bound to offer more than anyone is demanding while singing Catch The Wind and strumming a ukulele.
That’s fantastic news! Miliband is the ideal person to lead the UK delegation to COP as he is incompetent even by the standards of the Labour Party.
Nick is quite correct.
Copernicus report that the Earth has now warmed by +1.63 degrees for over a year now, well over the 1.5 limit that brings catastrophe.
To get below 1.5 degrees, the Earth now has to cool by 0.14 degrees.
Best of luck finding a mechanism that is going to cool the Earth by that much! (Spoiler alert… there isn’t one)
Net Zero measures are simply too late. We have already exceeded the catastrophe limit, just like Nick warned you.
So What.
Copernicus is NOT DATA… it is a fabrication.
Warming is GOOD
Enhanced atmospheric CO2 is GOOD.
The 1.5C “target” is a scientifically baseless made-up number anyway.
Great that we have reached it.. even though its only because of a strong El Nino event..
… Now what is the next target to aim for. ! 🙂
A ΔT of 1.5°C tells you nothing about the actual temperature. Assuming the baseline was 14°C (57°F), that means the current temperature would be 15.5°C (60°F). Not exactly warm. I would be more interested in a temperature index outside of the Arctic and Antarctic areas since the vast amount of population doesn’t live in those areas.
So when is it going to get warm enough to turn my central heating off, then?
The GAT cannot tell anything about the mythical “the climate”.
So, where is the catastrophe?
Well, in the UK it’s been cold and wet with 2 warmish days. A lousy summer.
If you were looking forward to a nice summer… then, it’s a disappointing catastrophe. Don’t put the jumpers away just yet.
It’s been sizzling hot in the American northeast. Probably the hottest since the ’30s. And wet- so the environment looks tropical.
I can’t print my response here, Joseph, other than you lucky bugger!
Well, it’s too damn hot for me. Most days have been from high ’80s to high ’90s F and very humid. Been this way for several weeks. I don’t mind so much if the night time temp drops below 70 F so the house can cool off cheaply, but the past week or so it only dropped to mid ’70s. One night it didn’t drop below 80. I’m comfortable in the house with AC but I’m an outdoorsman, working and enjoying the outdoors year round since the late ’60s- so staying in most of the day really sucks. I do get out to mow my large lawn but that’s about it and I do it early in the morning. Some days were not so humid and they were far nicer.
Would you really rather be cold and using the expensive heating?
Perhaps the grass is greener on the other side?
No, I’d rather not be cold. I’d rather have ideal weather. In the ’70s F and low humidity. In the ’60s or 50s at night. And I’d also like to be 30 years old again but without the baldness. 🙂
Lucky you. Here in my little piece of heaven we have been pushing or exceeding triple digits daily for what seems like weeks now. Night time lows high 70’s to mid 80’s. Keep the ac unit running 24/7 in the bed room, set to 78 degrees. Essentially, this is what the locals call summer.
1987 and 2012 were much hotter here in Michigan.
Nope.
It is hot, but it is also summer.
I think it’s the hottest in my 74 years. And, yes, I was aware that it’s summer.
Nonsense, it’s been hot but nothing abnormal. I have 60 years of experience and there have been many way worse.
OK, smarty pants- I didn’t say the entire planet- I’m talking where I live and only where I live, for 74 years- Wokeachusetts. I worked outside for the past 50 years so I think I’m capable of saying it’s the hottest SUMMER so far in my life. So take your “nonsense” and put it where the sun don’t shine.
Well Steve, you are ignoring the Sun, as the IPCC and most alarmists do. After the present cycle 25, which we are at the peak of, there will be much lower solar magnetic activity and predicted cooling of the order of 0.1-0.5C over the next decade.
If we have already reached the 1.5C limit, that’s fine- another stupid alarmist prediction that has failed! If people took a little more care and attention to basic science, they would know that throughout our history warming periods have been kind to humans and we have experienced much higher global temperatures in the past when we thrived. So, to say a few degrees of future warming is going to be catastrophic, is just plain scaremongering, even stupidly criminal, if it forces ignorant governments to enact ridiculous overkill policies like Net Zero. So, Steve, if you want to be on the right side of history you will rethink your ideas about global warming and the evidence of real science, not the rubbish environmentalists think is reality.
stevencarr,
Would you please make a list of the Top Five catastrophes that have happened on the way to that 1.5C warming limit that the guys at Potsdam Institute admit they pulled out of thin air?
Or even a sign that a single catastrophe is developing?
Here in Australia, the last 8 years have been in a cooling trend that makes me wonder how that can assist a climate catastrophe.
Geoff S
http://www.geoffstuff.com/uahjuly2024.jpg
I can’t follow your link due to my location, but with respect, I think they originally pulled the 2°C limit out of thin air (or their arses). I think the 1.5°C limit was magically conjured up as soon as they realized they were never going to hit the 2C limit in the first place (but I could be wrong).
The 2 degrees C seems to come from a modified version of Nordhaus’s DICE model, which gives a lower temperature change value for net negative effects than Nordhaus had calculated.
In turn, the 1.5 degrees C target was set to provide a large safety margin.
You can easily make the same plot from the same data at Roy Spencer’s blog. Lower troposphere, monthly temperature anomaly in deg C, version 6.0, years 2015 to now, linear trend, fitted.
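If you want to reproduce it yourself, a rough sketch – assuming you’ve saved the monthly UAH anomaly series to a two-column CSV of decimal year and anomaly (the file name here is hypothetical):
————–
import csv
from statistics import linear_regression

years, anoms = [], []
with open("uah_v6_tlt.csv") as f:     # hypothetical local copy of the data
    for row in csv.reader(f):
        years.append(float(row[0]))   # decimal year, e.g. 2015.042
        anoms.append(float(row[1]))   # anomaly, deg C

# Keep 2015 onward and fit the linear trend (Python 3.10+)
pairs = [(y, a) for y, a in zip(years, anoms) if y >= 2015]
slope, intercept = linear_regression([p[0] for p in pairs],
                                     [p[1] for p in pairs])
print(f"trend: {slope:.3f} deg C / year")
————–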
Yes, the folk at Potsdam Institute, “By Papal Appointment”TM, made a public statement that was unsupported by data, that 2 deg C was the figure of note. Soon after they added the 1.5 deg C aspirational target. It was not a careful scientific analysis of past data. It was a public-relations throwaway line and a disgrace. Geoff S
Stevencarr:
No, the Earth CAN be cooled by that amount if the 2020 mandate for low-sulfur fuel for maritime shipping is revoked.
*Oh no!*
Catastrophe huh?
I would have thought there’d be more to a catastrophe.
Catastrophe limit?? You need to get out more.
Except that what Copernicus produces IS NOT DATA.
It is a JUNK FABRICATION.
I am sure Nick is well aware of that fact.
“Of course temperatures go back at least a century before 1980”. Yeah tell us about the scarcity of GLOBAL records going back that far.
https://researchonline.jcu.edu.au/view/jcu/3EEE19904EEB05089D5676FF4150A41F.html
Or about the SH normals being mostly “made up. 😉
Entirely fictitious graph, with most temperatures for the Southern Hemisphere before the satellite era simply made up.
The point Stokes always hides from.
That’s the one using the Karlized (aka heavily adjusted) data isn’t it?
The “Karlized” data shows less warming relative to the raw data.
[Karl et al. 2015]
Then why bother with the Fake Data “adjustments”?
roflmao.
You still falling for all the CRAP that comes from the Karls, the Schmits and the Horsefathers.
What a gullible little twit you really are.
There is no “raw data from the NH or most of the SH that looks remotely like that.
Nearly all raw data shows the 1930s,40s being similar to around 2000-2010.
And of course you have the massive urban warming in that graph as well.
It is totally MEANINGLESS as a representative of global temperature over time.
If it is published in any of the corrupt climate journals, he swallows it as gospel.
Not to mention that nothing remotely resembling “global” data exists from 1880-1920.
Heck, realistic surface data doesn’t exist even now.
Just coming from a whole heap of really bad urban and airport sites.
Nick, you are quite correct about the need to look at longer term temperature trends to reference recent global warming. However, using GISS or NASA data that has been corrupted by false homogenization in relation to the urban heat effect, and further adjusted to enhance recent warming like the graph above, only emphasizes the distrust people have in climate science and predictions of any climate crisis, which we are clearly not having, as current mild warming is beneficial for the planet.
Fake Data and Fake Science.
Fake Data + Fake Science = Fake News
Can you post a link to a global average temperature dataset that you accept? I’d like to review it and compare it to GISTEMP.
How many more times are you going to post this same stupid question?
Sign of bankruptcy.
I don’t accept any of them.
A GLOBAL average temperature is just made up BS – there is no such thing that has any real-world meaning
How do you eliminate high warming rates then?
WUWT features it on every page.
There is no such thing as a global average temperature nor is there global data going back beyond the satellite era.
Dr. Spencer and Dr. Christy were able to figure out what the global average temperature is. Their result is featured on every page on the WUWT site.
And somehow, they don’t report absolute temperatures, nor are their numbers identical to the RSS numbers, which originate from the same raw data…hmm.
Free clue — the UAH is a proxy.
And UAH shows there is NO HUMAN CAUSED ATMOSPHERIC WARMING
With Fake Data?
And bozo-x thinks this is some kind of “official” endorsement…
warming rates of what? Minimum temps? Is that a problem that needs to be eliminated? Grain harvests would seem to indicate that it isn’t.
Good luck getting a rational answer.
Nice hockey stick.
That looks like a hockey stick.
You have to be joking or are you just ignorant about measurement uncertainty? I’m guessing, but your graph shows ~ ±0.15°C for measurement uncertainty in 1880. No one who has any education concerning metrology believes that.
I have taken the liberty of modifying the graph with some reasonable projections (blue lines) of possible uncertainties. You will notice I have reduced the uncertainty as time has progressed.
Using these possible projections, one may say an estimation of uncertainty surrounding the ΔT at present is 1.25 ±0.7°C, for an interval of [0.55 to 1.95]°C. This is not unreasonable considering that NIST found a measurement uncertainty of ±1.8°C in monthly temperatures. Anyone with an appreciation of the uncertainty in measurements would know that this interval is a good representation of what you DON’T KNOW. Uncertainty in measurement is the dispersion of values that could be attributed to the measurand. That is what this graph indicates.
I think this means that the change from 1880 to 2020 could range from 0.3 in 1880 to 0.5 in 2020, or 0.2 over 14 decades – about 0.014C per decade, or roughly 0.15C over the next 100 years.
The bottom line is that WE DON’T KNOW. Climate science DOESN’T KNOW. But having to admit that they DON’T KNOW would dry up finding money.
“I think this means that the change from 1880 to 2020 could range from 0.3 in 1880 to 0.5 in 2020 or 0.2 over 12 decades or about 0.02C per decade or 0.2C over the next 100 years.”
You still don’t get that a linear regression is not just the difference between the start and the end point. Or that you cannot just project a linear trend forward 100 years, especially when your data clearly isn’t linear.
But on the absurd uncertainty estimates in that graph, your own logic also means that the change could be from well below -1 in 1880 to well over +1 in 2020. a warming rate of about 0.2°C / decade. That’s almost three times the actual rate over the last 140 years.
It only seems absurd to you because you refuse to accept the concepts of metrology and measurement uncertainty.
You simply DO NOT KNOW what the actual value is. It can be anywhere in the measurement uncertainty interval.
So, yes, it *could* be a -1 in 1880 and well over +1 in 2020. YOU SIMPLY DON’T KNOW.
That is just something you have to accept as a result of measurement uncertainty. YOU SIMPLY DON’T KNOW.
Your “absurdity” is nothing more than a subjective opinion because you can’t accept that YOU SIMPLY DON’T KNOW.
There is a reason he is AKA Bellcurveman.
He will never grok even the simplest basics of MU.
“It only seems absurd to you because you refuse to accept the concepts of metrology and measurement uncertainty.”
I see you are going to ignore all my points and obsess on one word. And then try to turn this once again into your misunderstandings of how any of the equations actually work. Forget it. There’s no point trying to explain something to someone incapable of accepting they might be wrong about something.
You still can’t accept that the uncertainty of an average is smaller than the uncertainty of the sum, and until you figure out why, you will keep arriving at absurd conclusions.
“You simply DO NOT KNOW what the actual value is. It can be anywhere in the measurement uncertainty interval.”
You keep shouting that as if it’s something anyone disagrees with. You do not know what the actual value is – that’s why there is uncertainty.
“So, yes, it *could* be a -1 in 1880 and well over +1 in 2020. YOU SIMPLY DON’T KNOW.“
Now he’s resorting to all caps and bold, as if that improves his argument.
Again there is uncertainty so you do not know what the actual value is. But you do have intervals where it is more reasonable for the value to be. These are called uncertainty intervals. Ignoring probability or reasonableness, it’s possible for the actual value to be +10, +1000, in fact there is no limit on what it could be.
“That is just something you have to accept as a result of measurement uncertainty. YOU SIMPLY DON’T KNOW.“
If you were asked in an exam to estimate the uncertainty of a measurement and your answer was “I SIMPLY DON’T KNOW”, do you think you would get full marks?
A much better answer than the mK bullshit that you and all the climatistas push.
You really are an ignorant, brain-washed twit, aren’t you?
Show us where surface data from 1880-1920 came from.
Then try to justify how a “global” temperature could possibly be created from the data.
“You still can’t accept that the uncertainty of an average is smaller than the uncertainty of the sum,”
The uncertainty of the average, the way you use it, is a metric for sampling error; it is *NOT* a value of the measurement uncertainty. Again, for the umpteenth time, no matter how precisely you locate the average, it simply can’t tell you the accuracy of that average. Only a propagation of the measurement uncertainty can give you a hint of the measurement accuracy of the average.
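Stated neutrally, the three quantities the two sides keep trading here can be written down side by side – a minimal sketch, with all numbers hypothetical and the independent/systematic split assumed purely for illustration:

```python
import math

# All numbers hypothetical: n readings with sample spread s, each reading
# carrying the same measurement uncertainty u. The random/systematic split
# below is assumed purely for illustration.
n, s, u = 100, 2.0, 0.5

sem = s / math.sqrt(n)          # standard error of the mean (sampling precision)
u_random = u / math.sqrt(n)     # propagated u(mean) IF errors are independent
u_systematic = u                # propagated u(mean) IF errors are fully correlated

print(f"SEM: {sem:.3f}, u(mean) random: {u_random:.3f}, "
      f"u(mean) systematic: {u_systematic:.3f}")
```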
You just simply refuse to admit that the variance of a data set is a direct metric for the uncertainty of the average. It just shows that you don’t even understand basic statistical descriptors.
The concept of the GREAT UNKNOWN *is* a foreign concept to many who have never had any real world experience with it. I can assure you that any carpenter that has wound up with a wavy ceiling in a room understands it. I can assure you that any machinist or mechanic who has made or installed a bushing that winds up seizing up the machine understands it. How precisely you calculate the average value of the 2″x4″ boards used for the ceiling walls or how precisely you locate the average value of the bushings becomes totally meaningless when the objects are actually put into use in the real world.
“But you do have intervals where it is more reasonable for the value to be.”
More cognitive dissonance from you. It always winds up there. How do you determine what is reasonable if you don’t know what is reasonable?
I assure you that the engineer who thinks a measurement uncertainty of 1″ in an 8′ 2″x4″ is unreasonable has *NEVER* actually had to live with that assumption of reasonableness. And the engineer who thinks it is unreasonable to have a measurement uncertainty of 10″ when 10 of them are strung together to form a beam has never had to actually live with their assumption of what is unreasonable.
And *that* is you to a T.
He will never understand, not possible.
You cannot tell anything “global” from GISS data.
It is heavily contaminated by urban expansion and agenda “adjustments”
It is not relevant in any way to REAL global anything. !
” This is not unreasonable considering that NIST found a measurement uncertainty of ±1.8°C in monthly temperatures.”
That’s for one month at one station, with a third of the data missing. And you still fail to understand that it is not a “measurement uncertainty” in the way you keep meaning it. It is saying nothing about how accurate the instrument is. The uncertainty is coming from the day to day variability. It’s saying what range of temperatures could have produced the observed data purely by chance.
NIST assumed ZERO measurement uncertainty. If measurement uncertainty exists then it gets ADDED to the uncertainty attributed to the variability. Meaning the real world measurement uncertainty would be *MORE* than +/- 1.8C.
If you are talking about NIST TN 1900 E2 then understand that “the {Ei} capture three sources of uncertainty: natural variability of temperature from day to day, variability attributable to differences in the time of day when the thermometer was read, and the components of uncertainty associated with the calibration of the thermometer and with reading the scale inscribed on the thermometer.”
Why are you replying to a “contrarian”?
“and the components of uncertainty associated with the calibration of the thermometer”
You’ve been told at least a dozen times to actually read TN1900 before expounding on it. But you never have. It EXPLICITLY states the assumption that measurement uncertainty is insignificant – i.e. zero. By using Tmax the measurement uncertainty associated with time of observation uncertainty is also eliminated. Therefore TN1900, E2 does *NOT* capture the measurement uncertainty associated with the calibration of the thermometer or time of observation.
From the paragraph following the quote you provided:
“Assuming that the calibration uncertainty is negligible by comparison with the other uncertainty components, and that no other significant sources of uncertainty are in play, then the common end-point of several alternative analyses is a scaled and shifted Student’s t distribution as full characterization of the uncertainty associated with r. ”
You and bellman are cherry pickers, never bothering to read for comprehension or context.
First…you are contradicting your brother who said “This is not unreasonable considering that NIST found a measurement uncertainty of ±1.8°C in monthly temperatures.”
Second…you are deflecting and diverting away from the salient point Bellman was making which is that the uncertainty they computed in TN 1900 E2 includes components arising as a result of things beyond just measurement uncertainty.
Oh…and of course don’t forget that they…gasp…divided the standard deviation of the measurements of different things by sqrt(m) to compute the standard uncertainty.
Liar. You don’t understand the words you sling around with wild abandon.
Pardon me! I should have said the CALIBRATION UNCERTAINTY is zero.
I just gave you the quote from NIST TN1900 specifying what they considered. It was *NOT* instrument accuracy nor was it time of observation accuracy.
Please read for meaning: “no other significant sources of uncertainty are in play,”
By assuming that the data is random, with no systematic measurement uncertainty and no other significant sources of uncertainty in play, the precision with which the average value is calculated becomes the uncertainty of the mean.
Dividing by sqrt(n) means that the example treats the observed values as a sample of a population, assumed to have the same standard deviation as that population. It’s an assumption which *should* be justified in some manner. There is *NO* guarantee that the standard deviation of the sample is the same as that of the population. *IF* the observed values are considered to be the total population then there is no reason to divide by sqrt(n) – the mean is just the mean and the standard deviation of the mean is zero.
It’s why the uncertainty of the mean from sampling is a metric of SAMPLING ERROR; it is not a metric of the accuracy of the mean. Why do you think the standard uncertainty gets expanded?
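For readers following along, the E2-style recipe discussed in this thread reduces to a few lines – a sketch on invented data (see TN1900 itself for the real example and its stated assumptions):

```python
import math
from statistics import mean, stdev
from scipy.stats import t

# E2-style recipe on invented data: average the daily maximums, take
# u = s/sqrt(m), expand with a Student-t coverage factor at 95 %.
tmax = [24.1, 25.3, 23.8, 26.0, 25.1, 24.7, 26.4, 23.9,
        25.8, 24.4, 25.6, 26.1, 24.9, 25.2, 23.5, 25.9]   # hypothetical

m = len(tmax)
tau = mean(tmax)                    # estimate of the monthly measurand
u = stdev(tmax) / math.sqrt(m)      # standard uncertainty, s/sqrt(m)
k = t.ppf(0.975, df=m - 1)          # coverage factor, m-1 degrees of freedom
print(f"tau = {tau:.2f} C, U(95%) = +/- {k * u:.2f} C")
```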
“NIST assumed ZERO measurement uncertainty.”
Stop lying. You’ve been given the relevant quote enough times. They assume the calibration uncertainty is negligible compared to the daily variation, not that it is ZERO.
And again, you are deflecting from the point. It was Jim who quoted the ±1.8 as “measurement uncertainty”, and then used that to justify his estimates for global “measurement uncertainty”.
The ONLY uncertainty they considered was the variation in the data. The quote makes that quite clear. bdgwx was implying that the calibration uncertainty and the time of observation uncertainty were at play!
You can nitpick the words all you want. It doesn’t change the assumptions made in TN1900, Ex 2 – WHICH YOU ABSOLUTELY REFUSE TO LIST OUT!
“The ONLY uncertainty they considered was the variation in the data.”
No. They considered several sources of uncertainty, but only used the variance in the data. That variance includes random measurement errors, including from the resolution, but that is insignificant compared with the daily variance. What they assumed was negligible was uncertainty from the calibration.
But regardless, you are just violently agreeing with me. The ±1.8°C uncertainty is not coming from the instruments, it is coming from daily variance.
“TN1900, Ex 2 – WHICH YOU ABSOLUTELY REFUSE TO LIST OUT!”
You are either lying or suffering from severe memory loss. And shouting it out doesn’t make you look any better.
Assumptions made in TN1900, Ex 2 (off the top of my head):
- the calibration uncertainty is negligible compared with the other components;
- no other significant sources of uncertainty are in play;
- the daily values are independent draws from the same (Gaussian) distribution;
- the measurand τ is the expected value of that distribution.
That last one is the one I have most problems with. The example seems to contradict itself about what is being measured, τ.
First they say it’s defined as the average of the 31 daily maximums
But then define it as the expected value of the distribution the daily values come from.
It’s the second definition that is used when talking about the SEM as a measure of uncertainty, but I question if that makes sense when talking about an actual average of 31 daily values.
“No. They considered several sources of uncertainty”
They threw all other sources of uncertainty away – just like climate science does!
“What they assumed was negligible was uncertainty from the calibration.”
As usual, you are cherry picking. You totally ignored the phrase in the quote from TN1900 that says; “and that no other significant sources of uncertainty are in play,”
You missed at least several assumptions.
In other words all of the assumptions lead to the general meme that all measurement uncertainty is random, Gaussian, and cancels. Therefore the sampling uncertainty can be considered to be the measurement uncertainty.
“It’s the second definition that is used when talking about the SEM as a measure of uncertainty, but I question if that makes sense when talking about an actual average of 31 daily values.”
Then you must also question the use of the SEM as the measurement uncertainty for the data used by climate science.
If the meme “all measurement uncertainty is random, Gaussian, and cancels” doesn’t seem to apply in TN1900 then the meme doesn’t apply in climate science either.
Believing it applies in one case but not the other *is* cognitive dissonance.
Just to be clear, that quote came directly from NIST. If you feel like anything is being implied then that is the result of NIST’s statement, which I had no part in. I will say that I happen to think it is more than an implication, though. NIST spells it out directly, explicitly and literally. If you don’t think two of the sources of uncertainty specifically mentioned as being in play really are in play then I suggest you take that up with NIST. And don’t think the irony of you accusing me of refusing to list something out, when that is exactly what I did, went unnoticed.
“Just to be clear, that quote came directly from NIST. If you feel like anything is being implied then that is the result of NIST’s statement, which I had no part in. I will say that I happen to think it is more than an implication, though. NIST spells it out directly, explicitly and literally. If you don’t think two of the sources of uncertainty specifically mentioned as being in play really are in play then I suggest you take that up with NIST.”
You didn’t even read the quote I gave you from TN1900. It is from the paragraph following the one you quoted.
Again: ““Assuming that the calibration uncertainty is negligible by comparison with the other uncertainty components, and that no other significant sources of uncertainty are in play, “
I bolded and italicized the words so maybe you can read them for meaning.
NIST themselves stated that those other sources of uncertainty are not in play. That is *NOT* something I just made up.
If you have a problem with that then it is *YOU* that should take it up with NIST, not me.
Did they say the calibration uncertainty was zero?
Does the statement in the following paragraph contradict the statement in the previous paragraph?
I’m not the one that has a problem with it.
Your problems are legion.
“Did they say the calibration uncertainty was zero?”
Can you read?
“Assuming that the calibration uncertainty is negligible…”
You seem to have a problem with this statement. Take it up with NIST.
Ever heard of proxy data? Kobashi et al.’s three Greenland ice cores…?
The 1,000-plus papers in this glossary document proxy records of past cycles that were warmer than now throughout the Holocene era – and we now know the present is 1.5K warmer than the coldest point in 8,000 years? And the warming is not at a rate, over a range, or on a period different from those past cycles.
https://www.google.com/maps/d/u/0/viewer?mid=1akI_yGSUlO_qEvrmrIYv9kHknq4&ll=-3.81666561775622e-14%2C0&z=1
And when you combine all those proxy data points from all over the world you get something like this.
[Kaufman et al. 2020]
Oh look, another hockey stick.
Meaningless and deceptive.
Complete anti-statistics, driven by a wacky anti-human agenda.
There are proxies from all around the world showing the MWP was WARMER than now, and was GLOBAL.
Tree rings are a really bad proxy before there was sufficient CO2.
And slapping urban temperatures and fakery on the end just make them look even more idiotic.
All he does is gaslight people with the crap generated by the alarmist climate cabal.
Strange that this data does not agree with the NWS heating or cooling degree days.
Start with the link and look at both heating and cooling degree days. Both are basically FLAT.
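For anyone wanting to check a station themselves, this is the standard degree-day bookkeeping – a minimal sketch (the 65°F base is the NWS convention; the daily means are invented):

```python
# Standard degree-day bookkeeping: 65 F base (the NWS convention),
# daily mean temperatures invented for illustration.
BASE_F = 65.0
daily_means_f = [42.0, 55.5, 63.0, 68.5, 74.0, 81.5]

hdd = sum(max(0.0, BASE_F - t) for t in daily_means_f)   # heating degree days
cdd = sum(max(0.0, t - BASE_F) for t in daily_means_f)   # cooling degree days
print(f"HDD = {hdd:.1f}, CDD = {cdd:.1f}")
```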
You’ll find this for lots of locations around the globe for cooling degree days. I did this a couple of years ago for several stations on all the continents, e.g. North Africa, South Africa, northern South America, southern South America, etc.
Most of the locations showed flat or a very small increase. A few were down and a few were up significantly. The only important factor I could identify was geography and terrain for the various locations. It’s not obvious that any of the data analyses done on temperatures make any weighting attempt for geography or terrain. Yet the cooling degree day values for a station on the west side of a mountain range came out differently than on the east side.
Climate science today certainly leaves a LOT to be desired when it comes to physical science and its application. It seems to be dominated by statisticians and computer programmers who just see the numbers as numbers and not measurements of physical reality. The average of measurements of different things is a statistical descriptor and not a measurement itself. Complete statistical description of a data set requires providing at least the variance, kurtosis, and skew of the data along with the average. Yet you *never* see those statistical descriptors provided in *any* climate science literature!
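For what it’s worth, producing those descriptors takes only a few lines – a minimal sketch on made-up data (all values illustrative):

```python
import numpy as np
from scipy.stats import skew, kurtosis

# The fuller statistical description argued for above, on made-up temperatures.
temps = np.array([12.1, 14.3, 9.8, 21.0, 17.6, 25.2, 7.4, 19.9, 23.1, 15.5])

print(f"mean     = {temps.mean():.2f}")
print(f"variance = {temps.var(ddof=1):.2f}")   # sample variance
print(f"skew     = {skew(temps):.2f}")
print(f"kurtosis = {kurtosis(temps):.2f}")     # excess kurtosis by default
```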
“Of course temperatures go back at least a century before 1980.” I’m going to help you out here… “Recordings of temperatures go back at least a century before 1980.” This may upset you, as you are not used to accuracy.
It’s beyond a joke.
Two years ago it was 20C warmer – 40C.
But that wasn’t hotter, etc., than 2024.
Right.
There are more basket cases in the field of climate madness than anywhere else.
Yep – and they use the baskets to collect our money.
I complained to the BBC that the Ed Hawkins stripes only go back to 1850 on their stories – why didn’t they go back further in stories about the “warmest ever”? At the second attempt they asked Ed Hawkins, who said that reliable records only go back to 1850.
Can you put a line on that chart when human civilisation began?
It provides context.
And also can you tell us the solar output, say, half a billion years ago, the continental configuration and hence heat transport via ocean currents – not to mention the atmosphere’s aerosol content and orbital specifics? IOW: the graph in no way represents the Earth’s current biosphere. Just the reflexive ideological misconception (being kind) of CO2 being the only driver/feedback (as far as the science tells us) responsible for global temperatures. CO2, as the most significant non-condensing GHG, is that on the current Earth, its increase being exterior to the natural carbon cycle. But to claim the relationship holds back over geological ages is comical dissonance. Mind you – no surprise there. As if this backwater blog has any import as regards science peer review. Hey, but if it gives you an outlet for your existential anger/confusion – at least you’re not pointing an AR15 at Trump to exorcise it. You’re welcome, Mr Various.
And now Blandon rises to the bait with a dose of insane Marxist TDS.
“As if this backwater blog….”
Says the slime from the sewer. !
Can you provide any empirical evidence of warming by atmospheric CO2
You are just blethering rancid AGW mantra BS… as usual.
^ +1000 this
“As if this backwater blog.. “
Yet you keep turning up. Another one leading a sad life.
Mate in convict land reckons they better hurry up and find the missing heat or his power bills are doomed-
Polar outbreak hitting eastern Australia to bring most snowfall in two years across NSW over next 24 hours (msn.com)
“”Mate in convict land””
Bruce?
Nick Stokes – I note the warming slope to 1940 is the same as recently. How come? Can’t be CO2
Can you provide evidence for this?
Looks to me like the warming slope to 1940 is both slower and far, far shorter than the more recent one.
Any number of things could account for this. A reduction in aerosols during that period is the commonly reported explanation.
You are looking at agenda corrupted ex-data…
The only type you are interested in.
Before the scam adjustments, most “global (lol)” data fabrications showed strong warming from 1900-1940… this actually matches historic reports from places like the Arctic, as well.
Here’s a chart by Phil Jones. I hesitate to use this chart since Jones bastardized it. A real depiction of the temperatures would show the 1880s, the 1930s and 1998 on the same horizontal line, as they were all equally warm.
But Phil did preserve the warming trends since the Little Ice Age ended. As you can see all three warming periods warmed at the same magnitude.
And I think it’s dishonest to pin “hottest June” on the peak of a diminishing El Niño spike without explanation. Don’t you, Nick?
Most of the dangerous warming is occurring on the Greenland plateau in January. It is already up 10C in the past 70 years. Those few folk who live on the plateau are now coping with a dangerously warm minus 25C rather than the less dangerous minus 35C.
Australia has also warmed over the satellite era. July temperature up almost 4C south of the Tropic of Capricorn. December temperature over the same region down about 1C. But, on average, Australia has warmed a degree or two.
RickWill,

This is the graph for Australia. I have fitted the linear least squares line because everyone seems to expect it, but it is not really a good thing for the purists. Geoff S
Note that your r^2 value (0.1522) says that the linear regression only explains a little over 15% of the variance. That means that 85% of the variance cannot be predicted.
That is +0.19 C/decade.
So what?
So that’s ~ +2C per century warming in a system that is adapted to basically zero warming.
That’s ‘so what’.
Do you get it?
You don’t, do you?
Implicit extrapolation from linear regression
Huh?
Just another gibberish comment from the Luser. !
Yep!
What a moronic comment !
Anti-math extrapolation.
Australia is as subject to El Nino events as anywhere else.
COOLING from 1998 – 2016
Cooling from 2016 to start of 2023 even with EL Nino effect at the end.
Explain how CO2 has any part in this pattern , which shows COOLING most of the time.
Explain to us how there is any human causation.
“ in a system that is adapted to basically zero warming. “
What a load of b*llocks.
Does it really matter what the BBC say? As I have written previously, their head of news and current affairs deemed any words from deniers concerning climate change to be blasphemy and would not allow them to be aired. In the meantime they populated all their output, from soaps to nature documentaries, with insidious tales of weather. The first objective of a putsch is to take control of the broadcast medium, and the BBC methodology is a surprisingly stark example.
So, here we are at the source of ‘No Platforming’. If you don’t want your view challenged or evaluated, just refuse to invite it in; it has been done before, as Martin Luther and Galileo would have attested. It makes you wonder what any reference to ‘being in the real world’ actually embraces. If a view is unopposed because its opponents are ‘imprisoned’, shut away, have their tongues ripped out, and you proclaim unanimity amongst the conformers, the ratified ones, then the world is as you proclaim it, and all the facts and all the research is just noise – incoherent clutter from charlatans and nutters to be ignored or slanderously misrepresented. Do we ever stop and consider that if such corporate tactics can be employed against climate science, what other fundamental issues are being subjected to the same treatment?
“”if such corporate tactics can be employed against climate science what other fundamental issues are being subjected to the same treatment?””
You can start with issues like race, gender and go all the way up to geopolitical chess.
Apparently David Craig looks out of the window every morning and concludes that the weather in his locality must be replicated across the entire globe.
You have absolutely ZERO understanding about “global” anything.
What real people see from their window is FAR MORE HONEST than anything GISS et al produce.
NailGun weighs in, the rest of the trendologists can’t be far behind.
Prediction confirmed.
Do you look out of the window when its raining and deduce that globally it isn’t raining?
How many distinct climates are there? Quite a few. Forget the silly notion of a global climate. We can safely say that there are different climates, e.g. tropical, temperate, polar, etc.
Averaging is pure nonsense.
Different places have different temperatures (and weather, which must be news to David Craig). The use of anomalies removes these differences.
The fact that you don’t believe this to be possible amounts to nothing more than an argument from personal incredulity.
I recall you scorched in, what was it? Oh yes, 0.04C.
There is no global climate and no global temperature.
They are very much mumbo jumbo constructs.
A mumbo jumbo climate religion based on sacred modelling – in the Temples of Syrinx.
Eh?
No monkey, it amounts to people who have a far better understanding of statistics and mathematics than you will ever be capable of…
… stating the facts.
Just providing the mean of mid-range temperatures is not very informative. Rarely is the standard deviation provided. While NASA provides global temperature estimates with an assumed precision of +/-0.005 deg C, the Empirical Rule suggests that, based on range, the standard deviation for global temperatures is several tens of degrees.
Anybody can calculate the standard deviation of any data set using longhand (which is boring) or Excel’s STDEV function. You have to also adjust for auto-correlation, especially in the monthly data.
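The autocorrelation adjustment mentioned above can be sketched in a few lines – synthetic data, and the common lag-1 rule of thumb for the effective sample size assumed:

```python
import numpy as np

# Synthetic autocorrelated series. A common rule of thumb shrinks the
# effective sample size by the lag-1 autocorrelation:
#   n_eff = n * (1 - r1) / (1 + r1)
rng = np.random.default_rng(0)
x = 0.1 * np.cumsum(rng.normal(0, 1, 500)) + rng.normal(0, 1, 500)

s = x.std(ddof=1)                        # plain sample standard deviation
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]    # lag-1 autocorrelation
n_eff = len(x) * (1 - r1) / (1 + r1)     # effective sample size

print(f"s = {s:.2f}, r1 = {r1:.2f}, n = {len(x)}, n_eff = {n_eff:.0f}")
```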
You are at liberty to rebut all the peer-reviewed papers published in support of the various and plentiful data sets.
Let us know how you get on.
I have already done so. The problem, as Clyde says, is that climate science never does!
What’s being rebutted is the quoting of uncertainty in the hundredths digit, which is just plain idiotic when the variance is considered. Range is a metric for variance. A large range means a large variance. A large variance means a wide uncertainty interval for the value of the average.
The large range in temperatures means a large variance and, therefore, a large uncertainty in the average. Sampling error is *NOT* a substitute for variance or measurement uncertainty, no matter how much climate science wishes it to be so.
From Hansen’s paper on GISS uncertainty: “The random uncertainties can be significant for a single station but comprise a very small amount of the global LSAT uncertainty to the extent that they are independent and randomly distributed.”
It’s utter hogwash. It’s a total ignorance of metrology principles. It’s a total refutation of the GUM. The measurement uncertainty is neither independent nor randomly distributed. You’ve been given the reason why multiple times. Dr. Pat Frank has even published a paper on it for LIG thermometers.
Fungal proves that it is nothing more than a monkey shown how to use Excel..
but with absolutely ZERO mathematical comprehension of what it is actually doing.
It’s actually on the order of ±0.05 C and ±0.15 C for the later and earlier parts of the record respectively. [Lenssen et al. 2019]
50 mK? HAHAHAHAHAH
Still bullshit, but you will never understand why.
The big difference between the global average temperature and local weather is only one of them is real.
Again, an argument from personal incredulity.
Responded to by an argument from personal stupidity.
I’d call it…. a profound and deeply held belief
Look, your claim that the concept of a global average temperature is one you find hard to accept is your opinion, to which you are entitled.
The fact is that all the various meteorological and scientific agencies that produce these data series, all of which are in close agreement with one another, are taken seriously by the world’s scientific community.
This very site updates the UAH data set on a monthly basis; so presumably they are in agreement with UAH that it is possible to derive a reasonable estimate of a global average temperature.
You might not believe it, personally; but that doesn’t matter to anyone other than yourself.
a global average temperature
Is completely meaningless.
Why? Tell that to someone freezing in ‘abnormally cold temperatures’ and then tell that to someone who is ‘baking’ in abnormally high temperatures simultaneously somewhere else in the world.
I could do with some US temperatures, right now. Northern Ireland? That’s up to you.
Thank you for your opinion.
See the scientists.
Tell Roy Spencer and WUWT that they are barking up the wrong tree.
Better still, read the many peer reviewed papers that accompany these data sets and rebut them, in your own time.
Alternatively, come to a ‘skeptic’ support group and plead about it all being “meaningless”. That’ll help.
Is that the best you can do?
Global temperatures fabricated from urban surface sites are totally meaningless as a measure of “global” change over time.
The fact that you don’t understand that fact shows just how little functional brain you have.
“Look, your claim that the concept of a global average temperature is one you find hard to accept is your opinion, to which you are entitled.”
It’s not an opinion. It is science fact. Temperature is an intensive property. Intensive properties are not spatially homogenous. There is no “average” intensive property. The value of an intensive property is what you measure at a point in space. The Earth doesn’t have an “average” temperature, especially not one you can *measure*. If it doesn’t have an “average” temperature then an “average” global anomaly is also meaningless.
“The fact is that all the various meteorological and scientific agencies that produce these data series, all of which are in close agreement with one another, are taken seriously by the world’s scientific community.”
What data series? None of them do a valid calculation of measurement uncertainty. Meaning the final answer you get from all of them is “we don’t know”. Which, if actually admitted, would dry up most funding for “global warming”.
Thanks for taking the time to respond so I don’t have to.
ToeFungalNail, BellEnd, and Stokes seem unable to grasp the essential physical difference between an extensive variable and an intensive variable. IQ problem?
Artificial Stupidity?
Willful ignorance. The difference has been explained to all of them multiple times. There are none so blind as those who refuse to see.
“ToeFungalNail, BellEnd, and Stokes”
Wow, you can feel the scat’s Swiftian wit run out of steam in real time.
I think we all understand the difference between extensive and intensive properties. It’s just some here have a misapprehension that it’s impossible to have an average of intensive properties. This is just wrong as has been explained multiple times.
And if it were true that there ain’t no such thing as an average temperature, much of WUWT’s output would have to be labeled as wrong.
“I think we all understand the difference between extensive and intensive properties.”
No, you don’t. If you did you would know that summing temperatures, an intensive property, is garbage.
I’ll ask it again since no one has seen fit to answer.
If I have an object at 10C and a second object at 20C and I combine them do I wind up with a temperature of 30C? If not, then how can there be an “average” value.
If I have an object with a density of 2 kg/m^3 and another object with a density of 4 kg/m^3 and I combine them do I get a density of 6 kg/m^3? If not, then how can there be an “average” value for the density?
You continue to live in statistical world where calculations don’t have to have any real world meaning. The fact that you can do the calculation is all you are concerned about.
WUWT is using what climate science provides. That doesn’t mean WUWT is wrong, it means climate science is wrong.
The fact that society once believed the earth was flat didn’t make Columbus wrong. It meant society was wrong.
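For concreteness on the 10C-plus-20C question above, here is a minimal sketch of what combining two bodies actually does – a heat-capacity-weighted mixing temperature (equal specific heats assumed, phase changes and losses ignored):

```python
# Combining two bodies mixes toward a heat-capacity-weighted mean,
# not a sum. Masses, specific heats and temperatures are illustrative.
m1, c1, t1 = 2.0, 4186.0, 10.0   # kg, J/(kg.K), C
m2, c2, t2 = 1.0, 4186.0, 20.0

t_mix = (m1 * c1 * t1 + m2 * c2 * t2) / (m1 * c1 + m2 * c2)
print(f"T_mix = {t_mix:.2f} C (not {t1 + t2:.0f} C)")   # 13.33 C, not 30 C
```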
Ironically, in recent times there has been a resurgence of flat earth believers, even in supposedly otherwise intelligent people.
One can ‘define’ the average as the arithmetic mean of all samples, recognizing that the sampling schema can result in different answers, negating any claim of high precision and accuracy. The real problem is alarmists assigning a meaning to a global mean calculation that is different from the definition. What one is actually calculating is the probability distribution of the temperature obtained from a new measurement. For that, the standard deviation is essential, but it is rarely reported.
Statistics don’t lie, but liars use statistics.
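If one actually wants the probability of the temperature obtained from a new measurement, the textbook tool is a prediction interval, which uses the standard deviation directly – a minimal sketch on invented data:

```python
import math
from statistics import mean, stdev
from scipy.stats import t

# Prediction interval for a NEW measurement: uses s*sqrt(1 + 1/n), so it
# stays wide however many samples you have. Data invented for illustration.
x = [14.2, 9.8, 21.5, 17.1, 12.9, 24.3, 8.7, 19.6]

n, xbar, s = len(x), mean(x), stdev(x)
half = t.ppf(0.975, df=n - 1) * s * math.sqrt(1 + 1 / n)
print(f"mean = {xbar:.1f}, 95% PI = [{xbar - half:.1f}, {xbar + half:.1f}]")
```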
Then you will have no problem rebutting or debunking all the massive scientific literature that supports it.
It’s published and free to download.
Go do it! Go, go buffalo!
Don’t be put off by the fact that none of these supporting papers have been rebutted; despite desperate attempts to do so.
I’ll hold your coat – while we wait……………….
I already debunked it. You can’t average intensive properties and get a meaningful value.
Go to the many publishers of the peer reviewed papers supporting the data sets and inform them of your great wisdom.
Why don’t you ever point this out to Spencer or WUWT every time they publish a global average? Why do you keep insisting the pause is a real thing when it’s based on these meaningless averages?
Do you think Spencer doesn’t read WUWT?
How many times has he pulled this “why don’t you go ask Spencer” canard?
And how many times have you chickened out of explaining to Spencer why you think his life’s work is a fraud?
I read WUWT but you still insist on telling me how UAH is meaningless. I’m just wondering why you don’t point that out to the person who actually produces the useless information, or complain to WUWT for wasting our time repeating it.
Fungal has too little comprehension or education to understand the term “intensive properties”.
Falling back on your IGNORANCE, refusing to even attempt to understand the mathematical reality.
We expect nothing more from you and your anti-brain comrades.
Fallacy alert: arguing with the mob.
The ‘mob’ being?
I didn’t claim that I found it hard to accept. I claimed that it is absolutely meaningless. And that’s not an opinion, that’s a fact. As Tim Gorman notes below (or above, depending on where this comment shows up) you apparently have no understanding or concept of what an intensive property is.
Just because it is meaningless to you doesn’t make it meaningless to everyone else.
It might be in your best interest to be skeptical of the Gormans’ positions. Relevant to this discussion, they think W.m-2 is extensive. They also appear to have no problem using averages of other intensive properties. I’ll ask you. Do you think W.m-2 is extensive or intensive? Do you think it is okay to take averages of other intensive properties? Are you okay with cooling degree days (CDD) and heating degree days (HDD) even though they involve computations that are mathematically equivalent to averaging temperature?
bx-whatever dips into his enemies’ files again, so predictable.
PS–you forgot to demand another “global average” link here, HTH.
W.m-2 is a measure of the power per unit area, or surface power density. But you’re a scientist and I didn’t need to tell you that. Since surface power density can be determined or affected by a variety of things, I don’t think it is an intensive property. But since you don’t specify what exactly it’s supposed to be an intensive property of, your question is difficult to answer.
Seriously, I do like to learn new things and am not averse to being corrected if I’m wrong. Can you explain to me what it is an intensive property of and why?
W.m-2 is intensive because it is independent of the size of the system. One simple test is to partition the system and see if it changes the value of the property. A flux (in W.m-2) does not depend on the size of the system nor does partitioning the system change the flux. For example, the Sun has a radiant exitance of 6.3e7 W.m-2. If we look at only 1/2 of the Sun it is still 6.3e7 W.m-2. If we look at 1/4 it is still 6.3e7 W.m-2 and so on. No matter how you partition the Sun the radiant exitance of the surface you are considering is still 6.3e7 W.m-2. The wikipedia article provides a good introduction to the concepts of extensive and intensive.
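The partition test described above can be written out explicitly – a minimal sketch assuming an ideal blackbody, nothing more:

```python
# The partition test, written out for an ideal blackbody: exitance (W/m^2)
# from the Stefan-Boltzmann law is unchanged by partitioning the surface,
# while total emitted power (W) scales with the area considered.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2.K^4)
T = 5772.0               # approximate solar effective temperature, K

exitance = SIGMA * T**4                  # ~6.3e7 W/m^2, independent of area
for area in (1.0, 0.5, 0.25):            # "partitions" of a 1 m^2 patch
    print(f"area = {area:.2f} m^2: exitance = {exitance:.2e} W/m^2, "
          f"power = {exitance * area:.2e} W")
```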
“W.m-2 is intensive because it is independent of the size of the system.”
That’s total malarky! W/m^2 ADD. Double the amount of substance radiating and you will get twice the W/m^2!
“A flux (in W.m-2) does not depend on the size of the system nor does partitioning the system change the flux.”
Again, total and utter malarky! You apparently don’t even have any experience in photography either! And partitioning the system *will* change the amount of flux encountered. Cut the capture area of a microwave dish in half, i.e. partition the system, and you will capture less flux!
“For example, the Sun “
The sun is far enough away that its light can be considered to be a parallel beam. The flux is the same throughout the beam. If you cut the intercepting area in half, e.g. a partial eclipse, you *will* capture fewer total watts. W/m^2 is a RATE, not a fundamental property.
Tell me again how two light sources don’t provide twice the light of one!
An honest thanks for your response and explanation. I did already skim the wikipedia article on extensive and intensive properties. I’ll read your response more carefully and see if I have any questions.
Based on a quick scan, I’ll agree that solar insolation (or radiance, if that is the correct term) is an intensive property of the sun; I’m not sure how that transfers to the earth.
There are several examples involving the Earth. NASA’s Earth Fact Sheet lists several intensive properties of Earth that are averages: Mean density, albedo, TSI, etc.
Where in your reference does it say TSI is an intensive property?
What would the TSI be if there were two suns in the sky? Would the radiation received be additive?
Density is intensive because densities don’t add. You can’t take a soil sample of 1 kg/m^3 and a second sample of 2 kg/m^3 and add them together to get a density of 3 kg/m^3 because you don’t know their masses!
But you *can* take a radiation flux of 2 W/m^2 from one source and a radiation flux of 3 W/m^2 and add them to get 5 W/m^2 at the same point in space!
The IUPAC vocabulary. The perpendicular flux at Earth’s TOA is not dependent on the extent of the surface area of Earth’s TOA. It’s always the same regardless of what it is or how it changes. If we were to partition the Earth’s TOA surface area, each partition would have a flux of around ~1360 W.m-2 regardless of the size of the partition. Therefore it is an intensive property. BTW…this is why the Moon has the same TSI. The extent of the surface area of the planetary body at 1 AU does not modulate the TSI.
More. Probably close to 2x as much; it’s complicated due to potential shadowing issues. Regardless, that is irrelevant because the increase here is due not to the extent of Earth’s TOA surface area changing, but to the extent of the energy source(s) changing. It might click if you consider that the same thing happens if the temperature of the Sun increases to 6870 K. If that doesn’t do it for you then consider density, which you accept as intensive. If you fill a container with water the density changes. It changes not because of the partitioning of the container, but because of its composition. Remember, the ratio of two extensive properties, like flux or density, is an intensive property, with the denominator usually serving as the reference system.
You ran away from your conservation of energy nonsense.
As Tim told you (and you ignored), the sun is essentially a point source with negligible divergence. But you don’t understand what this means.
“The IUPAC vocabulary.”
That is a definition of what an intensive property is. It does *NOT* say that TSI is an intensive property.
“It’s always the same regardless of what it is or how it changes.”
That’s because you have only one source! If you measure the density of one object then that is its density. That does *NOT* mean that density is an extensive property.
“More. Probably close to 2x as much”
If you can add it then it is an EXTENSIVE property!
QED!
Incredible.
Like talking to an Olmec head.
First…The IUPAC definition does NOT say that a property is extensive if its numerator is extensive. What it says is that the property is extensive if it depends on the extent of the system.
Second…By your definition density is extensive because I can add the water mass I put in a container which changes its density. Obviously this is absurd since density is intensive.
Third…You’re missing the salient concept of extensive and intensive. That point is regarding whether the metric is dependent on the extent of the system the property is associated with. For density that is the volume (the m^3 part) in the ratio. It is not the mass (the kg part). Similarly for radiant flux the system is the surface area (the m^2 part) in the ratio. It is not the power (the W part).
Fourth…Again, the simple test is to ask whether partitioning the system (the m^3 part for density or m^2 part for radiant flux) results in a different value. It doesn’t. Therefore those metrics are intensive.
Fifth…You can easily test whether radiant flux is intensive in your own home. From a distance aim an IR thermometer at a surface with a homogenous temperature like a hard floor. The instrument will observe the radiant flux (in W.m-2) and convert it to a temperature via the SB law. Record the temperature. Now move closer to the surface so that the instrument is now observing a subset or partition of the original surface. Record the temperature. You will get the same value indicating that the radiant flux in the SB law was also the same. This proves that radiant flux is intensive.
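The SB inversion mentioned in point five, in its idealized textbook form – a sketch only, since (as others note below) real handheld instruments are narrow-band and apply emissivity and band corrections:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2.K^4)

def temp_from_exitance(m_w_per_m2: float, emissivity: float = 1.0) -> float:
    """Invert M = eps * sigma * T^4 for T in kelvin (idealized, broadband)."""
    return (m_w_per_m2 / (emissivity * SIGMA)) ** 0.25

# ~418 W/m^2 corresponds to roughly 293 K (about 20 C) for emissivity 1.
print(f"{temp_from_exitance(418.0):.1f} K")
```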
“First…The IUPAC definition does NOT say that a property is extensive if its numerator is extensive. What it says is that the property is extensive if it depends on the extent of the system.”
I didn’t say that it did. I said if you can add the values it is an extensive property. You can add radiant flux. You admitted that when you agreed that two suns in the sky will give twice the radiant flux of one sun.
“By your definition density is extensive because I can add the water mass I put in a container which changes its density. Obviously this is absurd since density is intensive.”
You are not changing the density of the water. You are changing the mass of the water. If you could put two gallons of water in a one gallon bucket *then* you would be changing the density of the water! Be sure and take a video when you try that and post it somewhere. I’m sure lots of people would enjoy watching it!
“Third…You’re missing the salient concept of extensive and intensive.”
*I* am not missing anything. The difference is simple. All you are doing is flapping your gums trying to prove black is white. If you can add the values then it is an extensive property. If you can’t it is an intensive property. It is truly just that simple, a six year old could grasp the concept. Yet you can’t, or WON’T.
“or m^2 part for radiant flux) results in a different value. It doesn’t.”
You admitted that it does. Or are you now going to say that having two suns in the sky won’t increase the amount of radiant flux hitting the earth?
You are mixing up receiver and source as well. Cutting a receiver area in half will result in seeing 1/2 the amount of joules per second that are being intercepted. Why do you think the dish at Arecibo was so big? So it could intercept more photons! The number of photons are related to the amount of joules per second being intercepted, i.e. the radiant flux! If the amount of radiant flux didn’t change with surface area of the receiver then the dish at Arecibo could have been one inch in diameter!
Think he can point his IR meter at the sun and get an accurate measurement of the solar irradiance?
How he came up with these goofy ideas about thermodynamics is incomprehensible.
The other half of this mystery is that his trendology compadres think this gobbledygook he writes reflects reality and they high-five him.
Don’t let him gaslight you. If there were two suns what would be total energy received at the surface of the earth? You can *add* the energy from one to the energy from the other and get the total. That means it is an extensive property.
It’s ironic considering I’m not the one dismissing the 1st law of thermodynamics here.
Energy and radiant flux are not the same thing. Energy has units of joules. Radiant flux has units of W.m-2. Partitioning the extent of a surface (the m-2 part) does not change the radiant flux. Therefore it is an intensive property. Energy being extensive does not make radiant flux extensive. That’s not how it works. I will again point you to the IUPAC definition.
You don’t have Clue #1 about conservation of energy nor irradiance.
“Energy and radiant flux are not the same thing. Energy has units of joules. Radiant flux has units of W.m-2″
What in Pete’s name do you think a Watt is?
Why do you always insist on coming on here and lecturing people about things you have absolutely no understanding of?
In essence you are saying that if a property of an object is homogenous then that property is an intensive property.
That’s total garbage. The *real* definition of intensive and extensive is whether they are additive or not. Mass is extensive because it can be added. Radiation is extensive because it can be added. Density is *intensive* because it can’t be added. Temperature is intensive because it can’t be added.
1 kg + 2 kg = 3 kg; 1 W/m^2 + 1 W/m^2 = 2 W/m^2
2 kg/m^3 + 3 kg/m^3 ≠ 5 kg/m^3; 2C + 3C ≠ 5C
A watt (W) has SI units of J/s.
A radiant flux (W.m-2) has SI units of J/s.m-2.
They are clearly different.
No I’m not. What I’m saying is that if a property is independent of the extent of the system then it is intensive. You are challenging that. That’s why we are going back and forth here.
Not according to IUPAC. Note that all of the other sources I checked use verbiage that is consistent with the IUPAC verbiage.
“A watt (W) has SI units of J/s.
A radiant flux (W.m-2) has SI units of J/s.m-2.
They are clearly different.”
As usual, you TOTALLY missed the point. Just because two objects are emitting green light it does *NOT* mean they are emitting the same number of joules per second! It is the number of joules per second emitted that determines the number of joules per second hitting an area – and therefore the flux intensity at that point.
“What I’m saying is that if a property is independent of the extent of the system then it is intensive.”
But the number of joules being emitted per unit time *IS* dependent on the extent of the system! The number of joules being emitted per second is dependent on the number of electrons emitting. Cut the number of electrons in half and you cut the joules per second in half! Therefore the number joules per second hitting an area is cut in half!
Jeesh! A first grader could figure this one out!
And the response of his Flukemeter to a green LED is … zero.
he won’t understand that!
“Not according to IUPAC. Note that all of the other sources I checked use verbiage that is consistent with the IUPAC verbiage.”
The issue isn’t what they say. The issue is your reading comprehension and knowledge of basic physics!
An extensive property is one you can add by combining two samples.
W/m^2 is a measure of radiative flux, e.g. light from a light bulb. Two light bulbs (that are the same) will give you twice the radiative flux. Simple addition. Just try it. Get yourself a light meter and two light bulbs and measure what you get with one turned on and measure what you get with both turned on!
Temperature is an *intensive* property. You can’t add the temperature of one object with the temperature of a second object to get a total temperature.
EXACTLY what intensive properties do you believe I think it is OK to average? Be specific!
Degree-days are NOT the same as averaging temperatures. Degree-days are the area under a curve. The area under a curve *is* an extensive property. You *can* add areas to get a total area.
Your problem is and has always been a total lack of real world experience in the physical sciences. Just like bellman, Stokes, and all the others that believe that temperature is an extensive property that can be added. You are a blackboard mathematician that can’t even understand what measurement uncertainty *is*. It’s why you always assume it is random, Gaussian, and cancels.
That is patently false and a violation of the 1LOT.
This has nothing to do with distance away from the Sun. The Sun’s radiant exitance is 6.3e7 W.m-2 regardless of how far away you are.
Similarly if you place a flat surface in space at 1 AU it will receive 1360 W.m-2 over any partition of that surface.
No they won’t. Again, that would be a violation of the 1LOT. I did try it. Consider a 50 W bulb with a surface area of 10 cm2. That is 5 W.cm-2. In one second that would be 5 W.cm-2 * 10 cm2 * 1 s = 50 J. Or, using the 1LOT directly from the output rating, it is 50 W * 1 s = 50 J. Now let’s consider two bulbs and use your erroneous addition rule for fluxes. The total flux would be 5 W.cm-2 + 5 W.cm-2 = 10 W.cm-2. Then we have 10 W.cm-2 * (10 cm2 + 10 cm2) * 1 s = 200 J. But if we use the 1LOT directly from the output ratings we get (50 W * 1 s) + (50 W * 1 s) = 100 J. 200 J does not equal 100 J. See the problem?
The average soil moisture represented by a pixel on the graph I linked to.
I didn’t say it was the same. I said “they involve computations that are mathematically equivalent to averaging temperature”.
Review the mean value theorem.
The great expert on thermodynamics speaketh:
…which only states that energy is conserved, but can be converted into different forms.
BZZZZZZZT
F
“That is patently false and violation of the 1LOT.”
You wouldn’t know the 1LOT if it bit you on the butt!
“No they won’t. Again, that would be a violation of the 1LOT. I did try it.”
I have no idea what you tried. But I *KNOW* you have never looked at an array of LED flashlights in the hardware store. They are usually grouped by lumen capacity – a measure of radiant flux. They add LEDs to get more lumens out of the flashlight. A 40 lumen flashlight may only have one LED. An 800 lumen flashlight will have multiple LEDs OF THE VERY SAME TYPE. Their outputs ADD! Radiant flux is an extensive property.
The flashlight I have here at the desk has 8 LEDs in it. The one I keep by the bed has 4 LEDs. Guess which one is brighter, i.e. has the higher output!
“The average soil moisture represented by a pixel on the graph I linked to.”
Think about this for a minute. I have 3 grams of water in a cubic foot at Point A. I add to this a cubic foot of soil from Point B that has 4 grams of water in it.
What do I have for total water? 7 grams! A straight addition of an extensive property. The average for the two samples is 3.5 grams.
Now I have a cubic foot of soil at 10C at Point A and a cubic foot of water at 20C at Point B. I add the two together. Do I get 30C for a total temperature?
Your inability to understand basic physical science seems to have no limit.
“I said “they involve computations that are mathematically equivalent to averaging temperature”.”
They are *NOT* the same thing. Calculating the two in the same manner means nothing physically. Area under a curve is an extensive value; temperature is an intensive value. You can add extensive values. You can’t add intensive values.
I can add a gallon of water to a gallon of water and get two gallons. I can’t add 10C to 20C and get 30C – unless you are a blackboard mathematician or a climate scientist.
I tried adding fluxes from different surfaces. I checked the result to see if it was consistent with the 1LOT. It was not.
Lumens is not a measure of radiant flux. The SI units of a lumen is cd.sr. The SI units of radiant flux is W.m-2. Those are two completely different things. Lumens is extensive. Radiant flux is intensive.
Says the guy who erroneously thinks the values in the graph he posted have units of grams, and that one of the inputs into the model that is used to create it is an average temperature.
Yet your preferred method of HDD/CDD does just that.
You don’t know WTF you are doing.
This has NOTHING to do with conservation of energy.
But it is consistent with your whacko ideas about heat transfer.
It does because the 1LOT can be used to show that radiant flux cannot be extensive because if it were then adding values of radiant flux would violate the 1LOT as I demonstrated above.
You are right in that my ideas about heat transfer are consistent with the 1LOT. However, I do have to protest implying that the 1LOT is “whacko”.
BTW…you can easily verify that radiant flux is intensive by using a handheld IR instrument. From a distant vantage point aim the instrument at a surface with a homogenous temperature. Observe and record the temperature reading (which is calculated from the SB law). Now walk closer to the surface so that the instrument is observing a subset of the surface. In this manner you are partitioning the extent of the system that the instrument is observing. Record the temperature and compare it to the previous reading. If they are the same then you know that the radiant flux is the same.
Bullshit. Total bullshit, pure and simple.
Get some real science and engineering training, PDQ.
Oh wait, Olmec heads can’t learn. My bad.
Then it should be easy for you to do the experiment showing that an IR thermometer will report a lower temperature (and thus lower radiant exitance in the SB law equation) the closer you get to a surface. Give it a try. Report back what you observed.
Free clue, boson, an IR thermometer is not a wide-band radiometer.
You don’t know WTF you are doing.
He doesn’t even understand that an IR meter is reading total energy captured and not flux intensity. His ignorance of actual physical reality is almost total. He’s a blackboard mathematician.
Yep. And if he steps backward, the numbers will change because it is now looking at other objects.
You *really* have no idea what you are talking about. Most IR thermometers have a “distance to spot” limitation. When you are far away from what you are measuring you are capturing radiation from the area surrounding the target as well as from the target. That’s why you get a higher reading from a longer distance away and a lower reading up close. The meter is reading total energy captured and not just the flux intensity.
You haven’t gotten ANYTHING right on this subject. All you are doing is continuing to demonstrate your lack of knowledge of physical science.
STOP DIGGING!
“Luminous flux differs from power (radiant flux) in that radiant flux includes all electromagnetic waves emitted, while luminous flux is weighted according to a model (a “luminosity function“) of the human eye’s sensitivity to various wavelengths; this weighting is standardized by the CIE and ISO.”
Oops, another F.
Your own source says luminous flux is different from radiant flux. Luminous flux is extensive because it is candelas multiplied by steradians. Luminous intensity is intensive because it is lumens per steradian. Again, the simple test to determine if a metric is extensive or intensive is to ask whether partitioning the extent of the system yields a different value. For lumens the answer is yes. Partition a light-emitting surface into 2 halves and each half will emit 1/2 the lumens. Therefore it is extensive. For radiant flux the answer is no. Partition the same light-emitting surface into 2 halves and each half will emit the same W.m-2 as the whole. Therefore it is intensive.
You really are an Olmec head.
It is weighted according to the optical response of the human eye. Do you understand what this means? No.
Take an LED flashlight, shine it on a surface. Take a second flashlight, repeat.
The irradiance has DOUBLED.
Clown.
He simply can’t figure this one out. He has apparently lived in a bubble his entire life. He’s obviously never broken open different light bulbs to see their filaments. There is a *reason* why a 100-watt floodlight puts out more light flux than a 5-watt refrigerator bulb. He thinks you can cut the filament in the 100-watt bulb in half and still get the same light flux out of it!
Let’s be precise. What I think and what is proven by countless experiments is that if you point a thermopile at say a 1 m^2 surface with a homogenous temperature it will report a specific flux in W.m-2. If you then zoom in the radiometer so that it is observing a subset or partition of that same surface it will still report the same flux in W.m-2 as it did when it was observing the whole. Do the experiment and prove this out for yourself.
ZMONG!
Do you really think the thermal radiation from your mythical “surface” is collimated?
1/r^2 is a killer.
And how do you “zoom in” a thermopile radiometer?
Another F
He doesn’t even know what is being measured!
Nope!
No, but irrelevant.
You bring it closer to the surface. My Fluke 62 MAX has two lasers that tell you the diameter of the circle being observed so that you know which surface area is being measured. BTW…the reported temperature is…gasp…the spatial average so this is bound to trigger you as well.
The divergence absolutely is relevant.
This Flukemeter is not a radiometer, you twit.
From fluke.com, 62 MAX:
———————
Accuracy:
≥0 °C: ±1.5 °C or ±1.5 % of reading, whichever is greater
≥ -10 °C to <0 °C: ±2 °C
< -10 °C: ±3 °C
Temperature Coefficient: ±0.1 °C/°C or ±0.1 %/°C of reading (whichever is greater)
Spectral Response: 8 to 14 microns
———————
This is a narrow-band device, you CANNOT use it as a radiometer (unless you are completely unhinged).
Oh, and check how Fluke is able to get the accuracy down to the mK range—NOT.
He won’t understand a thing you have said!
Unfreaking believable!
You *can* average extensive properties.
If the surface area changes the reading then you are looking at an EXTENSIVE property!
If the irradiance over the surface is nonuniform, it can be averaged to get the total incident power! Or the average irradiance!
A thermopile measures TEMPERATURE DIFFERENCES, not radiant flux.
The thermopile will read the same thing because TEMPERATURE is an intensive property.
A photocell is a form of a quantum detector that measures photons hitting a surface. Cut the number of photons emitted by cutting the emitting source in half and you will see a lower reading. That shows that it is measuring an extensive property.
Pyrometers are, again, temperature measuring devices. They measure the temperature based on the frequency of the emitted radiation, not on the amount of it. Thus it is measuring an intensive property.
When are you going to stop digging your hole ever deeper? You know nothing of physical science as you are amply demonstrating here. It’s pretty apparent you are cherry picking stuff off the internet hoping something you post will make sense. You have failed so far!
Nope—he thinks he can measure irradiance with an IR thermometer.
And he has the gall to blame the conservation of energy for these whacky ideas.
yep. He doesn’t know that an IR thermometer works on the temperature of the source, i.e. the frequency being generated, instead of the amount of radiation it is generating.
Yet another absurd claim. IR thermometers work by calculating the temperature using the SB law and the radiant exitance of the surface in W.m-2. There is no spectrometer to even observe the frequency in an IR thermometer.
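The Stefan-Boltzmann inversion just described can be sketched in a few lines, assuming an idealized broadband gray body with a known emissivity. Real instruments add band-limited detectors and calibration curves; this is only the textbook relation.
——————————
# Sketch: infer temperature from radiant exitance via the SB law,
# T = (M / (emissivity * sigma))^(1/4). Idealized, broadband, gray body.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_from_exitance(M: float, emissivity: float = 0.95) -> float:
    """Temperature (K) of a gray body with radiant exitance M (W/m^2)."""
    return (M / (emissivity * SIGMA)) ** 0.25

print(temperature_from_exitance(418.0))  # ~297 K, roughly room temperature
——————————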
Liar—the detector in your Flukemeter is in essence a “spectrometer” on the basis of its narrow wavelength response range.
I don’t just think it. I know I can do it because I’ve done it many times. Lots of people have done it.
The ONLY way to get a valid number is to know the spectral response of the IR thingie, AND the spectral irradiance of the source.
You don’t know either.
Another F
Plus he has no concept of irradiance varying with distance from a source, nor any basic knowledge of spectral irradiance.
Yet here he is trying to tell experienced professionals they don’t understand radiometry, and he does.
What a joke.
He’s a cherry picker when it comes to physical science. He’s frantically searching the internet for cherries he thinks confirms what he is saying. He has no basic understanding of the concepts.
None.
Finds something he likes and plugs in, problem solved.
“Your own source says luminous flux is different from radiant flux.”
Do you see the operative word in that sentence? The word FLUX?
“Partition the same light emitting surface into 2 halves and each half will emit the same W.m-2 as the whole. Therefore it is intensive.”
Really? How do you come up with this? Do you have even a basic clue as to how a light emitting diode works? The light from an LED is based on the number of electrons that get excited and which then drop back to base state. The fewer electrons the less light is emitted. So if you cut the size of the emitting substrate in half you get fewer electrons and less light.
You are confusing FREQUENCY of the emitted light with how many photons are emitted. The frequency, i.e. the color of light is dependent on the distance the electrons fall from the conduction band to the base band, i.e. how much excitation they receive. That is fixed by the excitation source, e.g. the voltage of the battery connected to the diode. That voltage is an intensive property. The flux that gets emitted is not.
Have you ever even wondered why a 100 watt floodlight is larger than a 5 watt refrigerator lamp?
First…This has nothing to do with frequency. I never mentioned it.
Second…I’m not confusing anything. The confusion is you conflating lumens with radiant flux and then energy with radiant flux. Those aren’t equivalent metrics.
You get less light. But the radiant flux in W.m-2 is the same. Do it. Don’t just thump your chest at me and pretend like you’ve figured out something no one else has. Actually do the experiment and prove this out for yourself. I’ll even help you do the experiment if you want.
BTW…you shouldn’t even need to do the experiment. Just think about it. And I mean really think about it. It has to be this way otherwise the 1LOT is violated. If you cut your emission surface in half you cut your emission power in half, but the only way that can be true is if the radiant flux is the same. Consider a 1 m^2 surface emitting at 100 W.m-2. The whole surface emits 100 j of energy each second. Now partition the surface into two equal parts. Part A emits 50 j of energy each second. Part B emits 50 j of energy each second. That is 50 j + 50 j = 100 j. But the radiant exitance from each part is still 100 W.m-2. That is (100 W.m-2 * 0.5 m^2) + (100 W.m-2 * 0.5 m^2) = 100 W, which is 100 j each second. Remember, when you partition a system you decrease the m^2 part. Part A is 0.5 m^2 and part B is 0.5 m^2. 0.5 m^2 + 0.5 m^2 = 1 m^2.
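The bookkeeping in that comment is easy to check mechanically. A minimal sketch using the same illustrative numbers (1 m^2 emitting at 100 W.m-2):
——————————
# Energy bookkeeping for the partitioned emitter described above.
exitance = 100.0       # W/m^2, asserted unchanged under partitioning
whole_area = 1.0       # m^2
part_a = part_b = 0.5  # m^2 each after partitioning

power_whole = exitance * whole_area                  # 100 W -> 100 J per second
power_parts = exitance * part_a + exitance * part_b  # 50 W + 50 W

print(power_whole, power_parts)  # 100.0 100.0 -> totals agree
——————————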
Another whacko word salad, utterly devoid of any meaningful content.
What happens when you move away from your goofy surface?
Frequency is FUNDAMENTAL to radiation.
That you don’t understand the lumen is a testament to your ignorance.
“Second…I’m not confusing anything. The confusion is you conflating lumens with radiant flux and then energy with radiant flux. Those aren’t equivalent metrics”
Of course they are equivalent. The only difference is the frequencies they measure! Flux is flux!
“You get less light. But the radiant flux in W.m-2 is the same”
You *are* kidding, right?
“Part B emits 50 j of energy each second. That is 50 j + 50 j = 100 j. “
You *are* kidding, right?
Watts are joules per second. If one half is emitting 50 joules per second it is emitting a radiant flux that is 1/2 of what the total (100 W/m^2) is. That means it is an extensive property.
So let’s review where we are. I say cd.sr and j is different than W.m-2. You say I’m wrong. Are you standing by this challenge?
I’m dead serious. The radiant flux in W.m-2 is the same no matter which partition of the surface you observe. The 1LOT says I’m right.
I’m dead serious. 1/2x + 1/2x = x. The principles of algebra say I’m right.
That is absurd. So you’re telling me a 1 m^2 surface emitting at 100 W.m-2, which is 1 m^2 * 100 W.m-2 = 100 W, magically starts radiating only at (50 W.m-2 * 0.5 m^2) + (50 W.m-2 * 0.5 m^2) = 50 W if you observe the surface as two halves? Where does the other 50 W go?
HAHAHAHAHAHAHAHAH
Your esoteric version of thermodynamics might, but the real deal laughs at your idiocy.
“So let’s review where we are. I say cd.sr and j is different than W.m-2″
They are both an energy flux! One is based on a smaller spectrum than the other. It’s like saying that the flux intensity of violet light is different than the flux intensity of green light! Flux is flux – period.
“The radiant flux in W.m-2 is same no matter which partition of the surface you observe. The 1LOT says I’m right.”
Actually it isn’t! You are assuming a 100% homogenous material doing the radiating. That isn’t physical reality. The radiation from an LED is based on the doping of the material. If that doping is not homogenous then you will get different flux intensity from different spots on the material. If the filament in an incandescent light bulb is not 100% homogenous then you will get a different flux from different parts of the filament.
“I’m dead serious. 1/2x + 1/2x = x. The principles of algebra say I’m right.”
And yet you can’t seem to understand what the algebra implies. 1/2x + 1/2x = x implies that you have an EXTENSIVE property. You can add extensive properties. You can *NOT* add intensive properties.
” So you’re telling me a 1 m^2 surface emitting at 100 W.m-2 which is 1 m^2 * 100 W.m-2 = 100 W magically starts radiating only at (50 W.m-2 * 0.5 m^2) + (50 W.m-2 * 0.5 m^2) = 50 W if observe the surface as two halves? Where does the other 50 W go?”
It goes into the container you put the other half into. When you separate the halves YOU SEPARATE THEM and measure them separately!
You are trying to say that I am going to PRETEND to cut this rock in half and then measure the mass of the rock. Oh, my! It’s the same mass as it was before I pretended to cut it in half so mass must be an intensive property!
You would totally fail even high school physics!
He’s just slamming word salads into the keyboard now.
Unfreaking believable. 50 joules/second is the same flux as 100 joules per second in his world.
j/s is not a measure of flux. It is a measure of power.
50 j/s is not the same as 100 j/s.
Technically j/s could be considered a flux by some; it’s just not radiant exitance, irradiance, radiant intensity, or W.m-2 which is what we’re discussing. I often shorten radiant flux density, radiant exitance, and other commonly used verbiage to radiant flux or just flux for brevity as long as I include or have previously specified W.m-2. It is unfortunate that there is no official definition of these terms, which is why I do try to always include the W.m-2 units in an effort to avoid confusion.
Another indication of your abject ignorance of radiometry.
and he will never admit it!
Of course not, he is the great expert.
The problem is that it is *YOU* that are confused.
It’s why the terms are “radiant flux” and “radiant exitance”. The fact that you didn’t know the difference is *YOUR* lack of knowledge, no one else’s.
“Technically j/s could be considered a flux by some; it’s just not radiant exitance, irradiance, radiant intensity, or W.m-2 which is what we’re discussing.” (bolding and italics are mine, tpg)
I missed this. We were talking about radiant flux. Radiant flux is measured in W/m^2.
We were *NOT* talking about radiant exitance, irradiance, radiant intensity, or W.m-2. We were talking about radiant flux.
Flux is an extensive property. Period. Exclamation point. Which you said was not correct. You said flux is an intensive property. Period. Exclamation point!
And he’s wrong, dead wrong, his word games can’t overcome how wrong he is.
“j/s is not a measure of flux. It is a measure of power.
50 j/s is not the same as 100 j/s.”
No kidding? Are you FINALLY figuring this out?
Flux is a measure of the flow of power. A flow of 50 joules/sec into 1 m^2 is different than the flow of 100 joules/sec into 1 m^2.
Yet you have been arguing that they are the same! Flux flows ADD. Flux is an extensive property!
“Luminous intensity is intensive because it is lumens per steradian”
Have you *ever* had to light candles to see where you are going when the electrical power has failed? Does lighting more candles give you more light?
More candles will only add more light if luminous intensity is an extensive property that adds.
If you place a piece of cardboard over the half of the opening of a flashlight does it provide less light for you to see by? That will only happen if the light is an extensive property.
“Partition the same light emitting surface into 2 halves and each half will emit the same W.m-2 as the whole. Therefore it is intensive.”
Malarky! The light from an emitting surface is generated from electrons being excited and then falling back to base level. Partition the surface into two and you will have fewer electrons in each half to be excited. I.e. each half will emit half the w/m^2 that the whole emits. QED – an extensive property!
Yes, but irrelevant. Adding more candles does not mean the radiant exitance in W.m-2 of the candles has changed.
If you place a piece of cardboard over the half of the opening of a flashlight does it provide less light for you to see by?
Yes, but irrelevant. Blocking the opening of a flashlight does mean the radiant exitance in W.m-2 of the flashlight has changed.
Hardly, blocking half the opening of a flashlight isn’t going to change its radiant exitance in W.m-2.
The number of electrons is not the same as radiant exitance.
Patently False.
What happens is that each half emits 1/2 the watts (W) over half the surface area (m^2). As a result the radiant exitance is 1/2 W / 1/2 m^2 which is the same as W.m-2.
More bullshit.
Another F
“Yes, but irrelevant. Adding more candles does not mean the radiant exitance in W.m-2 of the candles has changed.”
Total and utter bullshit! W/m^2 tells you how many joules per second will hit a surface area. It is that value of W/m^2 that tells you how bright the light is that is hitting the area!
If lighting a second candle provides more light on the wall behind the candles then the W/m^2 hitting the wall has increased. That means that the W/m^2 from each candle adds to the total. If the total is the addition of the components then it is an extensive property!
“Yes, but irrelevant. Blocking the opening of a flashlight does mean the radiant exitance in W.m-2 of the flashlight has changed.”
Did you mean to say the W/m^2 does *NOT* change? Because if you say the W/m^2 *has* changed then you have an extensive property!
“Hardly, blocking half the opening of a flashlight isn’t going to change it’s radiant exitance in W.m-2.”
It changes the radiant flux the flashlight provides to the surrounding area!
“The number of electrons is not the same as radiant exitance.”
OMG! The number of electrons being excited and falling back is what creates the photons that make up the radiant flux!
Did you *really* think that no one would notice that you have now stopped talking about radiant flux and changed to radiant exitance? It’s not even obvious that you know what radiant exitance is.
Radiant exitance is the joules per second emitted by a surface area. If you reduce the number of electrons being excited at that surface area then you reduce the joules/second being emitted by that surface area. That is an EXTENSIVE property! Add more electrons to that surface area and you get more joules/second!
You still haven’t accepted the fact that radiant flux is describing a flow of photons in an electromagnetic wave. As the number of photons change the radiant flux changes also. As you add more photons to the EM wave the amount of joules/sec goes up! And vice versa. An extensive property!
“What happens is that each half emits 1/2 the watts (W) over half the surface area (m^2). As a result the radiant exitance is 1/2 W / 1/2 m-2 which is the same as W.m-2.”
You are mixing up terms which just goes to show you don’t understand what you are talking about! The radiant flux emitted is the integral of the radiant exitance. If the radiant exitance of an object is R_e then the total flux is R_e * surface area! If you cut that surface area in half then the radiant flux is cut in half as well!
I’ll ask again – DID YOU REALLY THINK NO ONE WOULD NOTICE THAT YOU ARE TRYING TO CHANGE THE SUBJECT OF DISCUSSION?
More bullshit, of course it has changed.
“I tried adding fluxes from different surfaces. I checked the result to see if it was consistent with the 1LOT. It was not.”
Which is *NOT* adding fluxes on the same surface! In other words you moved the goalposts!
The answer is what you said if you had two suns. Their flux would add. That means that the flux is an EXTENSIVE property.
It’s really no more complicated than that!
Watts/m^2 is *NOT* an intensive property. You can *add* w/m^2 from two sources to get the total W/m^2 at a point in space. If W/m^2 was an intensive property you couldn’t do that.
Again, it is no more complicated than that.
You keep trying to come up with a rationale for believing that W/m^2 is not an extensive property.
“Lumens is not a measure of radiant flux.”
Never said it was. They are BOTH radiation flux. They both add. Like I said, you apparently have no real world experience in photography or you would know that.
Go look it up! lumens are a measure of visible light intensity based on the response of the eye. Radiant flux is a measure of *total* flux, of which light is just a part. They are both FLUXES and they are both extensive properties!
“Yet your preferred method of HDD/CDD does just that.”
You are not even a competent mathematician! The area under a curve is *NOT* an intensive property! The area under the curve is *NOT* an addition of the y-values, it is a multiplication of the y-value and the x-interval, as in ∫sin(x)dx.
Why do you insist on displaying your ignorance of math and physical science like this?
The goal post is whether changing the extent of the system changes the property. The property here is radiant flux in W.m-2. The system is the surface represented by the m-2 part in the radiant flux.
You are the one moving the goalpost here because you’re trying to change the power component represented by the W part in the radiant flux.
That has nothing to do with a property extensive or intensive per the IUPAC definition.
Again that has nothing to do with a property being extensive or intensive. Being able to add is related to conservation laws which has nothing to do with being intensive. Being intensive means that the property is independent of the extent of the system. Again, refer to the IUPAC definition.
No they aren’t. One has units of W.m-2 and the other has units of cd.sr.
I did look it up. That’s how I know the SI units for luminous flux is cd.sr. The SI units for radiant flux is W.m-2.
I didn’t say it was. I said the area under the curve involves a computation that is mathematically equivalent to averaging. Refer to the mean value theorem. And again…your own preferred source for HDD/CDD literally tells you to use an average temperature in step 2 of the integration method.
“This is ironic coming from a guy who confuses sums with averages, confuses addition (+) with division (/), thinks the derivative of x/n is 1, etc.”
“The goal post is whether changing the extent of the system changes the property. The property here is radiant flux in W.m-2. The system is the surface represented by the m-2 part in the radiant flux.”
How many times must this be explained to you. The flux from a light emitting diode depends on the number of electrons in the material that are rising and falling in energy. If you cut the number of electrons in half by cutting the radiating material in half you will get 1/2 the number of electrons emitting light – i.e. a lower flux level!
“You are the one moving the goalpost here because you’re trying to change the power component represented by the W part in the radiant flux.”
When you cut the number of radiating elements what do you think happens to the joules/sec being emitted? I.e. the WATTS being emitted?
“That has nothing to do with a property extensive or intensive per the IUPAC definition.”
It’s the definition of intensive and extensive! You can add extensive properties and you can *NOT* add intensive properties!
“Being intensive means that the property is independent of the extent of the system.”
If you reduce the number of radiating elements then you also reduce the flux intensity being radiated. So the flux being radiated *IS* dependent on the extent of the material!
I pointed this out to you before and you just dismissed it. You *ARE* confusing frequency with amount. The FREQUENCY of green light doesn’t change when you change the amount of material. The AMOUNT of green light being emitted *does* change however. And flux is measuring the AMOUNT, and not the FREQUENCY, of what is being transmitted.
“I said the area under the curve involves a computation that is mathematically equivalent to averaging.”
You aren’t even a competent mathematician. The area under a curve has *NOTHING* to do with averaging. Where in the formula ∫sin(x)dx do you see an *average*? It is a SUM, not an average!
“I did look it up. That’s how I know the SI units for luminous flux is cd.sr. The SI units for radiant flux is W.m-2.”
Your lack of knowledge of physics is still showing! They are BOTH measures of flux. From wikipedia:
“Luminous intensity is analogous to radiant intensity, but instead of simply adding up the contributions of every wavelength of light in the source’s spectrum, the contribution of each wavelength is weighted by the luminous efficiency function” (bolding mine, tpg)
Again, FLUX IS FLUX. What you are arguing is what scale you use. Kind of like using yards or meters. You are trying to argue that because a yard is not the same as a meter that they are not measuring the same thing – the length of a measurand.
“Refer to the mean value theorem”
You don’t even understand what the mean value theorem *IS*. It merely states that there is a point along an arc where the tangent of the arc at that point is parallel to the secant line between the end points of the arc.
SO WHAT?
That does *NOT* mean that the area under a curve is an AVERAGE! Its only use is in *estimating* the area under the curve by finding a point on the curve, in the interval of interest, where a rectangle through that point will have the same area as under the curve. As you make that interval of interest approach zero you are finding the area under the point. Basic calculus.
Again, where in ∫sin(x)dx do you see an average?
If the area under a curve was not an extensive value then the equation
∫sin(x)dx from 0 to pi = 2∫sin(x)dx from 0 to pi/2
would not be true. But it is!
If area was not an extensive value then the area of two football fields would not equal the area of one football field plus the area of a second football field.
NONE OF THESE INVOLVE *AVERAGES*. They are not averages. Their calculation is not equivalent to calculating an average.
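The additivity of definite integrals over adjacent intervals is easy to verify numerically. A minimal sketch using a simple midpoint Riemann sum (an illustrative method only):
——————————
from math import sin, pi

def riemann(f, a, b, n=100_000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

whole = riemann(sin, 0, pi)
half = riemann(sin, 0, pi / 2)
print(whole, 2 * half)  # both ~2.0: the areas over adjacent intervals add
——————————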
“And again…your own preferred source for HDD/CDD literally tells you to use an average temperature in step 2 of the integration method.”
That is *NOT* my preferred source. Stop putting words in my mouth! *MY* preferred source, which I pay a monthly subscription fee to says:
——————————
Approximation methods can never be as accurate as the Integration Method, because, without using detailed temperature records, it is impossible for them to fully capture the temperature variations that occur within each day. The better approximation methods do manage to come pretty close on most days in most climates, but I would still recommend using accurately-calculated data whenever possible.
———————————– (bolding mine, tpg)
“The goal post is whether changing the extent of the system changes the property. The property here is radiant flux in W.m-2. The system is the surface represented by the m-2 part in the radiant flux.”
This is an insane sentence — if you are trying to measure solar irradiance, the area of the sun is never included, it doesn’t factor in.
And the simple fact he continues to ignore is that if the number of suns is doubled, the irradiance increases.
The various bozos will now give me red marks for telling what is true.
HEHEHEHEHEHEHEHEHHE
P.T. Barnum just wishes he could have had this crew in his employ.
And as I keep saying the irradiance increases in that scenario because the numerator (W) of the W.m-2 ratio is increasing; not because the denominator (m-2) decreased. A similar thing happens to density (kg.m-3). It can increase because the numerator (kg) of the kg.m-3 ratio increased. It’s still an intensive property.
And I’ll repeat. If you partition a 1 m^2 surface in your backyard into two halves each with 0.5 m^2 area then each of the halves will get the same solar irradiance in W.m-2 as the whole. Therefore solar irradiance is an example of an intensive property of that 1 m^2 surface in your backyard. Likewise, the radiant exitance in W.m-2 of that same 1 m^2 surface will not change just because you partitioned it into halves each with 0.5 m^2 area. Therefore radiant exitance is an example of an intensive property.
So what?
Not a single word of this noise you keep bleating changes the fact that irradiance is an extensive property. Like this word-salad nonsense:
You’ve twisted yourself into a pretzel, you think you know what the word means, but you don’t.
Take your goofy word experiment farther, divide 1 m2 into 1000 pieces: each one still sees E W/m2 (assuming the irradiance is actually uniform). So what?
The real issue is that you have to multiply E times the area to get the energy each section receives.
And this is certainly NOT a constant.
Adding a source onto the area increases the irradiance.
Keep digging. Maybe AnalJ the wannabe climate pseudoscience guy or Slimon or blob or bellcurvewhinerman can bail you out.
“So what?”
ROFL!! You beat me to it!
You are right, he is a blackboard mathematician:
https://wattsupwiththat.com/2024/07/14/the-hottest-june-on-record/#comment-3942572
“And as I keep saying the irradiance increases in that scenario because the numerator (W) of the W.m-2 ratio is increasing; not because the denominator (m-2) decreased.”
NO ONE SAID ANYTHING DIFFERENT!
Flux is a flow. As a flow it can be additive. Put two hoses into the bucket you are filling with water and you get twice the water flow! It’s an extensive property! The dimensions of that flow is liters/sec.
What you keep on saying is that flux is an intensive property. IT IS NOT AN INTENSIVE PROPERTY!
“And I’ll repeat. If you partition a 1 m^2 surface in your backyard into two halves each with 0.5 m^2 area then each of the halves will get the same solar irradiance in W.m-2 as the whole.”
That is *NOT* what you said. You said a radiating object would emit the same radiant flux if you removed half of the object.
You are just plain lying now. Be brave and admit you were wrong!
“Therefore solar irradiance is an example of an intensive property of that 1 m^2 surface in your backyard.”
How many times are you going to try this idiocy? Taking away half of the receiver does *NOT* affect the object doing the radiating at all! The radiant flux remains the same. You are trying to redirect the issue being discussed! The value of the radiant flux is based on the sender, not on the receiver!
But guess what? If you cut the receiving area in half you ONLY GET HALF THE ENERGY INPUT to the receiver!
Radiant flux is (joules/sec)/m^2. So let’s have a receiving area of 4 m^2 and a flux of 2 (joules/sec)/m^2. Total energy received is 4 * 2 = 8 (joules/sec). Now cut the area in half. All of a sudden the received energy becomes 2 * 2 = 4 joules/sec. So in one case you get 8 joules every second and in the other you only get 4 joules every second.
You did *NOT* claim irradiance is an intensive property. You did not claim that radiant exitance is an intensive property. You claimed that radiant FLUX is an intensive property. And now you are trying to convince everyone that you said irradiance and radiant exitance and not radiant flux.
Just stand up and admit you were wrong!
He won’t (and can’t); Custer here knows everything. Every time he puts fingers to keyboard his incompetence rears itself.
And now AJ and bellcurveman are back to claiming area-weighted temperature averages magically transmogrify into an extensive quantity, therefore the climate science Fake Data numbers have deep meaning.
Ok. Let’s work with this example.
First…you define radiant flux with units of W.m-2. I’m good with that if you are.
Area: 4 m^2
Radiant Flux: 2 W.m-2
Technically this is power (joules/sec). Energy is just joules.
But the number is right since 4 m^2 * 2 W.m-2 = 8 W.
Okay. The area is now 4 m^2 / 2 = 2 m^2. Got it.
Again…this is technically power (joules/sec). Energy is just joules.
But whatever, the number is right since 2 m^2 * 2 W.m-2 = 4 W.
Yep. And notice that in both cases the radiant flux is still 2 W.m-2.
That’s the salient point here. The radiant flux does not change because you partitioned the surface. Remember, you defined “radiant flux” as having units of W.m-2 in this scenario.
So what?
Your goofy word gyrations still don’t overcome the simple fact that solar irradiance is extensive.
The “so what” is that TG changed the extent of the system, yet it had no effect on the radiant flux (W.m-2), so clearly it is independent of the extent of the system. Therefore the IUPAC definition says that it is an intensive property.
Bullshit. WTF is the “system”?
This committee “definition” is useless.
Total and utter bullshit!
The value of the flux is *NOT* determined by the receiver! It is determined by the transmitter!
Can you not even grok that simple fact?
The flow of water into a bucket is *NOT* determined by the size of the bucket! The flow of water into the bucket is determined by how much water is being sent into it from the source.
The amount of water being sent ADDS as you add additional water flux, e.g. additional hoses. The amount of light or x-rays or whatever ADDS as you add to the source!
Radiant flux is what is being SENT, not what is being received!
WAKE UP and look around at the real world around you!
I know. That’s what makes it an intensive property. Remember, the definition of intensive means that it does not depend on the extent of the system. The system here is the surface with which that flux is associated, represented by the denominator (m^2) of the flux ratio W/m^2. The sender of the power is represented by the numerator (W).
I know. That’s why when you add water (kg) to the volume of the bucket (m^3) you change its density. As with the flux above, the numerator (kg) is in reference to the sender while the denominator (m^3) is in reference to the receiver. The system here is the bucket and its volume (m^3), which the density (kg/m^3) is associated with.
Not necessarily. Whether it is sent or received is determined by what the denominator (m^2) is in reference to. If the denominator (m^2) is in reference to the receiver then the radiant flux is associated with the receiver. If the denominator (m^2) is in reference to the sender then the radiant flux is associated with the sender.
Radiant exitance is the term usually used to describe the scenario where the denominator (m^2) of the radiant flux is in reference to the sender.
Irradiance is the term usually used to describe the scenario where the denominator (m^2) of the radiant flux is in reference to the receiver.
Both are intensive.
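The sender/receiver distinction described above can be put into numbers. A minimal sketch with made-up values; the same 100 W yields a different W.m-2 figure depending on which area sits in the denominator:
——————————
# Same power, two reference areas: exitance vs irradiance.
power = 100.0        # W leaving the emitter (illustrative)
emitter_area = 0.5   # m^2, the sender
receiver_area = 4.0  # m^2, the receiver (larger: the beam has spread)

exitance = power / emitter_area     # W/m^2 referenced to the sender
irradiance = power / receiver_area  # W/m^2 referenced to the receiver

print(exitance, irradiance)  # 200.0 25.0 -> same watts, different reference
——————————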
“I know. That’s what makes it an intensive property.”
Good Lord! Again, the value of the flux is determined by the SENDER. It is *NOT* an intensive property. If I put two SENDERS in the system then I get the SUM of the flux from each headed toward the receiver! That means flux is an EXTENSIVE property.
If I put TWO similar hoses into a bucket both outputting the same amount of water per unit time do I get TWICE the water flowing into the bucket compared to having just one hose?
Until you can answer that question you don’t have a clue about intensive and extensive.
You are still trying to say that exitance is flux because the two have similar units. The dimensions simply don’t tell the story! The physics do. And you don’t have a clue about the physics.
“That’s why when you add water (kg) to the volume of the bucket (m^3) you change its density.”
You don’t change the density of ANYTHING! Density is volume/mass. That won’t change no matter how much water you put in the bucket. If you have one gallon of water its mass is about 4 kg. The density is 4kg/1gallon. If you have TWO gallons of water you have a mass of about 8kg. The density is 8kg/2gallons or 4kg/gallon.
The volume of the bucket simply doesn’t matter unless you are calculating the mass/bucket. 1 gallon of water in a two gallon bucket will have the same density as 1 gallon of water in a five gallon bucket.
The flow into the bucket will be the same whether it is a two gallon bucket or a five gallon bucket.
Get in the shower with an umbrella and turn the shower on full. You will get a value of flow determined by the shower head and the water pressure hitting you in the face. Now open the umbrella and put it between you and the shower head. You will have the SAME FLOW of water but you won’t be receiving the same flow. The FLUX OF WATER remains the same. Your umbrella didn’t change that one iota!
Now, pretend you have two shower heads. How much water will be hitting you in the face with the umbrella closed? It will be twice what was hitting you in the face with just one shower head.
Again, the flux is determined by the sender and flux ADDS.
If you can’t understand the simple physics of this you’ll never understand radiation, flux, emittance, and irradiance.
He will never acknowledge the truth here — he desperately needs irradiance to be intensive to justify his fraudulent air temperature data manipulations and Fake Data.
There are a lot of inaccuracies in your post, and irrelevant questions that do nothing other than deflect and divert and that we’ve already hashed out, so I’ll respond to this one statement.
No it isn’t. Density is mass/volume. The units are kg.m-3.
If it wasn’t extensive then cutting the received area in half wouldn’t result in the power received being cut in half.
Subtraction is just addition of a negative value.
Yes it would. Let’s do the math together.
Consider the whole…
Surface Area: 4 m^2
Radiant Flux: 2 W.m-2
Power Received: 4 m^2 * 2 W.m-2 = 8 W.
Now consider half of the whole.
Surface Area: 2 m^2
Radiant Flux: 2 W.m-2
Power Received: 2 m^2 * 2 W.m-2 = 4 W.
Notice two things. First, the radiant flux stays the same at 2 W.m-2, therefore it is intensive. Second, the power received changed from 8 W to 4 W. Therefore radiant flux being an intensive property necessarily means that if you cut the received area in half the result is that the power received gets cut in half.
Do you understand the math here?
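The arithmetic in that exchange, spelled out with the same illustrative numbers:
——————————
# Halving the receiving area at a fixed flux of 2 W/m^2.
flux = 2.0  # W/m^2, "radiant flux" as defined in this exchange

for area in (4.0, 2.0):         # whole receiver, then half of it
    power = flux * area
    print(area, flux, power)    # area halves, flux unchanged, watts halve
——————————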
I understand that you have the reasoning capacity of a turnip.
Nah. Even a turnip knows how to turn its leaves to maximize irradiance captured from the light flux, i.e. increasing the extent of the receiving area. bdgwx doesn’t have that much reasoning capacity.
Correct, my bad.
At the risk of being beaten up from all sides, isn’t anything with units of W/m^2 power density, or should that be W/m^3?
In any case, just like density, if you have 2 of the values you have the third. There was a chap named Archimedes who did some work in that area some little while back.
Radiation is usually determined by what is hitting a plane in 3-D. Radiation propagates as a spherical wave front whose area expands over distance, not as a volume expanding over distance. That is why the inverse-square law applies to radiation flow (see the sketch after this comment).
Water is not a direct correspondence because water flow *is* measured by volume, not area. You could calculate water flow using a plane intersecting the flow and determining its velocity at the plane but that’s not the common convention. Water is a “volume flow”. The concept of “flow” and “flux” apply to both, however.
Heat conductance is typically measured by surface area. From “Introduction to Heat Transfer” by Brown and Marco, 1942:
“As already stated, the quantity of heat that will flow by conduction per unit time is proportional to the thermal conductivity of the material, the area of the conductor normal to the path of flow, and the temperature gradient at the area being considered.”
“The coefficient of thermal conductivity is defined as the quantity of heat that will flow across unit area in unit time if the temperature gradient across this area is unity.”
For something like a continuous radiation flow, e.g. a flashlight, you could measure the flux per volume but I’m not sure how you would use it. Since radiation impacts are typically considered to be at a surface that means the amount hitting the surface per time is the important thing to know. For something like light hitting the ocean that’s a lot like conduction with a gradient through the material. You could generate a “coefficient of light conductivity” for ocean water along with a temperature gradient for dR/dl factor (change in radiation per unit length) and that may in fact exist, I’ve never investigated it.
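The inverse-square spreading mentioned at the top of this comment can be sketched with a hypothetical 100 W point source radiating through expanding spherical shells:
——————————
# Flux through expanding spherical wavefronts falls off as 1/r^2.
from math import pi

P = 100.0  # W, total power of a hypothetical point source

for r in (1.0, 2.0, 4.0):
    shell_area = 4 * pi * r**2   # m^2, area of the spherical wavefront
    print(r, P / shell_area)     # W/m^2: quarters with each doubling of r
——————————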
Most quantities with denominator units of m^2 are a flux.
Most quantities with denominator units of m^3 are a density.
Power density is most often defined as having units of W.m-3. For example, the AGM battery in my truck has a power density of 334 W.m-3.
There is no general rule for flux though. For example wikipedia uses “irradiance flux density” for a quantity with units of W.m-2, “volumetric flux” for a quantity with units of m3.s-1, “spectral flux” for a quantity with units of W.m-1, etc. There are many different definitions and units for flux.
“Notice two things. First, the radiant flux stays the same at 2 W.m-2 therefore it is intensive.”
That’s because the receiving system doesn’t determine the flux! A balance scale doesn’t determine how much mass is placed on it, the person doing the placement does!
The FLUX is an extensive property because the amount of flux is determined by the sending system and it is additive at the sending system.
“Second, the power received changed from 8 W to 4 W”
“Therefore radiant flux being an intensive property necessarily means that if you cut the received area in half the result is that the power received gets cut in half.”
Which means the power received IS AN EXTENSIVE PROPERTY AS WELL! Since the power received is additive and depends on the extent of the receiver it is an extensive property!
You are making a fool of yourself. I hope that gives you pleasure.
And if the irradiance is nonuniform, changing the area of the receiver changes the power received! Assuming it is exactly halved is completely false.
And how does another temperature source double the temperature?
Once again his goofy ideas make James Watt roll over in his grave.
“Yep. And notice that in both cases the radiant flux is still 2 W.m-2.”
That’s because you are calculating what is being RECEIVED. What the receiver gets doesn’t define what is being SENT!
“The radiant flux does not change because you partitioned the surface. Remember, you defined “radiant flux” as having units of W.m-2 in this scenario.”
Again, the receiver doesn’t define the flux value, it defines how much “power” can be received!
Turn the whole thing around and calculate it from the source. Double the amount of substance being radiated and the flux gets doubled too!
It is the *source* that defines the flux value!
You calculated it too. And I’ll remind you that you got the same answer.
Duh.
Sure. We can analyze that. But not until you have a full understanding of what is going on in your first example.
Do you understand why radiant flux (as you defined it) is intensive?
What happens when the numerator portion (W) of the radiant flux (W.m-2) changes?
What happens when the denominator portion (m-2) of the radiant flux (W.m-2) changes?
What system is the radiant flux a property of here? Is it a property of the body modulating the numerator (W) or is it a property of the body referenced by the denominator (m-2)?
Custer is baking another insane pretzel.
Radiant flux as I defined it *IS* extensive. Add extent to the sending system and you add to the radiant flux. The dimensions of the flux don’t determine what is extensive and intensive.
The rest of your post is just word salad babble.
You increase the W portion by increasing the W portion. There are several ways that can be done. One is by adding to the extent of the system – e.g. add two light emitting diodes in your flashlight instead of one. You can do the same thing to decrease the amount of flux being transmitted by the flashlight by removing diodes. Or you can increase the voltage and current being used with a diode to increase the number of electrons that are excited per unit time in order to increase the exitance of material.
You don’t increase or decrease the denominator. It is defined as being 1 square meter. The flux is how many watts is generated PER SQUARE METER. A square meter is a square meter. You can’t even get this correct! Do you change the length of an hour when calculating miles per hour?
“What system is the radiant flux a property of here?”
The sending system!
“Is it a property of the body modulating the numerator (W) or is it a property of the body referenced by the denominator (m-2)?”
Babble. Just plain babble. In what world do you live where a m^2 is not a m^2 all the time?
yep. you got the downchecks for telling the truth.
He’s going to try and convince you that he said radiant exitance, not radiant flux.
He can’t hide; his own words, right here in B&W: “Therefore solar irradiance is an example of an intensive property“.
And the surface area in m^2. Remember a flux (in this context) is the amount of stuff passing through a surface.
And 1/2 the number of electrons are doing so through 1/2 the surface area. Remember, you cut the substrate in half too. So 1/2 W divided by 1/2 m^2 is just W divided by m^2 or W.m-2.
It gets cut. Just like the surface area; it gets cut too.
Sorry. Not even this is true unfortunately. Flux has many different definitions depending on the context. The context we are discussing here is a transport flux across a surface specifically a flux involving power transport with units of W.m-2.
If you’d stop the Dunning-Kruger act just long enough to review the mean value theorem specifically as it relates to integrals you would see that it unequivocally and indisputably does have something to do with averaging. What it says is that there exists an average value c such that the integral of f(x) from a to b can be calculated more simply and with trivial algebra as f(c)(b-a). That’s it. Full stop. There exists an average of the function f that allows you to compute the value of a definite integral with nothing more than one subtraction and one multiplication.
The so what is that you can integrate a function of an intensive property by first calculating the average of the intensive property. This is an unequivocal and indisputable fact proven mathematically.
It is not a mere “estimate”. It is the area under the curve.
If f(x) = sin(x) then per the mean value theorem integral[f(x), dx, a, b] = f(c)*(b-a) where c is the average of f(x). Using some not so complicated algebra we have c = arcsin(integral[sin(x), dx, a, b] / (b-a)). That is your average.
And using the mean value theorem we can show that c = arcsin(2/π). That means the average of sin(x) from 0 to π is 0.6901.
BTW…that is a fun problem because it involves u substitution since arcsin(sin(x))=x is only valid for the domain x : {-π/2 to π/2}. I’ll be super impressed if you can post back with the correct steps for solving for the average c.
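The numbers in that computation are easy to check, and they also separate the average value of sin on [0, π] from the location where that average occurs. A minimal sketch:
——————————
# Mean value theorem for integrals applied to sin on [0, pi].
from math import asin, pi

a, b = 0.0, pi
integral = 2.0  # exact: the integral of sin from 0 to pi

mean_value = integral / (b - a)  # f(c) = 2/pi, the average value of sin
c = asin(mean_value)             # where sin attains that average

print(mean_value)  # 0.6366...
print(c)           # 0.6901... (and, by symmetry, pi - c also works)
——————————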
Are you not the same Tim Gorman that posted this?
You’re insane; your own words convict you: “Therefore solar irradiance is an example of an intensive property“.
It is not.
I reread my post and there is wording that is confusing (my fault). To clarify c is the location of the average. f(c) is the actual average. I solved for c because the question asked was “where in ∫sin(x)dx do you see an average”. c is where in the function f that the average exists.
It was a fun exercise for f(x) = sin(x). But the salient point of the mean value theorem is that you can compute a definite integral if you know the average of the function without even needing to know what the function is. It’s actually one of the most powerful theorems in all of math.
The reason why this is relevant to the concept of intensive properties is that if you can compute the average of an intensive property then you can integrate it to convert it into an extensive form. For example, if you know the average density of the system you can easily compute the mass of that system by multiplying the average density by the extent of the system without having to fully integrate through the extent of the system. It’s a real world example where knowing the average of an intensive property is useful.
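The density example can be sketched with made-up numbers, say a 2 m^3 tank holding layers of brine of different densities:
——————————
# Integrating an intensive property (density) via its average.
layers = [(0.5, 1000.0), (1.0, 1025.0), (0.5, 1050.0)]  # (m^3, kg/m^3)

total_volume = sum(v for v, _ in layers)
total_mass = sum(v * rho for v, rho in layers)  # the full "integration"

avg_density = total_mass / total_volume  # volume-weighted average, kg/m^3

print(total_mass, avg_density * total_volume)  # 2050.0 2050.0, identical
——————————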
Try that for a sine wave. You’ll wind up with zero.
“ For example, if you know the average density of the system you can easily compute the mass of that system by multiplying the average density by the extent of the system without having to fully integrate through the extent of the system.”
Ask someone figuring board-feet for a cost estimate if that works.
Or surveying the distance between two remote points.
You are *still* trying to equate exitance with flux. They are *NOT* the same thing!
The integral of sin(x)dx is -cos(x) + C. Where in that do you see an average?
You are *still* just posting word salad trying to fool everyone you know what you are talking about.
You already asked me to do it for a sine wave. And I did.
Not necessarily. Your example ∫sin(x)dx from 0 to pi neither has an average of zero nor is it located at x=0.
I already answered that question.
And the integral of the entire sine wave, from 0 to pi plus from pi to 2pi, will be 0.
Are you now going to admit that the average daytime temp is *NOT* related to Tmax or to theTmid-range?
That has nothing to do with your request and my response.
Why would I accept something like that? Clearly there is a relationship between Tmax and Tmin and Tavg. Does that mean (Tmax+Tmin)/2 is the best way to compute Tavg? Nope. But that topic, whichever direction you want to take it, does not invalidate the indisputable mathematical fact that there exists a link between the area under the curve and an average of the curve.
It has *everything* to do with it. You are just trying to ignore reality.
Tavg is *NOT* an average if it is (Tmax + Tmin)/2. It is a mid-range value and is useless for determining climate.
“exists a link between the area under the curve and an average of the curve.”
Except degree-days are *NOT* an average, they are a sum. sin(x)dx is a degree-day. sin(x) is a temp. dx is time. That gives temp * time. Degree-day.
There is no divisor. You and bellman have no real understanding of calculus.
I’ve done the calculations on my own 5-min daily data. Tavg is *NOT* Tmid-range (see the sketch after this comment).
You simply won’t face facts: you can’t find an average of an intensive property. It’s not just “not the best way”, it’s physically meaningless. I’ve given you pictures of temps in NE Kansas several times. There is no gradient between locations indicating that there is a functional extensive relationship of temperatures between locations. The temp depends on a lot of input values that *are* extensive and have gradients but temp is not extensive.
Mid-range temps were used in the past because it was *easy* (and were all that was available). That’s still why they are used by climate science – they are *easy*. But “easy” doesn’t translate to “physically meaningful”.
Science is supposed to advance in knowledge and understanding. But climate science is still stuck in 16th century knowledge and understanding when it comes to temperature. Climate science is still stuck in the “true value +/- error” meme while the rest of the world abandoned that meme for “estimated value +/- uncertainty” 60 years ago (and even before then for many). And you are right there with climate science.
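The 5-minute-data comparison mentioned above can be sketched with a synthetic, deliberately skewed day (made-up numbers; real station data would be used in practice):
——————————
# Integrated daily average vs (Tmax+Tmin)/2 mid-range for an asymmetric day.
import math

# 288 five-minute samples: a day that warms quickly and cools slowly
temps = [10 + 12 * math.sin(math.pi * (i / 288) ** 0.7) for i in range(288)]

t_avg = sum(temps) / len(temps)        # time-weighted (integrated) average
t_mid = (max(temps) + min(temps)) / 2  # mid-range

print(round(t_avg, 2), round(t_mid, 2))  # the two disagree for skewed days
——————————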
Climate science practitioners are unable to think in terms of anything except global averages.
Climate science has no understanding of heat flow!
Absolutely correct, his knowledge of integration is abysmal.
A lumen is the result of integrating spectral irradiance with a weighting function that varies with wavelength. He’ll never comprehend this.
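What that integration looks like can be sketched with a crude stand-in for the CIE luminosity function (a Gaussian peaked at 555 nm, for illustration only; the real V(λ) is a tabulated curve):
——————————
# Luminous flux as a V(lambda)-weighted integral of spectral power,
# scaled by 683 lm/W. The Gaussian V() below is NOT the CIE data.
import math

def V(lam_nm: float) -> float:
    """Rough photopic sensitivity, peaking at 1.0 near 555 nm."""
    return math.exp(-(((lam_nm - 555.0) / 80.0) ** 2))

# hypothetical flat spectrum: 0.01 W per nm from 400 to 700 nm
lumens = sum(683.0 * V(lam) * 0.01 for lam in range(400, 701))
print(round(lumens))  # ~960 lm from the weighted integral
——————————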
he continues to confuse frequency (energy level) with flux. They are *not* the same thing. The energy level is a factor in determining the total amount of joules being emitted per time by an object but it does NOT mean the same number of joules per second is being emitted by two different objects just because they emit green light.
Exactly—he doesn’t understand that total (broadband) irradiance is the integral of spectral irradiance.
I noticed that every time the two source example has been brought up, he has ducked the question, choosing instead to yammer away about “radiant exitance” (which is simply flux emitted per unit area) and the “1LOT” (which is simply energy conservation).
He started off talking about stuff he knows nothing about – as usual.
Now he is trying to convince everyone that the discussion was about radiant exitance which is an intensive property instead of radiant flux. As if the entire thread isn’t available for everyone to see what he was talking about.
Exitance is something I never had to use in radiometry, off the cuff I can’t even tell you the difference between the two.
He’s using it as if it was irradiance:
Basically exitance is how many electrons are emitting per unit area. If an object of 4 m^2 has each square meter emitting from two electrons then if you look at the first square meter it will have the same exitance as the next square meter. If you take away one of the square meters the others will still emit what two electrons can emit. It’s an intensive property of the substance. The *flux* from the object will change because one will have 8 electrons emitting and when one of the square meters is taken away there will only be six electrons emitting.
He didn’t understand the concepts well enough to use the proper terms. Now he’s trying to convince everyone that he didn’t say what he said.
Its as if he thinks the area itself is emitting, but it is not. The emitter is still external. Beyond this I can’t even try to figure out his thinking.
He’s the guy that would look directly at an arc welder’s arc.
Heh.
I think I understand what he is arguing, it is classic Stokesian logic — because exitance has units of W/m2 and is intensive, this means that any quantity with the same units must also be intensive.
He’s wrong, the units don’t decide — physics does.
Yep.
No knowledge of basic physics. He’s a cherry picker.
I remember studying it in an EE class as applied to LED’s. If you go down the rabbit hole far enough even exitance is an extensive property in that it depends on the number of electrons per unit area that are emitting. Once the material in the LED is created you can’t change the number of electrons emitting per unit area but you *can* change the number of electrons emitting per unit area while the material is being created by changing the doping in the semiconductor.
I’ve never found it very useful since I don’t create semiconductor material. An engineer designing LED’s and how they are doped probably does. My concern was always the total flux emitted by the LED which is the exitance times the emitting surface area – or what is on the LED data sheet for lumen output! In other words, it’s radiant flux – which is an extensive property.
bdgwx *still* doesn’t actually understand what he’s talking about. He’s just trying to cover his backside so he doesn’t have to admit he was wrong.
Interesting enough, wide-bandgap solar cells made from compound III-V semiconductors such as GaP become LEDs when a forward bias is applied. The same engineering that reduces recombination for collecting photons makes them also emit photons at the bandgap wavelength.
I didn’t know this. I’ve never really studied solar cells in any detail. My guess is that bdgwx and company never have either.
These are from the terminology I have:
emissive power—discouraged in favor of the preferred term radiant exitance.
irradiance, E [W·m–2], n—at a point on a surface, radiant flux incident per unit area of the surface; the derived unit heat flux density, irradiance in Standard IEEE/ASTM SI 10.
radiance, W·m–2·sr–1, n—the SI derived unit radiance in Standard IEEE/ASTM SI 10.
radiant energy, Q [J], n—energy in the form of photons or electromagnetic waves.
radiant flux, Φ [J/s], n—the SI derived quantity power, radiant flux in Standard IEEE/ASTM SI 10.
radiant power—see radiant flux.
radiant exitance at a point on a surface, M [W·m–2], n—quotient of the radiant flux leaving an element of the surface containing the point, by the area of that element.
radiant exitance—see radiant exitance at a point on a surface.
radiant exitance, emitted—see radiant exitance at a point on a surface.
Notice that irradiance is a derived SI unit, but exitance is not.
I’m trying to follow the thread of discussion, which has become convoluted, but there seems to be some miscommunication of the nature of the “two sun problem” as posed. It is true that if you put a twin sun on the other side of earth, the received flux would change. The earth would be receiving twice the power per unit area as with one sun.
However, the surface flux emitted by the suns would not increase whether there were two suns or 1000. They would still be emitting the same power per unit area, the total power would just be larger because the emitting area has increased.
I’m not sure where to go with this beyond this point of clarification because it isn’t clear what the point of the discussion is anymore, but yeah.
What isn’t clear are bgw’s bizarre claims about thermodynamics and irradiance.
The area of the sun is NOT a factor when measuring solar irradiance.
That is true, because the irradiance is power per unit area, but that is consistent with what bdgwx is saying, hence I’m not clear what the objection is. In fact it kind of seems to go hand in hand with what they are saying.
If you think his word salads are coherent, well, put down the weed is all I can say.
If most other people can easily understand the things you’re calling a word salad then the problem might be in your own comprehension.
It is glaringly obvious that he has no training in these subjects, yet tries to lecture people as if he was some sort of expert.
This is all so climate pseudoscience.
It isn’t clear to me that the folks arguing with bdgwx in this thread have taken much time to understand what is being said. It strikes me that there’s an overzealous desire to just contradict whatever they are saying, and it’s led to a convoluted and pointless collection of rabbit trails and no clear sight of what the primary objection is actually supposed to be.
I’m not even certain that you or the Gormans could concisely articulate what you think bdgwx’s position is and what your objection to that is without contradicting each other.
The point is that bgw yammers up, down, and sideways that irradiance is an intensive property.
He is wrong, 1000% wrong. His goofy thought experiments do not support his assertion.
Its that simple.
Do you need more help?
It wasn’t just a word salad, it was WRONG!
irradiance is *NOT* what bdgwx was talking about. It is *not* consistent with what he was saying. Irradiance is an extensive property. He was trying to convince us that it isn’t.
“but that is consistent with what bdgwx is saying, “
Bullshit. Who do you think you are fooling? bdgwx claimed radiant flux is an intensive property of an object and then wasted two days of people’s time and bandwidth trying to prove that assertion as correct.
Now he’s claiming the subject under discussion was radiant exitance and not radiant flux when it was RADIANT FLUX he kept talking about in his posts!
Again, this is quibbling over semantics. bdgwx was obviously describing power per unit area so your objection is merely over the term used.
It’s always been about radiant exitance. That’s what my post that triggered you immediately was about, and it started this chain reaction of one absurd claim after another defending your original claim that it was an extensive property, which is also absurd. And it’s not just radiant exitance that is intensive. The concept applies to any transport flux (W.m-2) involving radiation (among other things) including irradiance as well. You’re the one that started throwing around all of these different terms perhaps as a diversionary tactic or perhaps to gaslight me. I don’t know. And it is clear from your context in various posts (i.e. here and here and here and here) that you accepted that we were discussing transport fluxes involving radiation with units of W.m-2. Then somewhere along the way you decided you’d change the discussion to just W or some other units equivalent to power (i.e. here and here). You’ve even been flipping back and forth with your definitions at different points in time. It makes having a meaningful conversation with you nearly impossible.
Oh look bg-whatever has a new LITS to obsess about.
How unusual.
And that terse one-liner definition of intensive cannot change the fact that irradiance is extensive.
Which you used to claim that anything with units of W/m2 must therefore also be intensive, absurd.
Then you pulled out that useless committee definition out of the ether to prove your absurd claim.
I never said ALL quantities with units of W.m-2 were intensive. But we can’t even get to the exceptions because you and TG are so violently triggered by the fact that a transport flux of radiation is intensive which is what started this whole conversation.
Let me make sure I have this straight. So my claim that an intensive property is one “whose magnitude is independent of the extent of the system” is absurd? Yes/No?
Your own words:
“Therefore solar irradiance is an example of an intensive property“
You are wrong, completely wrong, and your incompetence and pride prevent you from acknowledging the truth.
You don’t understand the words you sling about.
“I never said ALL quantities with units of W.m-2 were intensive.”
Of course you did! You did it when you said flux in W/m^2 is intensive!
“you and TG are so violently triggered by the fact that a transport flux of radiation is intensive which is what started this whole conversation.”
You *still* can’t admit that flux is extensive. The transport of radiation is described as a FLUX. Flux is extensive! Transport means FLOW. A flux is a FLOW. Flows are extensive. You can’t even admit that you can increase the flow of water into a bucket by ADDING a second hose!
His very own words:
“Therefore solar irradiance is an example of an intensive property“.
He’s wrong, and can’t admit he is wrong.
You are making the same mistake that bdgwx started off with.
You started talking about radiant flux and then changed to talking about radiant exitance. One is an extensive property and the other is an intensive property.
Flux is flux. It is an extensive property which bdgwx denied. The entire thread was bdgwx trying to convince us that flux is an intensive property.
Flux and exitance are not the same thing. It’s no one’s issue but bdgwx’s that he didn’t know what he was talking about.
It sounds like you’re just quibbling over semantics. It isn’t clear what the actual issue you’re trying to resolve is. I see that you are very intent on proving some point about the definitions of the terms flux, exitance, and irradiance, but what purpose that serves has been lost in the fray. Bdgwx has followed a consistent usage of the terms they’ve employed.
Not at all — this all started with the B&B clowns trying to get around temperature being an intensive quantity so they can justify the averaging gyrations of climate pseudoscience.
“…trying to get around temperature being an intensive quantity…”
I’ve never suggested that temperature is not an intensive property. It’s just nonsense to claim that means it cannot be averaged.
“I’ve never suggested that temperature is not an intensive property. “
You suggest that every time you say you can average temperatures!
How do you average something you can’t sum? You can’t add intensive properties.
The same way TN1900, Dr Spencer or anyone else looking at averages of temperatures. By rejecting your premise that you cannot sum temperatures. It’s very easy to sum temperatures or any set of numbers. Adding up is one of the first things they taught us at university.
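For concreteness, here is a minimal sketch of the kind of calculation TN1900’s Example 2 describes — the readings below are invented for illustration, not NIST’s actual data:

```python
import math

# Invented daily maximum temperatures (deg C) - illustrative only
t_max = [22.1, 23.4, 21.8, 24.0, 22.9, 23.1, 22.5]

n = len(t_max)
mean = sum(t_max) / n  # the monthly average: summing temperatures, then dividing
s = math.sqrt(sum((t - mean) ** 2 for t in t_max) / (n - 1))  # sample std deviation
u_mean = s / math.sqrt(n)  # standard uncertainty of the mean, as in TN1900's example

print(f"mean = {mean:.2f} C, s = {s:.2f} C, u(mean) = {u_mean:.2f} C")
```

Summing and dividing is all the average is; nothing stops you doing the arithmetic.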
The issue has never been about whether you “can” add up temperatures, it’s about the fact that as temperature is intensive the sum is not meaningful.
You think that makes the average meaningless. I disagree.
“It’s very easy to sum temperatures or any set of numbers.”
There you go! Statistical world at its finest.
Number is Numbers! They don’t have to exist in the real world!
“it’s about the fact that as temperature is intensive the sum is not meaningful.”
The average is only meaningful as a statistical descriptor of the set of numbers. And even then it is not a *complete* statistical description. It is a guess at “what might be”, not “what is”.
Intensive properties can’t be added. If they can’t be added then you can’t have an average.
One more time, if you have a steel rod at 10C and a second steel rod at 20C you can’t add the temps and come up with 30C. The properties don’t add. Therefore there is no average value.
Plot this and you get a point at 10C and one at 20C but there is no distribution of temperatures between the two points.
“Statistical world at its finest.”
Yes. Statisticians have developed this advanced mathematical concept called adding.
“Intensive properties can’t be added. If they can’t be added then you can’t have an average.”
And yet you keep plugging TN1900, which manages to do exactly that. Just uses that adding procedure on maximum daily temperatures to get a monthly average.
“One more time, if you have a steel rod at 10C and a second steel rod at 20C you can’t add the temps and come up with 30C.”
You just did. You really need to understand the difference between “can’t” and “shouldn’t”.
But for some reason you have no problem adding 100 temperature readings and working out the uncertainty of that sum.
“The properties don’t add. Therefore there is no average value.”
I’ll repeat, as you ignored it the first time. You do not need the sum to be a meaningful in order for the average to be meaningful. Take your silly example. The figure of 30°C means nothing. The temperature of the two rods is not 30°C. But the average of 15°C could have lots of meaning.
“Plot this and you get a point at 10C and one at 20C but there is no distribution of temperatures between the two points.”
That’s another one of your misunderstandings. It has nothing to do with intensive properties. Have one rod of length 10cm and another of 20cm. The sum of 30cm has meaning. The average of 15cm has meaning. But that doesn’t mean you have a rod of 15cm. Once again the average does not have to be equal to an existing value.
“And yet you keep plugging TN1900, which manages to do exactly that. Just uses that adding procedure on maximum daily temperatures to get a monthly average.”
I keep plugging TN1900 as an example of how to handle measurement uncertainty. I *don’t* agree that temperature is an extensive value that can be summed! Once again, you *never* read anything for comprehension.
“You just did. You really need to understand the difference between “can’t” and “shouldn’t”.”
You *can* if you assume the meme that “numbers is numbers” like you do. You *shouldn’t* because they are intensive, not extensive. You *can* stay drunk all the time, that doesn’t mean you *should*.
“But for some reason you have no problem adding 100 temperature readings and working out the uncertainty of that sum.”
Because intensive quantities don’t add in the real world. Why is that so hard to understand? Most physical scientists and almost all engineers live in the real world, not in a blackboard statistical world where “numbers is numbers”. Only in climate science and statistics do you find those who believe you can add densities, temperatures, and colors as you do.
“That’s another one of your misunderstandings. It has nothing to do with intensive properties. Have one rod of length 10cm and another of 20cm. The sum of 30cm has meaning. The average of 15cm has meaning. But that doesn’t mean you have a rod of 15cm. Once again the average does not have to be equal to an existing value.”
What physical meaning does that average of 15cm have? Can you use it to fill a 15cm gap? Can you put it in your pocket?
If you drop a lead ball in a bucket of water can you find an “average density” somewhere in the bucket?
That 15cm is a STATISTICAL descriptor. It doesn’t exist in the real world. And it isn’t even a complete statistical descriptor unless you also know the standard deviation!
How many definitions of the word “measure” must you get before you understand?
Cambridge dictionary: “a unit used for stating the size, weight, etc. of something, or a way of measuring:”
Note the word “something”.
Merriam-Webster: “b : the dimensions, capacity, or amount of something ascertained by measuring”
Note the word “something”
American Heritage: “A reference standard or sample used for the quantitative comparison of properties”
Note the word “properties”
dictionary.com: “the extent, dimensions, quantity, etc., of something, ascertained especially by comparison with a standard:”
Note the word “something”.
The Free Dictionary: “the extent, dimensions, quantity, etc., of something, ascertained especially by comparison with a standard:”
Note the word “something”
———————————————
No one but those on here trying to defend CAGW believes that anything other than a thing can be measured. Only things that can be measured can be a measurand.
from definitions.net:
—————————————–
measurand
Measurement is the quantification of attributes of an object or event, which can be used to compare with other objects or events. In other words, measurement is a process of determining how large or small a physical quantity is as compared to a basic reference quantity of the same kind. The scope and application of measurement are dependent on the context and discipline. In natural sciences and engineering, measurements do not apply to nominal properties of objects or events, which is consistent with the guidelines of the International vocabulary of metrology published by the International Bureau of Weights and Measures.
—————————————————-(bolding mine, tpg)
I could go on with multiple references, but you wouldn’t believe any of them.
You live in a world where “numbers is numbers” and don’t have to have any relationship to the real world.
“I keep plugging TN1900 as an example of how to handle measurement uncertainty. ”
Thus ignoring all the things the example does that you say are forbidden. Treating an average as a measurand. Treating a statistical descriptor as a measurand. Averaging intensive properties. Treating a sample as a measurement, and the sampling uncertainty as a measurement uncertainty.
“Once again, you *never* read anything for comprehension.”
You need to consider the possibility that what you write is incomprehensible. In two sentences you go from TN1900 is an example of how to handle measurement uncertainty, to TN1900 is doing things that are impossible.
Me:“But for some reason you have no problem adding 100 temperature readings and working out the uncertainty of that sum.”
Tim: “Because intensive quantities don’t add in the real world.”
Do you not see the contradiction?
“How many definitions of the word “measure” must you get before you understand?”
You could start by mentioning metrology definitions rather than dictionaries.
Or TN1900’s broader definition
“Only those things that can be measured can be a measurand. from definitions.net”
So ignore all the international standards, and just use some random dictionary on the internet. (And that’s just copying Wikipedia)
Attribute of an object – not the object.
That’s the bit you highlighted? You do know what “nominal” means, don’t you?
From the VIM
The thing being calculated is the area-weighted arithmetic mean, which is physically meaningful for surface temperature because the temperature at any point on the surface is the product of the local energy balance. This is the same principle as taking the mass-weighted mean as the average temperature of a volume, which is also physically meaningful.
Exactly. An intensive property becomes extensive when you multiply it by an extensive property.
Tim should know this as he’s always going on about degree days, and how they can be added.
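A minimal sketch of that point, with made-up numbers, assuming two blocks of the same material so mass can stand in for heat capacity:

```python
# Two blocks of the same material: temperature (intensive) times mass (extensive)
masses = [2.0, 6.0]   # kg - extensive
temps = [10.0, 20.0]  # deg C - intensive

# m*T is extensive, so the products can be meaningfully summed
weighted_sum = sum(m * t for m, t in zip(masses, temps))

# dividing by total mass takes you back to an intensive temperature
mass_weighted_mean = weighted_sum / sum(masses)

print(mass_weighted_mean)  # 17.5 - closer to the heavier block's temperature
```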
Conversion is *NOT* what is being discussed!
What is being discussed is if you can ADD intensive properties.
If I put an iron rod at 10C in your hand and then put a second rod at 11C in your hand as well do you feel a temperature of 21C? Do you even feel a temperature of 10.5C?
Degree-days are an AREA under a curve. The AREA is an EXTENSIVE property. There is no addition of intensive properties involved. There is no “averaging”. If you understood calculus at all you would know this.
Your own preferred method for calculating degree days uses an average temperature.
And I’ll remind you yet again that the mean value theorem says that you can compute an integral using an average.
That is *NOT* my way of calculating degree-days.
And this area-weighting garbage has led to the current popular hysteria about boiling oceans, melting ice, and submerged cities, plus they have to make up data for the locations where there is none.
“What is being discussed is if you can ADD intensive properties”
No, the question is whether you can get a meaningful average. The sum will be meaningless but I say the average has meaning.
If it didn’t, you would have to reject nearly every article here, including your beloved pauses. It would be meaningless to say that June in the UK was cold, as this article claims.
“If I put an iron rod at 10C in your hand and then put a second rod at 11C…”
Really terrible example. You don’t feel temperature in your hands, you feel heat. And heat is an extensive property. I’m going to cool down quicker if I put both hands in a bucket of water than if I put just one hand in.
“Degree-days are an AREA under a curve”
Which is my point. By adding an extensive dimension to the units you’ve created degree days, the product of temperature and time, an extensive property. You can integrate the curve to get a total of degree days over any length of time, and you can divide that total by the number of days to get the average degree days per day, or just the average degrees.
If the sum of degree days is meaningful, then so is the average.
” If you understood calculus at all you would know this.”
Why do you think these pathetic ad homs carry any weight, when you’ve consistently demonstrated your inability to calculate the correct partial derivative, or even understand a simple equation involving ratios?
“No, the question is whether you can get a meaningful average. The sum will be meaningless but I say the average has meaning.”
The average has meaning as a statistical descriptor – NOT as a measurand.
“Really terrible example. You don’t feel temperature in your hands you feel heat. And heat is an extensive property. I’m going to cool down quicker if I put both hands in a bucket if water, than if I put just one hand in.”
If you don’t feel temperature then why does climate science use temperature as a proxy for heat?
CDD is *NOT* an average. It is a sum taken over a day. It is not sum/time. It is sum-time.
We’ve been down this road twice before. The integral of sin(x)dx in dimensions is temperature-time. Temp is sin(x); dx is time. It’s DEGREE-DAY, not DEGREE/DAY.
I keep telling you that you need to take a basic calculus course and you say you understand calculus. And then you come up with the idiocy that DEGREE-DAY is an average and not a sum taken over a day.
Again, sin(x)dx is y-x not y/x. Are we going to have to cover this for a fourth time?
“CDD is *NOT* an average.”
Complete strawman. I did not say they were an average. They are an example of how to handle intensive properties by multiplying by an extensive property. That allows you to sum them, and by that means get an average, without upsetting your sensitivities.
“I keep telling you that you need to take a basic calculus course...”
And I keep telling you to be less patronizing, and accept that people pointing out your mistakes might understand this better than you.
“Again, sin(x)dx is y-x not y/x.”
Nobody has mentioned anything about sines or taking their derivative. There is a temperature profile over a day that can be obtained by integration to give you the degree days for one day. Add a second day and you now have the sum of those 2 days – allowed because degree days are extensive. Add up any number of days and you get the sum of degree days over that period. Now you can divide by the number of days to get the average degree day per day – which is equivalent to the average quantity of temperature over that period. At no point has that invalidated your, “but you can’t add intensive properties” argument.
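A minimal sketch of that chain of steps, with invented hourly temperature profiles:

```python
# Invented hourly temperatures (deg C) for two days - illustrative only
day1 = [12.0 + h % 12 for h in range(24)]
day2 = [14.0 + h % 12 for h in range(24)]

def degree_days(hourly):
    # Crude rectangle-rule integral of T dt over one day (units: deg C * day)
    dt = 1.0 / 24.0  # each sample spans 1/24 of a day
    return sum(t * dt for t in hourly)

total = degree_days(day1) + degree_days(day2)  # extensive: degree-days sum
avg_temp = total / 2.0  # (deg C * day) / day = average deg C over the period

print(f"total = {total:.2f} degree-days, average = {avg_temp:.2f} C")
```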
“They are an example of how to handle intensive properties by multiplying by an extensive property. That allows you to sum them, and by that means get an average, without upsetting your sensitivities.”
So now you are agreeing that you can only find an average of extensive properties?
How then do you find the average of temperature which is an intensive property?
“average degree day per day”
That is an average of an extensive property. You just said so yourself.
How do you do that with temperature which is an intensive property?
If you can’t add intensive properties the how do you find an average?
“So now you are agreeing”
Only if “agreeing” in the Gorman dictionary means saying the opposite.
“How do you do that with temperature which is an intensive property?”
I’ve just shown you. Please pay attention.
Still no explanation as to why you think it’s OK for NIST to do this in TN1900.
You’ve shown me 1. you can’t read and 2. you believe “numbers is numbers” and don’t have to have physical meaning at all.
Numbers are numbers, there’s a clue in the fact that they both have the same spelling.
What you mean by having a physical reality is something that has been debated by philosophers for at least the last 2500 years, and isn’t something I’m particularly interested in.
Mathematics can be used to describe or model reality, but it can also describe things that may not exist in the “physical world” whatever you think that means.
None of this is relevant to the point, though. Where you are wrong is assuming that just because a step in a process leads to physically meaningless values, it must also mean the end result is physically meaningless.
If you want an example, just consider adding in quadrature. What meaning does the square of an uncertainty have? What meaning does the sum of these meaningless squares have? But then you take the square root, and you have the meaningful uncertainty of a sum.
The same applies to many applications of adding squares, including standard deviations. You insist that knowing the standard deviation of temperatures is meaningful and demand it’s quoted, yet the SD is just the square root of the variance, and the variance can have no physical meaning, unless you can explain what a square degree is.
And that’s not including the fact that in order to calculate the variance you need to know the mean temperature, which you are also claiming doesn’t exist.
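To make the quadrature example concrete, a minimal sketch with invented uncertainties:

```python
import math

# Invented standard uncertainties of two measurements (deg C)
u1, u2 = 0.3, 0.4

squares = [u1**2, u2**2]  # 0.09 and 0.16 "square degrees" - no physical meaning
u_combined = math.sqrt(sum(squares))  # yet the root of their sum is meaningful

print(u_combined)  # 0.5 - the standard uncertainty of the sum of the two readings
```

The intermediate squares mean nothing physically; the end result does.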
“If you want an example just consider adding in quadrature.”
You don’t even understand what adding in quadrature *means*. It’s pretty obvious that most of what you think you know is rote repetition of cherry picked knowledge. My guess is that you’ve never actually studied Taylor’s exposition on adding in quadrature.
It’s all based on Pythagoras and integral calculus – physically reality as described by geometry.
You’ve never bothered to study this any more than you’ve studied anything else. You are a cherry picking genius.
There’s a surprise. Rather than address the point I’m making, or consider the contradictions in his own arguments – Tim resorts to personal abuse.
The point is your claim that if an operation relies on physically meaningless values as intermediate steps, then the result must be meaningless. If you can accept physically meaningless squaring of values in a Pythagorean equation, you can also accept physically meaningless sums in an average.
“The thing being calculated is the area-weighted arithmetic mean, which is physically meaningful for surface temperature because the temperature at any point on the surface is the product of the local energy balance.”
BULLSHIT!
The energy balance is determined by the enthalpy and not by the temperature. Your claim is the same claim that climate science makes — if the mid-range temp in Las Vegas is the same as it is in Miami then the climates at the two locations are the same!
“This is the same principle as taking the mass-weighted mean “
Mass is an EXTENSIVE PROPERTY! You *can* average mass related properties.
Temperature is an INTENSIVE PROPERTY. You can *NOT* average intensive properties since you can’t sum them.
This is the same kind of climate science garbage that says you can infill temperatures of inland Ramona, CA with temps from coastal San Diego because they are so close together! Or you can infill temps in Holton, KS on the north side of the Kansas River valley from temps in Carbondale, KS on the south side of the Kansas River valley because they are close together. Or you can infill temps in Colorado Springs, CO with temps from Pikes Peak because they are so close together.
If the temp in Kansas City, KS is 50F and in Topeka, KS is 40F you can’t add the two to get 90F and then divide it by 2 in order to find an “average” temperature for northeast Kansas. But climate science thinks you can!
Not one single person has ever claimed this.
So is area.
No one does this, the average is calculated as an area-weighted mean – GISS uses station distance from the grid point as the weighting factor, for instance (see Hansen et al 1999)
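As a rough illustration of that kind of weighting (the linear taper to zero at 1200 km is my paraphrase of the Hansen et al. 1999 scheme; the stations and anomalies below are invented):

```python
# Invented stations: distance to grid point (km) and temperature anomaly (deg C)
stations = [
    (100.0, 0.8),
    (600.0, 0.5),
    (1100.0, 1.2),
]

def weight(d_km, cutoff=1200.0):
    # Weight falls off linearly with distance, reaching zero at the cutoff
    return max(0.0, 1.0 - d_km / cutoff)

num = sum(weight(d) * anom for d, anom in stations)
den = sum(weight(d) for d, _ in stations)

print(num / den)  # distance-weighted mean anomaly at the grid point
```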
Hansen, a really great reference for data fraud.
These averages tell you NOTHING about climate.
Not a single prediction made by Hansen, Mann, Gore, Jones, and all the rest of suspects has come to pass.
None. This should tell you something, but you voluntarily and consciously chose to believe them and propagate these hoaxes.
“Not one single person has ever claimed this.”
The totality of climate science claims this every time they combine either temperatures or anomalies from Las Vegas with temperatures or anomalies from Miami in order to obtain a “global” average temperature!
“So is area.”
Temperature is not area.
“No one does this, the average is calculated as an area-weighted mean – GISS uses station distance from the grid point as the weighting factor, for instance (see Hansen et al 1999)”
It’s STILL garbage!
Kansas City temps are influenced by the Missouri River valley (as well as population and UHI). Topeka temps are influenced by the Kansas River valley (and vastly different population and UHI). Their flows are different and their impacts are different. It’s not just a “distance” thing.
The temps in San Diego and Ramona are very different, along with humidities and pressures. But since they are close in distance they get weighted the same. Pikes Peak and Colorado Springs are close in distance but vastly different in altitude.
It’s the *heat content* that matters, not the temperature. Why doesn’t climate science start using enthalpy instead of temperature? The data exists, it just has to be used!
The thing being averaged between Miami and Las Vegas to obtain a mean is the temperature anomaly, which contains no information about the climatology of the region represented by the anomaly, only about how much the temperature has deviated from what is normal for that region, which is the metric of interest for observing climate change.
Nor is temperature mass, but an area-weighted mean has physical meaning in the same way that a mass-weighted mean does.
That is a desirable feature, because the goal is for the mean to represent the distance weighted temperatures of all records within the 2×2 degree grid cell. Stations further from the grid point should be weighted less heavily.
I’m not so certain the data does exist historically, while we do have consistent and abundant temperature data going back more than a century.
“The thing being averaged between Miami and Las Vegas to obtain a mean is the temperature anomaly, which contains no information about the climatology of the region represented by the anomaly, only about how much the temperature has deviated from what is normal for that region, which is the metric of interest for observing climate change.”
And yet the anomalies are being used to support CLIMATE CHANGE claims!
“Nor is temperature mass, but an area-weighted mean has physical meaning in the same way that a mass-weighted mean does.”
Only if you are weighting an EXTENSIVE property not an intensive property!
“That is a desirable feature, because the goal is for the mean to represent the distance weighted temperatures of all records within the 2×2 degree grid cell. Stations further from the grid point should be weighted less heavily.”
Again, this only works if you are doing it for an EXTENSIVE property.
“I’m not so certain the data does exist historically, while we do have consistent and abundant temperature data going back more than a century.”
It’s existed on a widespread basis since at least the 70’s. What do you think is needed to calculate enthalpy? That would give us 50 years worth of a baseline for enthalpy!
Yes, but you need to bold both the word CLIMATE and CHANGE above. A tendency of the anomaly in one direction over many years is climate change.
It’s a bit surprising to hear you claim that a mass-weighted mean has no physical meaning. If you took two bodies and connected them via a conductor and waited until their temperatures had stabilized, the resultant temperature would be equal to the mass-weighted mean of the original temperatures, assuming no heat loss.
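In symbols, with the caveat that this reduces to a mass-weighted mean only when both bodies have the same specific heat capacity $c$:

$$T_{\text{final}} = \frac{m_1 c\,T_1 + m_2 c\,T_2}{m_1 c + m_2 c} = \frac{m_1 T_1 + m_2 T_2}{m_1 + m_2}$$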
I might be wrong, but I think that 140 years > 50 years.
that you subscribe to bg-whatever’s goofy notions about physics.
He denies that adding a source of irradiance doubles the irradiance, yet believes that adding a temperature “source” doubles temperature.
His brain is highly damaged.
“A tendency of the anomaly in one direction over many years is climate change.”
You just said above that the anomaly doesn’t indicate climate. So how can it indicate climate change?
Pick one and stick with it!
“Yes, but you need to bold both the word CLIMATE and CHANGE above. A tendency of the anomaly in one direction over many years is climate change.”
ROFL!!
You:
Your conclusion? Anomalies determine climate!
Find someone that knows basic logic analysis and see if it makes sense to them!
A determines B
C doesn’t determine A
C determines B
Which is funny because that is exactly what my Fluke meter does. And it will average temperature in…gasp…both the spatial and temporal domains.
And my Dynamic Meteorology book by Holton and Hakim is one equation after another involving the average of intensive properties including but not limited to temperature.
And I’ll remind you that you were okay with using products derived from average temperatures (here and here). So I question your conviction in regards to your claim above.
So what?
None of this changes the fact that climate science air temperature numbers are meaningless.
And what do these area-weighted means really tell you about climate?
Very little.
Plus they have led climate science practitioners to insert fake numbers where there are none. A great example is the southern Indian Ocean.
Temperature is one of the defining attributes of climate.
The operative words in your statement are “is one of”.
Climate is not defined by temperature alone – UNLESS YOU ARE A CLIMATE SCIENTIST!
An area-weighted mean is *NOT* applicable for surface temperatures by itself. It does *NOT* consider any of the actual factors in the energy balance.
There *is* a reason why the temps on the north side of the Kansas River valley are different from the temps on the south side of the Kansas River valley. Yet a spatial weighting takes none of these into consideration at all.
*MASS* is an extensive property, you *can* mass-weight components when calculating the center of gravity for a system. Temps are an *intensive* property, you can’t weight them directly, you can only weight the components of the temp that are extensive, e.g. humidity, pressure, altitude, etc.
Just one more major flaw in how climate science handles physical science.
The point of taking the mean to begin with is to extract the common signal for the region, it is true that the regional mean is not suitable for analyzing local temperatures.
The comparison is between mass and area, the weighting factors, not mass and temperature. Both mass and area are extensive.
Well this is nonsense, but it is what the trendology clown show want to believe.
AlanJ is, of course, correct. Both mass (kg) and area (m^2) are extensive properties since partitioning mass and area result in a smaller amount for both.
You are insane. This semantical smoke screen you keep trying to hide behind only exposes more of your insanity.
Add a source of irradiance, the irradiance doubles.
Add a source of temperature, the temperature doubles?
“AlanJ is, of course, correct. Both mass (kg) and area (m^2) are extensive properties since partitioning mass and area result in a smaller amount for both.”
What in Pete’s name does this have to do with weighting temperature, an intensive value, by area, an extensive value?
Weighting an intensive value is meaningless because you can’t add intensive values, and multiplication is nothing more than repeated addition!
“The point of taking the mean to begin with is to extract the common signal for the region”
There is *NO* common signal. The river valley represents a barrier that changes the atmospheric signal at each location. The *exact* same thing applies regionally and globally.
“The comparison is between mass and area, the weighting factors, not mass and temperature. Both mass and area are extensive.”
You are missing the whole point. Is that intentional? If so there isn’t any use in arguing with you because you have a religious dogma bias.
TEMPERATURE IS NOT EXTENSIVE!
Weighting it by area is meaningless.
They will never acknowledge the truth.
“Not at all — this all started with the B&B clowns trying to get around temperature being an intensive quantity so the can justify the averaging gyrations of climate pseudoscience.”
Yep. Flux is intensive and temperature is extensive in climate science!
Upside-down world!
“It sounds like you’re just quibbling over semantics.”
The terms “flux” and “exitance” are NOT SEMANTICS! They are different things!
It is *never* a waste to challenge incorrect assertions.
“Bdgwx has followed a consistent usage of the terms they’ve employed.”
That tells me that you didn’t even bother to read the thread! The assertion was that flux isn’t an extensive property. That is *NOT* consistent use of the terms! It just means that bdgwx had no idea of what he was talking about. Even in the face of detailed explanation of what a flux is he adamantly refused to acknowledge that flux is an extensive property.
The assertion is that power per unit area is an intensive property, and that claim has been completely consistent throughout the thread. Your objection boils down to use of the term “radiant flux” to refer to power per unit area, which is indeed nothing more than a gripe about semantics.
And the claim is complete bullshit. This is NOT about “semantics”: climate science has just about zero basis in real physics, and the noise generated here proves this.
It IS about justifying the fraudulent air temperature numbers generated by climate pseudoscience practitioners.
That you choose to accept and propagate this hoax is a testament to your character.
They can’t even accept that an average is not a functional relationship with one y for one x. It is a statistical relationship describing a range of values – and it’s not even a complete statistical description of that range of values.
I simply can’t tell you how dismayed I am at the lack of knowledge of basic physical science protocols and procedures demonstrated by those on here defending CAGW. They aren’t even good statisticians because they apparently don’t understand that an average by itself is not a complete statistical description of a distribution. The fact that they can’t separate “true value +/- error” from “estimated value +/- uncertainty” just compounds their lack of understanding of the physical world.
I’m still waiting for one of them to tell me they have an average being stored somewhere in a refrigerator.
I cannot agree more, the technical levels of climate science and its adherents are just atrocious.
I can’t even keep up any more. For every absurd claim made 2 more absurd claims are made defending the first.
Here is a summary of the absurd claims made in this blog post that I bothered remembering.
1) W.m-2 is an extensive property.
2) Partitioning a surface either receiving or sending radiant energy through it changes the flux in units of W.m-2.
3) Energy (j) is the same thing as a flux (W.m-2).
4) Lumens (cd.sr) is the same thing as a flux (W.m-2).
5) Watts (W) is the same thing as a flux (W.m-2).
6) Fluke 62 uses a spectrometer as opposed to a thermopile.
7) A thermopile cannot be used to measure radiant exitance (W.m-2).
8) IR thermometers read lower close up and higher at a distance.
9) Integrals have nothing to do with an average.
10) The mean value theorem can only be used for “estimating” the area under a curve.
11) Blocking half of the opening of a flashlight causes the unblocked half to reduce its radiant exitance in W.m-2 in half as well.
12) The definition of an extensive property is based on whether you can add the quantity as opposed to whether the quantity is dependent on the extent of the system like what IUPAC formally defines it as.
13) Radiant exitance has units of j/s.
14) The 1LOT is not relevant in regards to whether you can add fluxes (W.m-2) from different surfaces.
I also think working definitions of terms like “radiant flux” and even their units are confusingly being changed from post to post. The conversation got started because I said the “radiant exitance” of the Sun is independent of its extent so it is intensive. I was wanting to stick with that term, but TG started referring to it as “radiant flux”, “radiation flux”, “irradiance” and sometimes just “flux”, which is confusing because many sources (like wikipedia, though even their use isn’t consistent) use it in the context of units of just watts (W). But, whatever, I’m adaptable; I’ll use any term as long as it is reasonable and you stick to the same definition throughout the conversation. However, I’ve seen TG use “radiant flux” in the context of W and W.m-2 in different posts so it eventually became very confusing as to what I was supposed to be responding to. That’s why I tried to always include the units W.m-2 when I was referring to the context in which the discussion originally started.
Irradiance is extensive.
Temperature is intensive.
Sux2BU
He’s NEVER going to accept that fact. Even if he has to lie about what we tried to explain to him.
Which is coupled with the fact that his knowledge of radiometry is nearly zero:
First of all, I never said this: instead I referenced the narrow wavelength range from the specs of his instrument (of which he apparently knows nothing) and stated that it is as if it were a spectrometer.
Second, Fluke doesn’t bother to say what detector they use, but it is quite likely a semiconductor one; the response time of a thermopile would be too slow for these IR guns, especially if they use the sighting laser to get emissivity values of the target.
And third, Fluke’s documentation is horrid.
Your incompetence is exposed, again.
You don’t know what the spectral response range of the instrument even means.
You know nothing about radiometry, but take heart, all the rest of the trendology clowns will high-five you regardless.
One of the hallmarks of a debate with the Gorman twins is that things invariably devolve into a series of separate, tangentially related threads that are impossible to keep track of. I think it goes back to their insatiable need to be contradictory no matter what is being said.
For whatever it’s worth I’ve followed your comments and your argument has been clear enough to me and seems to be correct.
Mostly because BG and bellcurveman can’t see their own technical ignorance and incompetence.
The cost of the truth is too high for them.
What happens is that GAT supporters make all kinds of idiotic assertions, like radiant FLUX being an intensive property, in order to rationalize their support of the GAT being physically meaningful.
Each of those idiotic assertions have to be refuted on their own – but of course the religious fanatic’s belief in the GAT keep them from ever understanding the refutations.
Like bdgwx *still* asserting that radiant FLUX is an intensive property.
Possibly my all-time favorite Pat Frank quote:
“Apart from being physically meaningless. There’s no physical theory to convert a proxy metric to Celsius (Statistics is no substitute for Physics).”
As long as you still agree that “radiant flux” is defined as a transport flux of radiation through a surface with units of W.m-2 then I definitely still assert that “radiant flux” is an intensive property.
And I think your own example (and my response to it) should be enough to convince you as well since partitioning the surface did not change the “radiant flux”. It was 2 W.m-2 when you considered the whole and 2 W.m-2 when you considered only half.
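A minimal sketch of that partitioning argument, assuming the flux is spatially uniform and using invented numbers:

```python
# Invented example: 2 W of radiant power passing uniformly through 1 m^2
power_w = 2.0
area_m2 = 1.0
flux = power_w / area_m2  # 2 W.m-2 for the whole surface

# Consider only half the surface: half the power passes through half the area
half_flux = (power_w / 2.0) / (area_m2 / 2.0)

print(flux, half_flux)  # 2.0 2.0 - the W.m-2 is unchanged by the partition
```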
What you “assert” is irrelevant, as well as wrong.
You aren’t as important as you think you are.
You can’t even get the issues in the list correct!
Your claim was that RADIANT FLUX is an intensive property. It doesn’t matter what the dimensions are. You are using the argumentative fallacy of Equivocation by trying to change the definition of the issue at hand with no one knowing it.
Lumens ARE a flux. Radiant flux IS a flux. A flux is a flux. Flux is an extensive property.
“11) Blocking half of the opening of a flashlight causes the unblocked half to reduce its radiant exitance in W.m-2 in half as well.”
That is *NOT* what anyone tried to tell you! What we said is that if you block half the radiant flux then you will get less irradiance lighting up what the flashlight is pointed at. *YOU* would have us believe that the unblocked half would increase its exitance in order to maintain the same irradiance at the target!
“10) The mean value theorem can only be used for “estimating” the area under a curve.”
That is ONE use for it. If you understood calculus you would understand that. When the secant of the arc goes to the limit of zero, there is no mid-point of the arc to be found with the mean value theorem!!
“12) The definition of an extensive property is based on whether you can add the quantity as opposed to whether the quantity is dependent on the extent of the system like what IUPAC formally defines it as.”
This only shows that you STILL don’t understand intensive and extensive properties. You are still trying to justify that radiant flux is an intensive property!
You can ADD properties that depend on the extent of the system. You can *NOT* add properties that do not depend on the extent of the system. Radiant flux, ALL FLUX, depends on the extent of the system. If you add a second light emitting diode to a flashlight that only has one you WILL get twice the radiant flux from the flashlight. The value of the radiant flux is an ADDITION of the flux from both LEDs. That means that radiant FLUX is an extensive property.
I’m not even going to try and address the rest of your idiocy. For that is what it is. It’s an attempt to try and convince people that you were correct that radiant FLUX is an intensive property not dependent on the extent of the system.
I’d speculate that at this point he might know he’s completely wrong, but has too much invested, the cost of the truth is too high.
“12) The definition of an extensive property is based on whether you can add the quantity as opposed to whether the quantity is dependent on the extent of the system like what IUPAC formally defines it as.”
Yes. This definition he pulled out is a short one-liner written by a committee, so vague that it’s worthless, but Custer here has planted the flag of the 7th Cavalry on it.
As I tried to tell him yesterday, it is physics that determines if something is extensive or not.
Yet another absurd statement. Even assuming you’ve switched to Wikipedia as your source of definitions like radiant flux (which even they don’t use consistently) there are many fluxes according to their definitions that are quite clearly intensive. I’ll also point out that Wikipedia even acknowledges the confusing nature of the terms. This is why I always define the terms I use in radiation discussions and include the units so that there is no confusion. I’ve seen on multiple occasions where confusion ensues because people are working with different definitions of terms especially when it comes to the topic of radiation and fluxes.
More of your innate incompetence: the units DO NOT determine if something is extensive or not.
Your knowledge of radiation and radiometry is nearly zero, all you can do is point to stuff on the internet you cherry-pick to argue with.
The confusion is *YOURS*, not wikipedia.
Flux is a FLOW. Find me a definition that says flux is something other than a flow. Not just an assertion that “there are many fluxes according to their definitions that are quite clearly intensive”. Provide a SPECIFIC ONE.
You used the term “radiant flux”. You didn’t define it as “radiant exitance”. You are *still* depending on your assertion that W/m^2 is always intensive, whether it is a flux or an exitance.
Again, the confusion is YOURS, no one else’s. You kept right on adamantly claiming that radiant FLUX is intensive even after the definition of “flux” was pointed out to you multiple times. All you had to do was admit that flux is a flow and is an extensive property – but you still can’t bring yourself to admit it!
And now you are just whining about being misunderstood.
It’s the same thing. All of the “radiant flux”, “radiation flux”, “radiant exitance”, “irradiance”, etc. terms that have been used in this discussion have been generally agreed to mean a transport flux of radiation through a surface with units of W.m-2. All of those are intensive properties. There have been a few posts of yours in which you changed the context to that of just watts (W). And I’ll remind you that this discussion got started with my statement about the Sun’s radiant exitance of 6.3e7 W.m-2. You are the one who started throwing around different examples and terms.
More lies from an incompetent rube. How you manage to get by in the real world is a deep mystery.
#3 is either a lie or the result of reading dysfunction:
How stupid is this, no one made this claim.
Frequency has SI units of s-1. Flux (at least a transfer flux like radiant exitance) has SI units of W.m-2. They are clearly different. Never mind that I never even mentioned frequency. You were the one who did that.
I know. That’s what I keep trying to tell you.
Just like I keep trying to tell you that energy (j) is not the same thing as radiant exitance (W.m-2).
Just like I keep trying to tell you that lumens (cd.sr) is not the same thing as radiant exitance (W.m-2).
Just like I keep trying to tell you that watts (W) is not the same thing as radiant exitance (W.m-2).
Energy (j), lumens (cd.sr), and watts (W) are all examples of extensive properties. That doesn’t mean that radiant exitance is also extensive.
Now you are just lying to cover your bare A$$.
Add a source, and the irradiance increases. A simple fact your word salad gyrations cannot change.
Similarly with density if you add mass the density increases. That doesn’t mean that density is extensive.
How do you add mass to water?
The question you ran away from previously.
By dissolving it in the water. Adding salt to water changes its density. That doesn’t mean density is extensive.
Not if its water you are adding.
YOU WEREN’T TALKING EXITANCE!
You were talking radiant FLUX!
You are trying to cover your backside because you know you got caught talking out your backside.
Same thing. We’re talking about transport fluxes here. Radiant exitance is the W.m-2 exiting a surface via radiation. Irradiance is the W.m-2 entering a surface via radiation. I use “radiant flux” as a broader term that includes either case, both of which are W.m-2. Radiant exitance is a radiant flux. And I’ve been clear about that from the beginning. The problem we run into here is that there are no official definitions of these terms so it is up to the participants in the conversation to be clear, which I have been.
The units do not decide if the quantity is intensive, you are wrong.
It is the physics.
I’ve never said units decide if the quantity is intensive. I’ve been repeatedly advocating for the IUPAC definition which both you and TG have vehemently challenged.
You don’t understand what you are citing:
intensive quantity (https://doi.org/10.1351/goldbook.I03074):
“Physical quantity whose magnitude is independent of the extent of the system.”
We’ve never challenged the definition.
All we’ve done is try to tell you that even though you have read the words in the definition you don’t understand the concept of intensive and extensive. It’s the same as your understanding of metrology – except I don’t think you even read the words associated with it.
The proof is that you keep trying to equate flux in W/m^2 with exitance in W/m^2. One is extensive and the other is intensive even though the units are the same. You said yourself that you use “radiant flux” in place of “radiant exitance”. They are not the same thing. You can’t use one for the other.
“Same thing.”
They are *NOT* the same thing! As KM points out, it isn’t the dimension that determines intensive or extensive, it’s the physics!
You use “radiant flux” because you don’t understand the basic physics of radiation.
Radiant exitance is *NOT* the same as radiant flux. Radiant exitance is a factor in the total value of the radiant flux from a substance but it is *NOT* the flux itself. Radiant exitance is like density and radiant flux is like mass. Density determines the total mass of an object based on its volume. Density is an intensive property. You can’t say object A has density X and object B has density Y and add them together because you don’t know their volumes. But you *can* take the mass of each object and add them together to get total mass. It’s the same with radiation. You can’t add the exitance of object A with the exitance of object B and get anything meaningful because you don’t know the radiating area of each. But you *can* add the total radiant flux from each object to get total radiant flux.
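A minimal sketch of that analogy, with invented emitters assumed to radiate uniformly:

```python
# Two invented emitters: exitance M (W.m-2, like density) and area A (m^2)
m_a, area_a = 100.0, 2.0  # object A
m_b, area_b = 300.0, 0.5  # object B

# Total radiant power P = M * A (W, like mass): these add meaningfully
p_total = m_a * area_a + m_b * area_b  # 200 W + 150 W = 350 W

# Adding the exitances (100 + 300 = 400 W.m-2) describes no physical surface,
# just as adding two densities describes no physical object.
print(p_total)
```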
You still don’t exhibit *any* understanding of basic physics. Stop trying to lecture us on it.
Custer still won’t admit that doubling light sources doubles irradiance.
He can’t even distinguish between sender and receiver. He thinks the size of the receiver determines what is being sent and not the sender.
That’s not even remotely close to what I’ve been saying. And there is no possible way anyone could justifiably construe anything I’ve said as insinuating it.
This argument that the size of the receiver determines what is being sent is YOUR argument and YOURS alone. Like I’ve said before, don’t expect me to defend YOUR arguments, especially when they are absurd. And this definitely counts as an absurd argument.
“That’s not even remotely close to what I’ve been saying. “
You can’t even admit it to yourself, can you?
Your claim that flux is intensive because changes in the receiver don’t change the amount of flux is like saying a scale determines how much mass is placed on it. The person placing the mass on the scale, the *sender*, determines the mass placed on the scale (the receiver), not the scale!
The fact that flux is an extensive property is based on the fact that flux adds at the sender, the origination point. It is *not* determined by the receiver. What the receiver determines is how much of the flux is intercepted by the receiver and *that* is an extensive property as well. Cut the receiver extent in half and you get half the amount received. Double the receiver extent and you double the amount received.
The receiver determines what is received. It doesn’t determine what is sent.
The sender determines what is sent. It doesn’t determine how much of what is sent is captured at the other end.
Both the sent and the received are extensive properties.
Nothing else makes any physical sense. But then, making physical sense isn’t really high on your list of priorities, is it?
Absolutely correct. In addition, heat flux and irradiance are almost never 100% spatially uniform, so all his hand waving about dividing the receiver area is meaningless.
Something to consider: why would he stake out this crazy claim that the units determine if something is extensive or not?
1—He doesn’t really care about the implications of an intensive quantity, as a stat-math head they are all the same to him and can be plugged into the mean formula regardless.
2—Consider the units of flux, W/m2, which he keeps repeating over and over and over: he in essence is claiming the units determine if something is extensive or not. And, what else has the same units? The answer is of course climate pseudoscience “forcings”, a term which always bothered me. Why do they use it?
By using a subtle semantical trick, he and they have transformed CO2 into a source of heat flux, which is absolute nonsense.
3—Notice that Custer never replied to my examples using irradiance sources and temperature “sources” to demonstrate the difference between the extensive quantity and the intensive quantity. Making temperature a source is nonsense because it is only a result of heat flow (which he might understand if he knew anything about thermodynamics).
But climate science (and bgw) thinks CO2 is a source of temperature!
“Just like I keep trying to tell you that energy (j) is not the same thing as radiant exitance (W.m-2).
Just like I keep trying to tell you that lumens (cd.sr) is not the same thing as radiant exitance (W.m-2).
Just like I keep trying to tell you that watts (W) is not the same thing as radiant exitance (W.m-2).
Energy (j), lumens (cd.sr), and watts (W) are all examples of extensive properties. That doesn’t mean that radiant exitance is also extensive.”
You tried to tell us that RADIANT FLUX is an intensive property. You never mentioned any other terms, especially exitance.
It’s not *OUR* problem what you didn’t know what you were talking about. It’s *YOUR* problem that you didn’t know what you were talking about.
And you simply can’t admit even now that radiant flux is an extensive property!
Assuming the working definition is that it is a transport flux of radiation with units of W.m-2 then it is intensive. You’ve flipped your definition so many times I have no idea what you are wanting it to be now.
I literally mentioned it at the very beginning. It’s what started your chain of one absurd claim after another.
“Assuming the working definition is that it is a transport flux of radiation with units of W.m-2 then it is intensive.”
Why won’t you listen? Flux is a FLOW. Flows can add! That makes it an extensive property. Just like sticking two hoses into a bucket causes more FLOW!
“I literally mentioned it at the very beginning. It’s what started your chain of one absurd claim after another.”
No, you didn’t. And you *still* aren’t. You are *still* trying to say that radiant FLUX is an intensive property!
From dictionary.com: “flux [ fluhks ], noun”
Flows can add! That means extensive. You can’t add intensive.
Irony-projection alert, level three.
TG: “That’s total malarky! W/m^2 ADD. Double the amount of substance radiating and you will get twice the w/m^2!”
And you still can’t or won’t admit this is the bare naked truth.
The Battle of the Little Bighorn.
The only one flipping definitions is YOU!
You even admitted earlier that the units definition doesn’t determine extensive vs intensive. Now you are back to saying that it does!
The circular do-si-do of climate metaphysics, where the ends determine the means, and the cart pulls the mule.
I’ve been using the term “radiant flux” as a transport flux of radiation through a surface with units of W.m-2 the whole time. I’ve been using “radiant exitance” as a type of “radiant flux” in which that flux is caused by a body radiating in accordance with Planck’s Law the whole time.
I’ve been using the term “intensive” as a physical quantity whose magnitude is independent of the extent of the system the whole time.
I have been steadfast in my usage of terms the whole time. And I stand behind what I have said. That is: an intensive property is one that is independent of the extent of the system, and radiant exitance is an intensive property. In fact, radiant flux of any kind is an intensive property.
That’s because it doesn’t. Again…the IUPAC definition says an intensive quantity is a physical quantity whose magnitude is independent of the extent of the system.
The reason I always try to include the units is because I know that if I don’t you’ll get confused as to what the discussion is about. Though as it stands you’re still confusing concepts.
No. I definitely am not. I’ll repeat…units do not determine whether a property is extensive or intensive.
However, units are essential to the definition of terms like “radiant flux” and “radiant exitance”, etc. and as such I always try to be clear in my meaning by including the units. That does not mean the inclusion of units makes something an extensive property.
Do you understand? Do you need me to walk you through what I’ve just said step by step? And no, I don’t mean that in a patronizing way. Those are genuine questions.
And yet the simple fact remains, despite all your word salad gyrations and definition games, two sources doubles the irradiance.
Irradiance is extensive.
Sux2BU
As I’ve been saying it does indeed double the irradiance.
Just like when you double the amount of water in a bucket it doubles the bucket’s density.
That doesn’t mean either irradiance or density is extensive.
You are quite mad.
A bucket is a very bad analogy.
A solar collector does NOT have a rim to contain irradiance, excess sunlight cannot spill over it.
Do you ever think about what you type?
Did you really just write that?
You boys all need to go and have a nice cool drink 🙂
Yep. To be pedantic it wouldn’t quite double because the bucket or any vessel itself has mass. For example let’s say our vessel is actually a standard steel drum weighing 20 kg. It has a volume of 0.21 m^3. That means an empty drum has a density of 20 kg / 0.21 m^3 = 95 kg.m-3. Filled halfway with water the density is 595 kg.m-3. Filled full with water the density is 1095 kg.m-3. So it’s not quite double.
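A quick check of that arithmetic, taking water at 1000 kg.m-3:

```python
drum_mass = 20.0        # kg, the empty drum
drum_volume = 0.21      # m^3
water_density = 1000.0  # kg.m-3, assumed

def system_density(fill_fraction):
    # Density of the drum-plus-water system at a given fill level
    water_mass = water_density * drum_volume * fill_fraction
    return (drum_mass + water_mass) / drum_volume

print(round(system_density(0.0)))  # ~95 kg.m-3, empty
print(round(system_density(0.5)))  # ~595 kg.m-3, half full
print(round(system_density(1.0)))  # ~1095 kg.m-3, full - not quite double
```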
Look at all the assumptions you made there 🙂
And we all know about ASS U ME.
They’re not assumptions. I looked up the volume and mass of steel drums and the density of water.
Your original statement was:
The example is:
That assumes:
The example works for those specifics.
What about the cases where:
?
</pedantry>
All Custer here could manage is this cherry-picked terse one-liner, then stamp his feet about how it “proves” his assertion.
He is *STILL* trying to say that a flux is an intensive property! He just can’t admit that it isn’t the case!
Nope!
This is not a rational person.
Custer:
“Therefore solar irradiance is an example of an intensive property“
You can’t run away from these.
There would be something even more seriously wrong if they used the same approach and got different numbers. The best that one can say is that they are consistently getting the wrong answer.
If one were to bin all the annual global temperatures and produce probability distribution curves, the result would be a skewed curve with the mean different from the mode and median. While it might be possible to state the mean with high precision, that precision means little when there is a high probability of measurements far from the mean. Might it be that the mean is changing more rapidly than the mode? Which is the best metric to detect global warming? Why is the mean used? Is the range changing with time? There are so many unanswered questions because of the naive assumption that “the science is settled.”
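As a toy illustration of how mean, median and mode diverge in a skewed distribution (the sample below is synthetic, not real temperature data):

```python
import random
import statistics

random.seed(0)
# Synthetic right-skewed sample - purely illustrative, not real temperatures
sample = [15.0 + random.gammavariate(2.0, 3.0) for _ in range(10000)]

print(statistics.mean(sample))    # pulled toward the long warm tail
print(statistics.median(sample))  # less affected by the tail
# For this gamma shape the mode sits below both the mean and the median,
# so quoting the mean alone hides the shape of the distribution.
```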
You can average the density of two objects. What does that average tell you? It doesn’t exist. You can’t measure it. You can’t use that average to estimate the density of a third object.
Yet that is what climate science wants to do with temperature. Add the temperature in Quito, Ecuador in winter with the temperature of St. Louis in summer and find an “average” temperature. That average temperature doesn’t exist. You can’t measure it. It won’t let you estimate the temperature in Moscow, Russia.
It brings into question what climate science thinks the GAT is a metric for.
You certainly CANNOT ever construct a meaningful “global” temperature from randomly-spaced, erratic, urban-affected surface sites, and random, often non existent ocean measurement.
They are so changeable that they cannot EVER be used as a measure of “global” temperature change over time.
That you think they can just shows the absolute gormlessness and non-functionality of your tiny little mind!
UAH is possibly a reasonable representation of global temperature trends.
UAH shows no human caused warming
Do you have any empirical evidence of atmospheric CO2 causing warming?
Or are you just blethering mindless anti-science mantra as usual.
No monkey..
It is the difference between fakery (global temp) and reality (local measurement)
“As we all shiver in the autumnal weather during what is meant to be summer and some of us have even turned our central heating back on or continued using our winter duvets, there is one certainty – in a few weeks time, the good folk at Met Office and the BBC will tell us that we’ve just had the “warmest June on record””
Another failed prediction. The Met Office show June as almost 3 degrees below the record set last year. The coldest June since 2015.
In fact, as I speculated at the start of June, this was the first year June was colder than May.
But go on –
“In the article I proposed three possible tricks which the Met Office and the BBC could use to justify their claim of June being “the hottest ever””
Short answer: the author thinks that pointing out that the global average was a record for June is somehow a trick to give the impression that the UK had just had a record June.
“Or will they instead try to fob us off by claiming that, although June in the U.K. was a disaster weatherwise, global temperatures (if such a thing can even be measured) were at record levels?”
The trouble is, it’s not just the Met Office that are saying this; it’s every global data set, including the good old UAH. Arguing that this is all some sort of conspiracy to make people in the UK think we’ve just had a local record is, to say the least, barking.
“Well, just as I predicted, we’ve been told that June was the hottest on record: From the Mail”
Strange how this starts off as an attack on the BBC and the Met Office, yet then quotes the Mail. And the Mail isn’t even using the Met Office for its global average; it’s using Copernicus.
The only time it quotes the MO is when it says:
“The Met Office”
I wonder, do you, like me, have to fund that nest of lunatics?
Do you?
UK is heavily affected by REALLY BAD urban surface sites.
DELIBERATELY bad, in that only 9 of the 58 surface sites added since 2000 meet class 1 or 2 WMO specification.
Add more airport and bad urban sites.. push up the calculated temperature… it’s a Met Office thing !!
Human Causation, finally found. 🙂
As well as that, the UK has had a period of increased sunshine.
Deleted – as I posted it in the wrong place.
“three possible tricks”
Which is quite an improvement on one trick….
“I’ve just completed Mike’s [Mann] Nature trick of adding in the real temps to each series for the last 20 years (i.e. from 1981 onwards) and from 1961 for Keith’s [Briffa] to hide the decline.” —Dr. Phil Jones, Director of the Climatic Research Unit, disclosed Climategate e-mail
You should check out Tommy Cooper.
Here in central Texas June was a bit cooler than normal. And so far July is doing the same.
Have you any evidence for that, sir?
UAH reported Texas was 1.5 to 2.5°C above the 1991–2020 average.
That is to say, perhaps you were a bit unlucky with the weather where you live. I’m not doubting your word.
Here is a bit more detail of surface temperatures, USA, June 2024:
Oh dear, someone let Nick at his wife’s red lippy again !!
These are not real absolute temperatures; they are simply ΔT’s from a baseline. You can’t look at this and say any one place is warmer than another. You can only say that it had a larger departure from its baseline.
You LUV those El Ninos, don’t you !
Still waiting for any evidence of human causation.
Slither away again, little worm.
That is a general look at departures from a baseline average. Is 1 to 1.5°C over the baseline normal for Texas? What is the anomaly when using a 1920–1950 baseline?
EVERY month since the climate hoax began has been the hottest on record, and every future month will be.
Just as every hurricane season is predicted to be above normal.
According to UAH June 2024 was not as hot as May which was not as hot as April. I don’t think July or any month in the near future (this year) will be as hot as April.
NB: By “hot” the poster here means a global-average ΔT calculation, not a real temperature.
You’re saying UAH does not provide temperature data?
That will come as a shock to the WUWT team, who have been promoting it for more than a decade.
Nope. UAH does NOT report absolute temperatures; it is actually just a proxy.
Why does crossposting Spencer’s posts to WUWT constitute an endorsement?
I agree, UAH and RSS, etc provide proxy data. Data that correspond very well with one another and with the surface data, within confidence margins.
You should ask WUWT why it promotes and updates the UAH data on its site. I just note that it does. If it doesn’t endorse it, then why do that?
For information and free exchange of ideas, unlike the climate communists who try to cork any dissent.
Lots of articles of interest on the climate scam get cross-posted; how is this an official endorsement?
UAH is probably the only series that gives even a remotely accurate idea of “global” temperature trends.
It shows very clearly that El Nino events have caused the small amount of warming since 1979.
It also shows that there is very little warming apart from those El Nino events.
That means that it shows NO HUMAN CAUSATION for the warming.
We are still waiting for you to show any evidence of warming by human released atmospheric CO2.
You continue to FAIL UTTERLY !
UAH does *NOT* provide temperature data. It provides radiance data that has been “converted” to temperature. UAH has no way to identify if that radiance data actually represents the temperature of anything because intervening media can interfere with the radiance while the actual temperature stays the same.
UAH and RSS are a metric for RADIANCE. It doesn’t matter what math tricks they use or what dimensions they give it. It remains radiance data and not temperature data.
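For readers wondering what that “conversion” amounts to: microwave sounders record spectral radiance, which is inverted to a “brightness temperature”. A minimal Python sketch of the idea, using the Rayleigh–Jeans approximation with illustrative numbers, not UAH’s actual channels or retrieval:

```python
# The gist of the "converted to temperature" step: a sounder records
# spectral radiance, which is inverted to a brightness temperature.
# Rayleigh-Jeans approximation; frequency and radiance are illustrative
# values, NOT UAH's actual channels or processing.
C = 2.99792458e8    # speed of light, m/s
K = 1.380649e-23    # Boltzmann constant, J/K
NU = 57.29e9        # an oxygen-band sounding frequency, Hz (illustrative)

def brightness_temp(radiance_si):
    """Invert I = 2*k*nu^2*T/c^2 for T (Rayleigh-Jeans regime)."""
    return radiance_si * C**2 / (2 * K * NU**2)

I_EXAMPLE = 2.52e-16  # W m^-2 Hz^-1 sr^-1, made-up example value
print(f"{brightness_temp(I_EXAMPLE):.0f} K")  # ~250 K
```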
It does, however, show reasonably reliable trend data and when warming happens,
i.e. at El Nino events.
Between those El Nino events… very little warming.
So UAH shows there is basically no warming from human CO2.
Do you have any empirical evidence that human CO2 causes warming?
Yes, the El Nino that caused the warming transient from mid-2023 to mid-2024 is gradually subsiding.
Now you need to explain that to your fellow AGW whingers.
No evidence of any human causation… so NOT AGW.
You simply don’t know that! You are looking at a Global Average ΔT, that is, a departure from a baseline temperature. If you don’t know the baseline temperatures of two different months, you have no way to know if one was actually “hotter” than the other.
May 2024 may have a baseline temp of 22°C (72°F) and a ΔT = 1.3°C => 23.3°C (74°F). June 2024 may have a baseline temp of 24°C (75°F) and a ΔT = 1.0°C => 25°C (77°F). That makes June warmer than May in absolute temperature!
The end result is that you CANNOT claim one month is hotter than another unless you also know the absolute baseline temperature. You may claim, as in my example, that May had a larger rise over its baseline than June, but that is all you can say.
There are a number of winter months that have higher ΔT’s than preceding months, but their monthly average temperatures are in no way actually warmer.
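The worked example above fits in a few lines of Python, using the same made-up baselines and anomalies (not real UAH figures):

```python
# A numeric check of the point above, using the poster's illustrative
# baselines and anomalies (made-up values, not real UAH baselines).
months = {
    "May 2024":  {"baseline_c": 22.0, "anomaly_c": 1.3},
    "June 2024": {"baseline_c": 24.0, "anomaly_c": 1.0},
}
for name, m in months.items():
    absolute = m["baseline_c"] + m["anomaly_c"]
    print(f"{name}: anomaly {m['anomaly_c']:+.1f} C, absolute {absolute:.1f} C")
# May has the larger anomaly (+1.3 vs +1.0) yet June is warmer in absolute
# terms (25.0 C vs 23.3 C): ranking by anomaly is not ranking by temperature.
```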
Climate science has bastardized much of physical science in order to propagandize the world. It would behoove you to actually take some physical science courses that require things like measurements, rates of change, hysteresis, and functional relationships. Analytical chemistry courses will teach you many of the things one needs to know when dealing with measurements.
Were this June not the hottest on record, Mr. Craig would not have felt compelled to troll and scroll through so many old tabloid headlines to distract us from the scientific substance of the present record.
His performance only serves to reinforce the reality of the continuing consequences of the Industrial Revolution.