Note: we’ve looked at solar to temperature correlations many times on WUWT, and our resident stats guru Willis Eschenbach has always found some flaw in the solar cycles to temperature analysis done by others. They say this in the paper:
It is often thought that the response to solar cycle is too weak at the surface to be detectable, and that even if a signal is claimed to have been found its statistical significance cannot be established. Using 150 yr of sea surface temperature data from 1854 to 2007 and an objective method, we found a robust signal of warming over solar max and cooling over solar min, with high statistical significance in the time domain.
We’ll see if this one pans out -Anthony

This paper demonstrates that a solar cycle response exists in global sea surface temperatures (SST). The abstract says “The signal is robust provided that the years near the Second World War are excluded, during which transitions from British ships to U.S. ships introduced warm bias in the SST.”
Signals of warming during solar maximum and cooling during solar minimum years are found in the global SST over the 14 cycles from 1854–2007. The magnitude of the solar cycle response averaged over the oceans between 60°S and 60°N is about 0.1°C of warming for each W/m2 variation of the total solar irradiance. This value was determined after excluding suspected bad data. The multidecadal trend of response to solar forcing is found to account for about a quarter of the observed warming in SST during the past 150 yr.
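As a rough sanity check on the paper's headline numbers, here is the arithmetic they imply. The TSI swing and total observed warming below are assumed round values for illustration, not figures taken from the paper's data:

```python
# Back-of-envelope arithmetic implied by the paper's stated numbers.
RESPONSE = 0.1           # degC of SST response per W/m^2 of TSI variation (paper's estimate)
TSI_SWING = 1.0          # assumed peak-to-peak TSI variation over an 11-yr cycle, W/m^2
OBSERVED_WARMING = 0.6   # assumed SST rise over the past ~150 yr, degC

cycle_swing = RESPONSE * TSI_SWING      # SST swing between solar max and solar min
solar_share = 0.25 * OBSERVED_WARMING   # "about a quarter" of the multidecadal trend

print(f"implied solar-cycle SST swing: ~{cycle_swing:.2f} degC")
print(f"implied solar share of 150-yr warming: ~{solar_share:.2f} degC")
```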
Some figures from the paper:


Solar Cycles in 150 Years of Global Sea Surface Temperature Data?
Search terms: “solar cycle response sea surface temperatures”
The Royal Meteorological Society:
An influence of the 11-year solar cycle on surface weather patterns over Europe has been proposed for many years.
https://rmets.onlinelibrary.wiley.com/doi/10.1002/qj.2782
The Met Office:
An overview of El Niño, La Niña and the Southern Oscillation.
https://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/gpc-outlooks/el-nino-la-nina/enso-description
And the narrative
“The sun may be the key to all life on Earth, but it is not the reason global temperatures have been rapidly rising in the last century and a half.”
…
The sun’s overall brightness has only contributed about 0.01 degrees Celsius of global warming since the 1850s, according to the National Oceanic and Atmospheric Administration. “Overall, the sun has really changed very little, except for these fluctuations every 11 years”
…
Greenhouse gas emissions from the extraction of fossil fuels are the primary cause for global warming, climate scientists say.
https://abcnews.go.com/US/sun-solely-responsible-rapid-global-warming-occurring-now/story?id=116634654
0.01C? Right.
Sea surface temperature averages depend on the assumed coverage: 60°N to 60°S gives about 17.8°C, but 50°N to 50°S gives 21.3°C, and 80°N to 70°S gives 14.1°C to 15.6°C. Sea surface temperature maxima are around 30°C; the minimum is the point at which seawater freezes, from 0°C to -1.8°C depending on salinity. See how an average above 20°C is inappropriate. Some of us tend to buy anything we’re told about climate without any thought.
I would love to see where all these sea surface temperatures were measured in 1854!
Even up to 2005, most of the Southern Ocean has had sporadic, if any, regular temperature measurements, and most of the measurements they have globally are just along the shipping lanes.
I call “bovex” on having “global” ocean data that could possibly be used for this sort of analysis.
0.01C accuracy is obviously crazy. The idea from the above article, “about a quarter of the observed warming in SST during the past 150 yr”, is only a little saner.
If one assumes their 1/4-to-3/4 observation is correct, one can only conclude that increased solar input warms the ocean surface and puts more water vapor into the atmosphere. The additional 3/4 portion is the result of absorption in the SW bands, i.e. solar heating of an atmosphere that now contains more water vapor and is also warmer, but does not yet cause more cloud formation. (More clouds reflect far more SW to outer space.)
At some point a few years later, the average sea surface temperature has warmed enough, due to the increased LWIR plus the bit due to the warmer average air temperature, that sufficiently more clouds can form to reflect more SW to outer space. And voila! You have a cycle that is apparently 25% caused by the 11-year solar TSI, but is really mostly due to the rate at which cloud cover changes over ocean basins, most notably the Pacific, which is huge and takes up nearly half of the Earth’s orb as viewed from the Sun once a day.
El Nino occurrences in the Pacific might be the measurable result of this hypothesis.
El Nino cycles show clearly on UAH trend. I’m sure if I analyze the other cycles, I will find exactly what I hypothesize because “confirmation bias” and “retrospective falsification” are very strong factors in any climate analyses. Sorta/s
We DON’T know that. We only have proxies and we only assume they provide accuracy since about 1930….
But but but simple inspection of the graph shows that TSI from the Sun lags SST. In other words, Sea Temps come first, and therefore Sea Temps CAUSE Solar Irradiance, not the other way around (unidirectional Arrow of Time and all that).
What they actually say is:
Wonder what accounts for the other >3/4 of the observed temperature trend?
SUVs, obviously.
Hot air from Climate “Scientists”, what else? 😀
No, they say nothing of the kind. They say that the signal they detect can be due to the 11-year solar cycle. They do not say, and cannot say, that variations on longer time scales are not due to the sun.
I literally quoted the article. But to save you following the link, here is a screenshot of the pdf where they actually state it:
Now about those SUVs etc….
You enjoy your SUV!
I don’t own one. I do have a Fiat Tipo and it’s a regular size – non huge American or SUV type – vehicle.
London has narrow streets….
Of course

I enjoy my V8 Commodore! 🙂
A large proportion of the people I know have big SUVs or big Trade type vehicles.
You know, people who actually do things, and build things, and not just sit around whinging like little babies about their sad little fantasies.
Hey, bnice: Which ‘proportion’.. the top or bottom? lol
Saw that, but from this analysis they can’t conclude that. The proper inference is that 3/4 of the trend is due to other causes, of which other solar influences can’t be excluded.
What other solar influences might those be?
Electromagnetic.
Eh?
The study evaluated the influence of the 11-year (Schwabe) sunspot cycle. There are many other solar cycles, such as the Gleissberg cycle (~70 years), and likely many others that we have not been around long enough to observe.
For the last 5000+ years, there has been a warm period every 1000 years or so, including the current one.
And your evidence for this is…..?
Climate data.
Answer: a huge mountain of proxies and historical data.
On longer timescales, the Gleissberg cycle describes a variation in the strength of the 11-year cycles and has a period of roughly 80 to 90 years. Another long-term variation is the Suess-de Vries cycle, which has a period of about 200 years. This cycle is prominent in records of cosmogenic isotopes, like carbon-14 in tree rings, which are influenced by the sun’s magnetic field over long periods.
https://biologyinsights.com/sunshine-cycles-how-the-suns-activity-affects-earth/
And not to forget the Eddy Cycle of around 1000 years.
A must read by John A. Eddy
The Sun, the Earth, and Near-Earth Space.
That’s great. Why hasn’t anybody pointed this out before?
We have. It’s just that you cliministas don’t want to hear anything that doesn’t support what you want to believe.
What caused the Minoan, Egyptian, Roman and Medieval warm periods? It wasn’t CO2.
No one is saying it was.
If it was something else, why should anyone assume that whatever happened before isn’t happening now? That’s the null hypothesis.
but you are saying it has caused this one? Got it.
Many possibilities. Changes in UV or other wavelengths, solar wind, magnetic field / GCRs, cloud interaction, etc.
Five down-votes (and counting) for quoting the article referenced in the post and enclosing a screenshot of it.
WUWT at its finest!
No. it’s not for quoting the article, it’s for your thought processes.
Climate change is natural. Just like it has been for 4.5 BILLION years.
Right, so the sun caused maybe up to slightly less than 25% of the warming observed in global average sea surface temperature (according to this paper).
What ‘natural’ forcing caused the other >75% of the observed warming?
Circulation f.e., horizontal and vertical, periodic and not periodic.
And is there any evidence for this?
Is there any evidence for other forcings?
Yes, this article’s about one of them.
That makes one.
Yes. Previous warm periods.
So now they know the SST’s back to 1850 0.25C? 😉
No, the precision comes from averaging (how many times, Oh Lord?!)
Precision comes from averaging
Did you engage your brain when you typed that?
He’s just regurgitating what he’s been taught to believe.
Average 30 random whole numbers and see how many decimal places your calculator/spreadsheet can generate. That’s just one month’s worth of data from one locality and doesn’t even count half degrees, which most thermometers had calibration for.
No one measured hundredths of a degree on a thermometer in the 1850s, but you can easily reach that level of precision when averaging the data they did have.
That this has to be continuously explained to adults is a remarkable thing.
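The statistical claim being argued over here can be sketched in a few lines: the standard error of a mean shrinks as 1/√n, so averaging many whole-degree readings can pin down the mean to well under a degree. The crucial (and disputed) assumption, flagged by other commenters, is that the errors are independent and unbiased; the true temperature and noise level below are made up for illustration:

```python
import random
import statistics

random.seed(42)
TRUE_TEMP = 15.37  # hypothetical "true" sea surface temperature, degC

def reading():
    """One whole-degree thermometer reading: truth plus noise, rounded to 1 C."""
    return round(TRUE_TEMP + random.gauss(0, 1.0))

results = {}
for n in (10, 100, 10000):
    sample = [reading() for _ in range(n)]
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
    results[n] = (mean, sem)
    print(f"n={n:6d}  mean={mean:8.3f}  SEM={sem:.3f}")
```

The SEM falls by about a factor of 10 for every 100-fold increase in readings. If the errors share a common bias (e.g., a fleet-wide change in bucket type), no amount of averaging removes it, which is the other side of this argument.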
I have seen stupid statements before, but TFN manages to top them all.
First off, no hand calculator calculates error directly. That is a separate calculation.
Secondly, do you actually believe that if you divide one whole number by another, the number of decimal digits equals the error estimate?
What are you talking about? It’s not a measure of error; it’s basic maths.
Wow, you are actually proud of the fact you know nothing about statistics.
It’s not basic math. It’s associated with measurements – that makes it basic metrology. A discipline which you know nothing about.
IT’S NOT BASIC MATHS. IT’S DETERMINING THE TEMPERATURE OF THE SEA 150 YEARS AGO WITHOUT KNOWING WHAT IT WAS.
Oh dear, FN thinks the number of decimals shown on a calculator means “accuracy” or “precision”
That is hilarious, no matter who you are!
I guess dividing 10 by 2 is less precise than dividing 10 by 3. The second has more digits, so it must be more accurate.
That’s what climate science believes!
And with that you have demonstrated that everything that you say can be ignored.
You know nothing about resolution limits or significant digit rules. You are a blackboard statistician where numbers is just numbers and have no physical meaning.
In SSTs, how does that account for some unknown annual, decadal or unusual local event which in normal circumstances would not be present and/or relevant?
Just average it away? Skating on thin ice dude.
I learned the difference between precision and accuracy years ago. Your turn…..
You can’t average measurements of different things and different sensors, and improve the accuracy.
It’s quite telling that you never find anyone with statistical experience amongst the so-called climate scientists.
They aren’t different things. They’re all sea surface temperatures. Of course you can average them.
They are different things, because they are not the same piece of the ocean.
I never said you can’t average them, what I stated was that such averaging does not improve accuracy.
Only someone with absolutely no knowledge of statistics would believe that you can use one thermometer to measure the temperature of the arctic ocean, and another sensor to measure the temperature of the Mediterranean, and that would improve the accuracy of both measurements.
Then again, I have never met a so called climate scientist who has any expertise in statistics. Heck, very few of them have any expertise in basic science in the first place.
Particularly when those measurements don’t actually exist for much of the ocean before the ARGO buoys were implemented.
Even after ARGO buoys, we still don’t know with any precision. We would need to increase the number of buoys by several hundred at least, before we could get the kind of accuracy these guys are claiming.
They are taken from the same locality. You could say that air temperatures from a particular region are invalid because air is in motion. Same nonsense.
Please learn the first rule of holes.
Are you actually saying that a reading in the Arctic and a reading in the Mediterranean are the same locality?
You are still ignoring the fact that there were only a couple of sensors reading the oceans in the 1850’s.
And the fact that air is in motion does mean that a fixed sensor will be measuring different molecules of air each time you take a reading.
That is both basic science and logic.
Do you really want to keep on embarrassing yourself?
It’s not nonsense. There may be a temperature gradient between two close points in the ocean but there is no “average” temperature. If you take a bucket from each point, one bucket at 30C and the other at 26C, when you dump them together in a bigger bucket you do *not* get 56C in the bigger bucket. You don’t even know where the mid-point of 28C might be if you can’t define the functional relationship of the gradient between them. The spatial mid-point might even be at 20C – and how would you *know*?
You can *NOT* average temperatures. Temperature is an intensive property.
If I give you a rock of 2 grams and a second rock of 4 grams you’ll have a total mass of 6 grams. Their average mass is 3 grams. I can give you two rocks of 3 grams each and you’ll have the same total mass.
If I give you a rock of 70degF and a second rock of 80degF you will *not* have a total temperature of 150degF in your hands. That means that an “average” value is meaningless. If I give you two rocks of 75degF each you’ll have totally different physical realization than one rock of 70degF and a second of 80degF. Totally different than what you have with mass.
That is why taking a temperature in Topeka of 90degF and one in Boston of 80degF does *not* give you an average temperature of 85degF.
If you have two points in the same medium, each with a different temperature, and there is a functional relationship defining the gradient between the two points you can find the mid-point temperature of the gradient. But that is *NOT* an average temperature, it is just the temperature at a different point in the medium.
There is no such thing as a “global average”. There is no common medium connecting the measurement points and there is no functional relationship defining the temperature gradient between any two points in the overall data set of measurements. There is *no* average temperature between Berryton, KS and Holton, KS. There is just the Berryton temperature and the Holton temperature. Because of the Kansas River valley between them the functional relationship of the temperature gradient between them is impossible to define either for the short term or the long term. And they are only 50 miles or so apart!
Climate science is a joke from the word go. Climate science doesn’t understand what an intensive property is. Climate science doesn’t understand what resolution is. Climate science doesn’t understand significant digit rules. Climate science thinks all measurement uncertainty is random, Gaussian, and cancels. Climate science doesn’t understand the need to weight random variables with different variances when concatenating them into a common data set. Climate science doesn’t understand that the residuals between a set of estimated measurement value and the least-squares trend line is *not* measurement uncertainty, it is merely a best-fit metric. Climate science doesn’t understand that the standard deviation of sample means is not measurement uncertainty, it is metric for *sampling* error, and adds to measurement uncertainty.
The truly sad thing is all of the climate science white papers that have been written over the past 70 years that demonstrate each of the issues listed above. The relationship between the vast majority of climate scientists and physical science is non-existent.
You can average temperatures, so long as you keep track of the margin of errors.
Quit pushing bad science and bad math. That’s TFN’s job.
You can calculate an average value but it is meaningless. It is always meaningless for intensive properties.
Again, if you have two masses, one of 2 grams and one of 4 grams, you can calculate an actual average value. Two objects of the same value as the average will result in the same amount of substance in your hand.
If you measure the temperature at two different points in space, there is no guarantee that the mid-point is the “average” of the two values. It *might* be the mid-point of a linear temperature gradient between the two points but it might not be that either.
The temperature at the peak of a mountain is *NOT* the average of the temperature at a point on the east side of the mountain and a point on the west side of the mountain, both equidistant from the peak. In fact, the temperature at the peak won’t be found any place in between the two points except at the peak. Because of the difference in insolation from the sun, the temperature gradient won’t even be the same on the east side of the mountain and on the west side of the mountain.
“Numbers is just numbers” is a statisticians view of the world but it is the meme needed to calculate an “average” temperature. It just doesn’t work in the real world.
WOW, you are really displaying your total lack of mathematical education now.
The “large samples” rules assumes “stationarity” of what is being measured, which temperatures measured at different places at random most certainly do not have.
Two not so precise measurements averaged increases precision? Where did you learn your BS, running through a class room of primary school? 😀 😀
Look up the word ‘precision’. It doesn’t necessarily equate to ‘accuracy’. Maybe English isn’t your first language, so I’m not judging.
My guess is that *YOU* have no idea of what precision is either. Or accuracy. Or the relationship between the two.
Precision is a measure of the reproducibility of measurements and has precisely NOTHING to do with averaging.
Averaging data that doesn’t even exist.
So accurate 😉
Even if data did exist, this type of data should be averaged in quadrature, not using the “large samples” fallacy that climate pseudo-scientists, in their ignorance, like to pretend they can use.
A few dozen readings allow one to gauge the entire oceans to 0.25C? Especially when the instruments themselves are only accurate to about 1.0C.
Yet another climinista who is proud of his ignorance in basic statistics.
Oh Lord, show us some physical science textbook references where precision can be increased through averaging!
Read this page.
https://phys.libretexts.org/Bookshelves/College_Physics/College_Physics_1e_(OpenStax)/01%3A_The_Nature_of_Science_and_Physics/1.03%3A_Accuracy_Precision_and_Significant_Figures
Precision is an uncertainty stated as a standard deviation. It is not reduced by averaging; it is a parameter of the averaging. The probability distribution won’t change regardless of the number of entries.
Read this page.
https://www.nist.gov/system/files/documents/2019/05/13/sop-29-assignment-of-uncertainty-20190506.pdf
Lastly, if you are sure you are correct, explain why you can’t use a yard stick to measure the thickness of a trace on a semiconductor. The logic of your assertion is that you just need to make enough measurements to obtain the precision you want.
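For what it's worth, the "averaged in quadrature" point raised upthread can be written out explicitly. A minimal GUM-style sketch of the uncertainty of a mean, assuming independent readings and using made-up numbers for the bucket example:

```python
import math

def mean_uncertainty(u_random, n, u_systematic=0.0):
    """Standard uncertainty of the mean of n independent readings.

    The random component combines in quadrature and shrinks as 1/sqrt(n);
    a systematic component shared by all readings does not shrink at all.
    """
    return math.sqrt((u_random / math.sqrt(n)) ** 2 + u_systematic ** 2)

# 100 bucket readings, each with +/-0.5 C random scatter and a shared 0.3 C bias:
u = mean_uncertainty(0.5, 100, u_systematic=0.3)
print(f"uncertainty of the mean: {u:.3f} C")  # dominated by the shared 0.3 C bias
```

With no systematic term this collapses to the familiar u/√n; with one, no amount of averaging gets below the bias, which is essentially what the two sides here are arguing past each other about.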
How can you say that without your face melting off?
‘Averaging’ = Average Result
‘Precision’ = Precise Result
Get it?
“So now they know the SST’s back to 1850”
No, the Climate Alarmists do not know what the sea surface temperatures were in 1850. It wasn’t until about World War II that the sea surface temperatures were recorded on a large scale.
Before World War II, they are just guessing at what the sea surface temperatures were.
That’s how you get bogus, bastardized Hockey Stick global charts, by using a lot of sea surface temperature guesses to skew the trendline.
So what is the trend in global sea surface temperatures since WWII, since you trust only them?
Given the quantity and quality of the readings taken before that point, there is no reason to trust those measurements to provide an accurate reading for the temperature of the entire ocean’s surface.
Southern oceans did not have sufficient coverage before 2005 to even consider any nonsense data averaging.
There are papers on how to calculate error intervals based on the number of sensors vs. the volume being measured. I wish I could relocate a few of them.
Errors based on the accuracy of your sensors are added to this number.
And there’s the deflection from having his azz handed to him about precision and accuracy.
In 4.5 billion years the Sun has orbited galactic central ~20 times. That means the Earth has passed through the (4) major spiral arms ~100 times.
Of course, that has no effect…
Why would it be having an effect at this time in particular and what is the evidence to support that?
Clearly you can’t be bothered to find out. What happened to basic curiosity?
It gave way to demanding answer of others.
Here’s a few pointers:
The CLOUD experiment
Svensmark
Shaviv
When the CLOUD results came in CERN scientists were told not to discuss them on the grounds that it is a politically live issue…
Svensmark’s cosmic ray theory is already in tatters. He said increased cosmic rays mean increased cloud cover, which means cooling.
Cosmic rays reaching earth have increased since he came up with this idea, but warming has continued.
Any evidence for this?
The memory hole is not in the imagination. It is – as the Wuhan incident showed all too clearly – very real.
I don’t recall you showing any evidence of anything other than your anti-human bias
CERN: “Don’t interpret the CLOUD experiment results”
CERN ‘gags’ physicists in cosmic ray climate experiment
https://www.theregister.com/2011/07/18/cern_cosmic_ray_gag/
“CERN Director General Rolf-Dieter Heuer told Welt Online that the scientists should refrain from drawing conclusions from the latest experiment.
“I have asked the colleagues to present the results clearly, but not to interpret them,” reports veteran science editor Nigel Calder on his blog. Why?
Because, Heuer says, “That would go immediately into the highly political arena of the climate change debate. “
How can there be a debate with a consensus?
Seems that statement demonstrates the science is not settled.
And of course TFN knows as a fact that whenever one of his high priests screams out BLASPHEMY, the object of their scorn has been completely refuted.
Celestial Driver of Phanerozoic Climate?
ABSTRACT: Long periodic geodynamic processes with durations between 150 and 600 Million years appear to be in phase with similar galactic cycles, caused by the path of the solar system through the spiral arms of the Milky Way. This path is assumed by some authors to cause climate change due to cosmic ray fluctuations, affecting the cloud formation and the related albedo of the Earth, which periodically lead to glaciations every 150 Ma.
Btw, the press release for that paper caused Rahmstorf to turn round and ask his community for help to refute the paper, as it could harm “the Cause”.
Aren’t the arms orbiting along with the sun?
Do some reading… Density wave theory
Guaranteed to learn something.
I should have been clearer. I wasn’t suggesting everything moved at the same rate, just that it’s not as simple as multiplying the number of rotations by 4. In any event the science seems to be far from settled, but the estimates I’ve seen suggest we cross an arm around every 200 million years or less. That’s a lot more crossings than your 100 times.
The galactic year is… 225 million Earth years.
And we cross a galactic arm every 140 – 200 million years. Hence we do not pass through 4 arms every galactic year, and have not made the crossing ≈ 100 times.
And looking back I made a major mistake on suggesting it was more time than you said. It’s actually a lot fewer.
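The arithmetic in this sub-thread is easy to check directly, taking the figures quoted above (a 225-million-year galactic year and an arm crossing every 140-200 million years):

```python
AGE_SOLAR_SYSTEM = 4.5e9   # years
GALACTIC_YEAR = 225e6      # years per orbit of the galactic centre
CROSSING_MIN, CROSSING_MAX = 140e6, 200e6  # years between spiral-arm crossings

orbits = AGE_SOLAR_SYSTEM / GALACTIC_YEAR
crossings_low = AGE_SOLAR_SYSTEM / CROSSING_MAX
crossings_high = AGE_SOLAR_SYSTEM / CROSSING_MIN

print(f"orbits completed: ~{orbits:.0f}")  # ~20
print(f"arm crossings: ~{crossings_low:.0f} to ~{crossings_high:.0f}")
```

So roughly 20 orbits but only somewhere in the low tens of arm crossings: far fewer than the ~100 claimed at the top of the thread, consistent with the correction here.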
No
No.
Never ever
Let’s see . . . 4.5 billion years divided by 100 is about . . . yeah . . . 45 million years between spiral arm passages.
The above article is concerned with cyclic variations over the last 150 years, so yeah, about 0.0003% of time between two successive spiral arm passages.
Of course, that basically means “no effect” over 150 years.
This is perhaps the most idiotic “illustration” I’ve seen of the very many such posted on the Web.
Whoever “illustrated” the video simply does not understand that the elliptical orbits of the planets around the Sun are gravitationally bound to follow the mean velocity of the Earth’s motion with respect to the inertial frame of the Milky Way.
In other words, the Sun’s gravity overwhelms that of the exterior galaxy mass and forces the planets to stay (for all intents and purposes) in their elliptical orbits around the Sun even as the solar system travels around the galactic center once every 225-250 million years and during each 60-70 million year interval associated with a full cycle up and down through the galactic plane.
The video you presented would have people believe the planets of our solar system are not tightly gravitationally bound to the Sun.
In addition, the video’s illustration that the planets can freely and independently spiral out of the current plane of the ecliptic displays an absence of understanding basic astrodynamics, specifically lack of how angular momentum of orbiting bodies is conserved absent application of an external force/torque.
Bottom line the video is RIDICULOUS.
The sun drags every planet, asteroid, and moon along as it orbits the galaxy. The physical path is in three dimensions.
That the illustration was not sophisticated enough to show totally accurate orbits doesn’t mean what it is portraying is ridiculous.
The point is that everything in the solar system is constantly experiencing different conditions as they travel thru space.
“The sun drags every planet, asteroid, and moon along as it orbits the galaxy.”
Nonsense. Every planet is orbiting the galaxy along with the sun. The sun’s gravity causes the planets to orbit the sun, but it isn’t causing the planets to orbit the galaxy.
I said:
The sun drags every planet, asteroid, and moon along as it orbits the galaxy.
If the sun disappeared, do you think the trajectories of the planets wouldn’t change? If they do, then the sun is dragging them along with it.
“I said”
I know what you said. I quoted it verbatim. There’s no need to keep repeating it.
“If the sun disappeared, do you think the trajectories of the planets wouldn’t change?”
They would move in straight lines relative to the solar system. They would still orbit the galaxy.
“If they do, then the sun is dragging them along with it.”
Have you never heard of Galileo or Newton? Motion is relative. If you travel in car at a constant velocity you are not being dragged along by the car, something that will be apparent if the car hits a brick wall.
How dumb. If you are not being dragged by the car, how exactly did you reach the speed of the car? You’ve obviously never been in a high-horsepower car or airplane. If you have, what do you call that pressure against your back? It is dragging you! Unless you are frictionless, even at a constant velocity, the car is dragging you.
Of course they would continue orbiting the center of the galaxy, at least for awhile. However, without the sun, they would continue moving in a straight line at whatever angle they were moving at when the sun disappeared. There would be nothing keeping them in a circular motion. What happens to a ball at the end of a string when the string breaks?
You’ve never studied vector calculus have you?
“If you are not being dragged by the car, how exactly did you reach the speed of the car?”
Look Up the word acceleration. You are being “dragged” by the car as it accelerates. You feel that as being pushed back into the seat. Once you have reached a constant velocity you are no longer being dragged.
“Unless you are frictionless, even at a constant velocity, the car is dragging you.”
Try jumping out of the car. What happens to your speed relative to the ground when you are no longer being dragged by the car?
“Of course they would continue orbiting the center of the galaxy, at least for awhile.”
How if the sun is no longer dragging them?
“However, without the sun, they would continue moving in a straight line at whatever angle they were moving at when the sun disappeared.”
You are claiming that only the sun is affected by the gravity of the galaxy. You are saying Newton was wrong.
Totally incorrect example! You’ve never taken physics have you? Have you ever heard of angular velocity?
The sun and earth are no different than a ball at the end of a string. How does the string keep the ball moving in a circular motion? It imparts a constant centripetal acceleration to the ball. At the moment the string breaks, straight-line motion begins.
Why didn’t you answer what occurs to the ball when the string breaks?
“At the moment the string breaks, straight line motion begins.”
Relative to you. That’s what I said about the sun disappearing. The planets move in a straight line relative to the solar system. (Ignoring the pull of other planets for the sake of simplicity.)
But what happens if you were not stationary relative to some other reference frame when the string snaps? Say you were on a planet moving around the solar system at 100000 km/h. If the string were dragging the ball, do you think the ball just stops relative to the solar system, or does it continue moving at 100000 km/h plus the speed it was traveling relative to you?
You are dragging the ball around your head; you are not dragging it round the solar system or around the galaxy. Its own inertia does that.
Exactly as I said. That is the proof of my claim that the sun “drags” the planets along with it.
The rest of your hypotheticals are far beyond what I said.
Then you need to define what you mean by “drag”. To me it implies that the sun is pulling the planets along, and that they wouldn’t move round the galaxy if they were not being dragged by the sun.
Drag, pull, exert a gravity force that modifies the straight path it would follow, due to Newton’s 1st law of motion, into a circular motion.
Remember, the earth does not move in a flat x-y plane. It’s motion consists of x-y-z components.
You could say the sun drags the planets in the sense that its gravity is currently pulling them towards it. Personally, though, I find that a confusing use of the word, given that orbital drag usually means drag from the atmosphere.
But that is different from saying the sun drags the planets around the galaxy. That gives a misleading impression that the sun is causing the orbit of the planets round the galaxy, when in fact Newton’s 1st and the “drag” from the galactic mass are all that is required to explain their orbits.
As I said to ToldYouSo, what I think you may mean is that the sun’s gravity keeps the planets close together as they go round the galaxy, rather than drifting apart. But I still think it’s misleading to equate that with it dragging the planets around the galaxy.
Maybe I’m just being pedantic, but a lot of these videos are arguing that the laws of physics are wrong and that they prove some mystical pattern to the universe.
Not true. If the Sun and its gravity disappeared instantly, each of the planets would still be experiencing the same gravitational force that the Milky Way created on the Sun, which currently binds it to its 225-250 million year orbital period around the galactic center.
However, the planets would also have their tangential and radial velocities associated with their individual elliptical orbits around the Sun, so each would “sling” away from its elliptical (i.e., curved) path on a different, independent vector (ignoring the minuscule inter-planetary gravitational effects) following the Sun’s hypothetical disappearance. Somebody other than me would have to work out whether or not that residual velocity for each planet would be sufficient to place it on an escape trajectory exiting the Milky Way, assuming the most favorable vector with respect to the Milky Way’s inertial frame-of-reference for such escape.
Not really true, if you understand basic physics. The Milky Way, while incredibly massive, is vastly distant from Earth. The force of gravity diminishes with the square of the distance between objects, so the sheer distance significantly weakens the Milky Way’s pull on Earth. Although technically the Milky Way exerts a gravitational pull on everything within it, including each of the solar system planets, it is negligible compared to the Sun’s attraction on the planets.
The simplest way to see this is true is to consider that Kepler’s Laws, based solely on the gravitational attraction of the Sun, very accurately describe the nearly perfect elliptical paths the planets have in their orbits. If the Milky Way galaxy had any significant gravitational attraction at the orbital distance of our solar system from its center-of-mass, then there would be a noticeable distortion in planetary orbits when the planets were somewhat between the Sun and galactic center and when they were somewhat outboard of the Sun and the galactic center. There simply is no such orbital distortion of any significance.
Therefore, the planets “orbit the galaxy” only because they are strongly gravitationally bound to the Sun, which in turn is very weakly bound to the center-of-mass of the Milky Way.
How weak is “weakly bound”? We can estimate that from Newton’s law of gravitation, rearranged to the expression that a = F/m = GM/r^2, where M is the overall mass of the Milky Way inboard of the Sun’s average orbital distance and r is the average distance from the Sun to the galactic center.
In MKS units, we have:
G = 6.67e-11 N⋅m²/kg² = 6.67e-11 m^3/(kg*sec^2)
M = 1.5e12 solar masses total (this includes the estimated dark matter . . . for simplicity and conservatism, I’ll just assume all of this is inboard of the Sun’s orbital radius) = (1.52e12)*(1.99e30) = 3.02e42 kg
r = 26,000 light-years = 2.46e20 m
1 N = 1 kg*m/sec^2
1 g = 9.8 m/sec^2
Therefore, a = (6.67e-11)*(3.02e42 kg)/(2.46e20)^2 = 3.3e-9 m/sec^2 = 3.4e-10 g, or less than one-billionth of one Earth gravity. Yeah, that’s pretty weakly bound!
To put that in perspective, the Sun’s gravitational attraction at the orbital distance of planet Neptune is about 6.7e-7 g, or roughly 2,000 times stronger.
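The back-of-envelope numbers above are easy to cross-check. Here is a minimal Python sketch of the same a = GM/r^2 estimate, using the rounded constants from the comment; the Neptune comparison is my own extension, not from the paper:

```python
# Cross-check of the a = GM/r^2 estimates above (rounded constants).
G = 6.67e-11       # gravitational constant, m^3/(kg*s^2)
M_SUN = 1.99e30    # one solar mass, kg
G_EARTH = 9.8      # standard gravity, m/s^2
LY = 9.461e15      # metres per light-year
AU = 1.496e11      # metres per astronomical unit

# Milky Way's pull at the Sun's orbital radius (all 1.5e12 solar masses
# conservatively assumed inboard of the Sun's orbit)
M_galaxy = 1.5e12 * M_SUN
r_galaxy = 26_000 * LY
a_galaxy = G * M_galaxy / r_galaxy**2     # ~3.3e-9 m/s^2

# The Sun's pull at Neptune's orbital distance, for comparison
r_neptune = 30.06 * AU
a_neptune = G * M_SUN / r_neptune**2      # ~6.6e-6 m/s^2

print(f"galactic pull at the Sun: {a_galaxy:.1e} m/s^2 = {a_galaxy / G_EARTH:.1e} g")
print(f"solar pull at Neptune:    {a_neptune:.1e} m/s^2 = {a_neptune / G_EARTH:.1e} g")
print(f"ratio (Neptune/galactic): {a_neptune / a_galaxy:.0f}")
```

Run as-is, it reproduces the ~3.3e-9 m/s^2 galactic figure; note it puts the solar pull at Neptune on the order of a few thousand times (not millions of times) the galactic pull.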
“Although technically the Milky Way exerts a gravitational pull on everything within it, including each of the solar system planets, it is negligible compared to Sun’s attraction on the planets.”
Of course, that’s why planets orbit the sun.
“If the Milky Way galaxy had any significant gravitational attraction at the orbital distance of our solar system from its center-of-mass, then there would a noticeable distortion in planetary orbits when the planets were somewhat between the Sun and galactic center and when they were somewhat outboard of the Sun and the galactic center.”
Yes, because the orbit of each planet is tiny compared to that of the galaxy.
“Therefore, the planets “orbit the galaxy” only because they are strongly gravitationally bound to the Sun”
And that’s where I disagree. Each planet has to have the same amount of acceleration from the Galactic mass as the sun. It’s the same with the moon and the earth and the sun. Both the moon and the earth have the same acceleration towards the sun, it’s just that they are also orbiting each other. It would make no sense to say that the moon couldn’t orbit the sun if the earth didn’t exist.
“or less than one-billionth of one Earth gravity. Yeah, that’s pretty weakly bound!”
But still enough to allow an object to orbit the galaxy. My point is that gravity doesn’t care if that object is the sun, the earth or a feather; the orbit is the same. It’s probably more correct to say the barycentre of the solar system orbits the galaxy, and that is close to or inside the sun, but I still can’t see how that translates to the sun dragging the planets around the galaxy.
Ahhh . . . I can see the source of your confusion about what’s orbiting what. On the one hand (as in your above statements), you consider that the planets orbit the Sun, but then you turn around and mainly argue that, no, the planets really do orbit the galaxy (by which I infer you mean properly orbit the Milky Way’s center-of-mass). You can’t have it both ways.
But there is a straightforward analysis to see which description of the true orbits of the solar system planets must be correct. Kepler’s laws for orbits around a central mass imply that a planet’s average orbital velocity decreases as its semi-major axis increases (v ∝ 1/√a for near-circular orbits). So, do the solar system planets have orbital semi-major axes in the range of our solar system’s dimensions:
Planet — Semi-major Axis (AU)
Mercury — 0.39
Venus — 0.72
Earth — 1.00
Mars — 1.52
Jupiter — 5.20
Saturn— 9.54
Uranus —19.22
Neptune — 30.06
or instead do they all have pretty much the same semi-major axes for their orbits as determined by their average distance from the galactic center: some 1.7 x 10^9 AU away (noting that even a 30 AU variation is insignificant against such a large value)?
Since the planets have varying average orbital velocities relative to the Sun (from a low of about 5.4 km/sec for Neptune to a high of about 46 km/sec for Mercury), superimposing those velocities on the Sun’s average orbital velocity relative to the galactic center (about 225 km/sec) gives planetary speeds relative to the galactic center ranging from 225 +/- 5.4 km/sec to 225 +/- 46 km/sec worst-case (i.e., for the planets revolving around the Sun with their Sun-referenced semi-major axes aligned with the Sun’s tangential direction of motion around the galactic center), with the “+/-“ reflecting that orbital motion descending toward periapsis is (relatively) opposite in direction to motion ascending toward apoapsis in an inertial frame of reference. So, IF the planets were truly (not just apparently) orbiting the galactic center and not the Sun, we would have maximum average speed variations among them of about (5.4+46) = 51 km/sec, or about 23% of 225 km/sec, relative to the galactic center. Yet the semi-major axes of ALL of these would be essentially fixed at a constant value (variation of less than 1 part in 10^7) due to their extreme distance from the galactic center. These two situations combined are totally incompatible with Kepler’s laws.
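One way to make the Kepler’s-law argument concrete is to compute circular-orbit speeds v = sqrt(GM/a) from the semi-major axes tabulated above. The GM value and the circular-orbit simplification are my own assumptions for this sketch, not something stated in the thread:

```python
import math

G_M_SUN = 1.327e20   # Sun's standard gravitational parameter GM, m^3/s^2
AU = 1.496e11        # metres per astronomical unit

# Semi-major axes from the table above (AU)
planets = {"Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
           "Jupiter": 5.20, "Saturn": 9.54, "Uranus": 19.22, "Neptune": 30.06}

# Circular-orbit approximation: v = sqrt(GM/a).  If the planets orbit the Sun,
# speeds should fall off as 1/sqrt(a); if they all truly orbited the distant
# galactic centre instead, their speeds would have to be nearly identical.
speeds_km_s = {name: math.sqrt(G_M_SUN / (a_au * AU)) / 1000.0
               for name, a_au in planets.items()}

for name, v in speeds_km_s.items():
    print(f"{name:8s} a = {planets[name]:6.2f} AU   v ~ {v:4.1f} km/s")
```

The sketch recovers the roughly 46 km/sec (Mercury) to 5.4 km/sec (Neptune) spread quoted above, which is exactly the 1/sqrt(a) dependence Kepler’s laws require of bodies orbiting the Sun.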
Thus, in a very real physical sense—and respecting how “appearances can be deceiving”—one must conclude that the planets are truly in orbit around the Sun and only appear to orbit the Milky Way center due to being gravitationally bound to the Sun and its motion relative to the galactic center.
“You can’t have it both ways.”
I think this is going to depend on exactly how you define orbit. I think you can say that the moon orbits the Earth and also indirectly orbits the sun, and all orbit the galaxy.
What I think you can say is that the gravity of the sun keeps all the planets close together as we orbit the galaxy. If the sun had little mass the planets would probably drift apart as they orbit the galactic centre. But I just think it’s misleading to say that means the sun is dragging the planets around the galaxy.
“or instead do they all have pretty much the same semi-major axes for their orbits as determined by their average distance from the galactic center”
I’m not sure what point you are making here. There are two different orbits, one round the sun and the other round the galaxy.
“these two situations combined are totally incompatible with Kepler’s laws.”
Kepler’s laws are simplifications of reality. They only work exactly when you have two bodies. The earth wobbles as it orbits the sun due to orbiting the moon. If the moon were the same mass as the earth, both would be revolving around a point midway between the two. Would you then say neither was orbiting the sun?
That the illustration was not sophisticated enough to show even approximate planetary orbits does not preclude what it portrays from being ridiculous. Any competent student of astronomy, orbital mechanics, or celestial navigation would realize that.
Hint: even crudely done with a 2D representation of 3D motion, that animation clearly portrays the planets of the solar system breaking out of their current, generally coplanar alignment (the ecliptic) and then moving in non-elliptic arcs at widely varying distances from the Sun, in direct ignorance of the conservation of the angular momentum vector for each planet as each is gravitationally bound to orbit the Sun. Conservation of angular momentum is key to orbital mechanics.
The same thing that caused the other warm periods of the last 5000 years.
Which is what?
Don’t know, don’t have to know.
The Null hypothesis is that whatever caused the earlier warm periods is also causing the current warm period.
It is up to you to disprove the Null hypothesis before you can claim that it must be caused by CO2.
Said the ostrich.
If you know, please enlighten the rest of us.
Are you taking the position that unless one can offer a cause for these warmings, we must assume they didn’t happen? Even if the data clearly shows them appearing.
You could just “invent” a fallacy and use that. !
Or write another Grimm Bros fairy-tale. !!
It’s the sun, stupid!
The paper this post references just said it wasn’t, stupid!
The paper is based on non-existent or faked data.
It is totally BOGUS and meaningless.
The paper is wrong.
The paper only looked at the 11 year cycle. There are other cycles.
See comments:
https://wattsupwiththat.com/2025/08/14/solar-cycles-in-150-years-of-global-sea-surface-temperature-data/#comment-4105809
and
https://wattsupwiththat.com/2025/08/14/solar-cycles-in-150-years-of-global-sea-surface-temperature-data/#comment-4105732
They’re all references to solar cycles, which the above paper debunks, or at least minimises.
So what are these ‘other’ mysterious forcings?
Wow, one solar cycle by itself is insufficient to explain all of the warming, therefore all other solar cycles that weren’t discussed in the paper must be ignored.
You really do hate doing real science, don’t you.
The paper is based on bogus and non-existent data.
It debunks absolutely nothing.
You missed the atmospheric water vapor, SST, and cloud cover bit?
This paper is totally BOGUS….
The SST data does not exist for much of the ocean for much of the period they pretend to do their analysis.
Please show us where the SST data was measured in say 1880, or 1920.
Even up to 2005 there was very little coverage of the SH.
The paper is wrong. The sun causes close to 100% of warming (please see the MWP, RWP, etc.) directly or indirectly.
The rest is not worth the conjecture.
Follow the money.
Indeed.
Or, “look over there”!
So those making billions off of the CO2 scam are all pure as the driven snow?
Who is making “billions off of the CO2 scam”?
Al Gore for one.
Mickey Mann for another
Is Michael Mann a billionaire?
What evidence is there for this?
Are you really as stupid as your posts make you sound?
Nobody said everybody involved in the global warming scam was making billions, just that there were billions involved.
Is Al Gore a billionaire and are his billions down to his comments on global warming?
Yes, most of Al Gore’s present worth comes from gaming the climate scam.
Those who build and operate wind and solar farms.
Those who’s paycheck depends on the continuation of the scam.
Do wind and solar make more money for their investors than fossil fuel companies?
Is he on record saying that it was the wind farms that made him a billionaire? Did he start from scratch with just wind farms?
Fossil fuel provides about 100 times more energy to the world than do wind and solar combined.
Wind and solar make money from government subsidies and mandates.
Fossil fuels contribute huge amounts of money to government coffers.
Only when they get subsidies.
Warren Buffet. He’s on record saying wind farm subsidies are why he built them.
Hansen
So Jim Hansen is a billionaire?
You really are a one trick pony, aren’t you.
Most of Hansen’s present worth comes from pushing the climate scam.
Same with Mickey Mann.
Same with Greta.
Big oil makes money off unreliables…shocking
we as a human race need hydrocarbons. We need them to prosper.
Governments make money from fossil fuels..
… then waste it on unreliable non-supply.
One GIVES, the other takes. !!
Leftists of course, prefer the one that takes… it is in their nature.
Yeah. We also need water. Doesn’t mean you can’t drown in it.
Nobody is drowning in oil either. In fact oil and the CO2 generated by burning it, is hugely beneficial to mankind.
Need near 1,000,000ppm H2O to drown
You have used a monumentally stupid analogy.
They also say:
Again…. show us where these measurements were made in say 1880 and 1920.
The whole paper is using bogus and made-up data
Meaningless gobbledygook.
And yet another paper promoted by WUWT turns out to be meaningless gobbledygook.
In your ignorant opinion, any paper published here is being “promoted”?
The idea that it is being put forward in order to be critiqued and or ridiculed just isn’t a possibility?
Normally when they want you to ridicule a paper they put the word “claim” in the headline.
In this case we have a 15-year-old paper that is seen as supporting a common claim that warming is caused by the sun. Certainly Watts has a disclaimer at the front and suggests it might not pan out. But it’s difficult to see why it would be pulled out of retirement just so it can be ridiculed.
AW obviously knew it was bogus…
… the only people deserving of ridicule are those that don’t realise that fact.
Most warming on Earth is from the sun….. that means that the paper is wrong.
… there is no evidence human CO2 has caused any warming at all.
thermometers measuring jet exhaust at airports.
On the ocean surfaces?
Ocean surface measurements do not exist for much of the ocean for the period they are looking at.
The claim has always been that CO2 accounts for almost all of the warming.
BTW, for most of that 150 years, the warming couldn’t have been caused by CO2, because CO2 wasn’t rising fast enough. The big increase in CO2 didn’t start until the 1950’s, 70 years ago.
Only over the long term. It has always been affected by short-term natural warming and cooling periods.
Interestingly, the rate of global warming from 1850 to 1950 was +0.03C per decade, which is to say ‘zero’, effectively.
From 1950-2024 it was +0.16C per decade. So your argument may not be helping you as much as you imagine.
That’s only after the data has been adjusted.
It was you who brought the data up.
Which data were you referring to if not global average temps?
I brought up the unadjusted numbers.
Not the carefully cooked numbers you were using.
“Interestingly, the rate of global warming from 1850 to 1950 was +0.03C per decade, which is to say ‘zero’, effectively.”
Only if you are looking at a bogus, bastardized Hockey Stick global chart.
Without this Big Lie, you and the other Climate Alarmists would have nothing.
You should know better. You have all the information you need to understand that the bogus Hockey Stick chart is a figment of the imagination. But you pretend it is legitimate. I know why, because it’s all you have, but don’t you feel a little guilty trying to use the bogus Hockey Stick to pull the wool over other’s eyes?
All the global surface data sets show little warming from their start dates to the mid 1950s.
In fact, there is a slight negative trend in all of them that start in 1850 right up to the mid-1930s. (So much for ‘gradual warming from the LIA’.)
Re the hockey stick nonsense: I fear you have drunk deeply from the conspiracy well. That paper has been replicated numerous times, independently.
You are in denial of reality, Tom.
It’s funny, but also tinged with pathos.
The fact that you think the “global surface data sets” represent “reality” is quite hilarious.
They are highly contaminated by urban heat effects, airport heat, bad sites and data mal-manipulation.
Raw data from reasonable sites around the globe shows the 1930s and ’40s as similar to or warmer than the period around 2000.
It’s all garbage. The hokey stick is nonsense, an artifact of programming.
The actual written, historic temperature records refute the trendline of the bogus Hockey Stick chart. The written record looks nothing like the bastardized record.
Anybody with half a brain ought to smell something fishy by looking at the written record and comparing it to the computer-generated record.
I’m thinking all our house Climate Alarmists have seen these discrepancies between the written, historic record and the bogus Hockey Stick trend line, yet they cling to claiming the Hockey Stick chart is legitimate.
We know why: Because the Hockey Stick is the only “evidence” they have of a connection between temperatures and CO2 concentrations. But it’s all contrived. And our Climate Alarmists pretend this is not so.
So our Climate Alarmists are deliberately denying reality in order to continue their attack on CO2 and Western Civilization using a bogus, bastardized Hockey Stick chart “hotter and hotter and hotter” temperature trendline that does not exist in reality.
There’s a lot of human psychology involved in Human-caused Climate Change..
Here’s reality. These original, written, regional temperature records look nothing like the bogus Hockey Stick chart. These charts show there were several periods of warming and cooling since the end of the Little Ice Age. They show the temperatures were just as warm in the recent, recorded past, as they are today. They show there is no unprecedented warming today.
And, since, these original, regional charts are ALL the Hockey Stick creators had to work with when creating the bogus Hockey Stick, Climate Alarmists should explain to us how a “hotter and hotter and hotter” Hockey Stick temperature profile can be derived from regional data that has no such profile?
Would love to hear your answer to that.
And to reiterate, the creators of the bogus Hockey Stick chart had essentially no sea surface temperature data to work with. Phil Jones, the originator of this post-Little Ice Age Hockey Stick temperature trendline Lie, said most of the sea surface temperatures were “made up”. And that makes sense, since there was no global sea surface data.
Here’s the charts that debunk the Bogus Hockey Stick chart temperature trend line. Tell me why this is not a true statement.
https://notrickszone.com/600-non-warming-graphs-1/
Nobody has the measurements to know what the fake, nonsensical, meaningless “global” temperature was even for most of the 20th century.
Certainly not to calculate the rate of warming.
Raw surface data from around the globe shows the 1930s and ’40s as being a similar temperature to the early 2000s.
And even Phil Jones at Hadley stated that most southern ocean data was “just made up”
Did you know that the most reliable atmospheric temperature set, UAH since 1979, shows no CO2 caused warming whatsoever !!
Yet another statement without meaning.
Over the longer term, the current warming is merely another blip in a 5 to 6 thousand year cooling trend.
Fascinating how you declare that only the time period that supports your religious beliefs are to be used.
Obvious candidates, but in need of supporting scientific data (proxies for each difficult to come by, and recent quantitative data not applicable for hindcasting back 150 years):
1) Long term variations in atmospheric water vapor content, TPW, affecting GHE, and/or
2) Long term variations in global areal coverage by clouds causing variations in Earth’s albedo.
What evidence have you for claims 1 & 2, please?
Evidence for your CO2 warming fantasy, please!
TFN, please look up—more importantly, understand—the difference in meaning between the word “candidates” and the word “claims”.
That is all.
We have more evidence for these, than we do for the belief that CO2 is the cause.
A few things come immediately to mind:
In short, the study finds a small solar influence on SSTs, but its methods and data raise doubts. It doesn’t challenge mainstream climate science, though it might be misused to suggest otherwise.
In the early days, a sailor would take a canvas bucket and from the bow of the ship toss it into the sea. He would then walk towards the stern, fast enough that the bucket would be able to stay in one place. When he approached the stern, he would haul the bucket back up and place a thermometer in it. (This procedure allowed the bucket time to sink, so that you were sampling water below the surface.) The thermometer was left in the bucket for (I believe) 5 minutes, so that the thermometer could stabilize.
Problems with this method:
Not all ships are the same size. For a longer ship, there was more room for the sailor to walk, so the bucket had more time to sink.
How diligent was the sailor in walking at exactly the same speed as the ship was sailing?
As ships got faster, was it possible to walk fast enough to keep the bucket stationary?
Did anyone take a measurement of the ship’s position at the time the sample was taken, or did they just estimate based on the daily reading?
Did changing weather conditions impact the sampling?
Canvas buckets are not watertight; they weep. As a result, the outer surface of the bucket was always wet. How much did evaporation cool the bucket while the measurement was being made?
How well trained was the sailor in how to accurately read a glass thermometer? (Read up on parallax.)
How accurate were the thermometers, and were they regularly calibrated? How well cared for were they?
If the weather was bad, did the sailor actually take the measurement, or just say he did, and give you the same number as yesterday?
Over time, canvas buckets were replaced by metal ones.
How fast do metal buckets sink compared to canvas ones?
Metal buckets don’t weep the way canvas ones do. Did this impact the temperature being taken?
There are no records kept of which readings were taken with canvas vs. metal buckets.
As ships went from sail to steam and then to diesel, the bucket method was replaced by taking readings from the water being drawn in to cool the engines.
Problems with this method:
Different ships have different configurations and different drafts, so the depth at which the water is sampled is not consistent between ships.
The draft of a ship changed as the cargo changed. This would impact the depth at which the water was sampled.
In the N. Atlantic, ships want to catch a ride on the Gulf Stream, since this speeds the trip while reducing fuel usage. In the old days, sailors would use their knowledge to try and estimate where the Gulf Stream was. Once satellites came along, daily images could tell you exactly where it was, so, presumably, more ships would be successful in catching a ride on the Gulf Stream, which, coincidentally, was also warmer than the rest of the N. Atlantic. This would mean that on average, the readings being taken would be warmer.
There have been other changes in shipping patterns over the decades, as various economies grew or shrank and as the types of goods being shipped changed.
Surface temperature measurements are a complete mess, and they get worse, the further back you go.
Wonder what accounts for the other >3/4 of the observed temperature trend?
From the graphs presented, what is being analyzed is the short term response of SST to solar energy. This response will come from solar energy that is absorbed near the ocean surface, primarily in the tropics.
However, much of the solar energy that is incident upon the oceans is absorbed deeper into the oceans and can take many years or decades to find its way to the surface. To see the long term effects of the solar cycles on SST, one needs to use long term moving averages (10 years or more), preferably stripped of short term variations.
To effectively analyze the relationship between SSTs and solar energy input, one must look at the data over many time-frames. Much of the missing 3/4 is likely related to solar energy that is absorbed by the oceans below the surface layers, and only shows up in the surface SSTs after many years and decades.
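To illustrate the smoothing step suggested here, the sketch below applies an 11-yr centred moving average to a synthetic series; the 0.004 °C/yr trend and 0.05 °C cycle amplitude are invented for the demo, not taken from any SST dataset:

```python
import math

# Synthetic "SST" anomaly: slow warming trend + an 11-yr solar-cycle wiggle
years = list(range(1854, 2008))
sst = [0.004 * (y - 1854)
       + 0.05 * math.sin(2 * math.pi * (y - 1854) / 11.0)
       for y in years]

def moving_average(series, window):
    """Centred moving average; the ends (window//2 points) are dropped."""
    half = window // 2
    return [sum(series[i - half:i + half + 1]) / window
            for i in range(half, len(series) - half)]

# An 11-yr window averages the solar cycle to ~zero, leaving the trend;
# subtracting the smoothed series back out isolates the cycle-scale residual.
smooth = moving_average(sst, 11)
residual = [s - m for s, m in zip(sst[5:-5], smooth)]
```

With real SST data the separation is of course noisier, but the principle is the same: a window matched to the cycle length strips out the cycle and exposes the multidecadal component.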
“Much of the missing 3/4 is likely related to solar energy that is absorbed by the oceans below the surface layers, and only shows up in the surface SSTs after many years and decades.”
Salient points about ocean solar energy accumulation.
Another somewhat unseen issue is that the ocean warming effect of absorbed solar radiation is literally amplified by timely albedo reductions tied to long- and short-term TSI influences on ocean warming. The post-2000 cloud reduction can be interpreted as a positive feedback response to reduced solar activity, until lately, when the ocean responded to this solar cycle by producing more clouds, creating a negative feedback that limits further temperature growth as TSI falls too.
The SB equation says the 2023-24 spike doesn’t happen without the TSI jump.
Solar forcing.
The question that springs to mind for me is how that data relates to shipping routes prior to the era of satellite data and ARGO, and how accurate the measurements of sea surface temperature were going back 150 years?
Fair questions.
The rationale for the HadSST series is set out here.
HadSST4 is published monthly with all the uncertainty estimates (methods for estimating these are described in the linked paper).
The Met Office?
Had… is what you have been.
Which set do you prefer?
They all look pretty similar.
“They all look pretty similar.”
Yeah, they are all lying about the sea surface temperatures from 1850 to World War II.
They are just guessing at the sea surface temperatures for this time period, yet they and you pretend they are accurate. They are nonexistent.
The original, written, regional temperature records don’t correlate with the bogus Hockey Stick chart.
The difference between a regional chart and the bogus Hockey Stick chart? Answer: The regional land temperatures don’t include sea surface temperatures. And their temperature trendline looks nothing like the scary bogus “hotter and hotter and hotter” Hockey Stick chart.
Take away the bogus sea surface temperatures and we are left with a temperature chart that is cyclical and warms for a few decades and then cools for a few decades and the current warming is no warmer than previous warming periods after the end of the Little Ice Age.
Yes, Climate Alarmists have made a lot of unscientific noise using bogus sea surface temperatures, and are still doing so.
The original, written, regional temperature records from around the world debunk the temperature trendline of the bogus, bastardized Hockey Stick chart. They look nothing like a Hockey Stick chart.
The evidence for this is what? Thanks.
The original data. The ones that show the existence of both the medieval warm period and little ice age.
And before you whine about how we don’t trust these numbers: what we don’t trust are the accuracy claims that are being used.
We don’t trust the claimed world wide averages using these numbers, and especially the accuracy claims for those.
The evidence for this is the original, written, regional temperature data.
You look at the original data and compare it to the bogus Hockey Stick data, and they look nothing alike. One of them is wrong. Is the original, written data wrong, or is the bogus Hockey Stick data wrong?
The original data is cyclical. After the Little Ice Age ended around 1850, the temperatures warmed into the 1880s-90s, then cooled for a few decades, reaching a low point in the 1910s. The temperatures warmed from there through the 1930s, reaching highs similar to the 1880s, and then after the 1930s cooled again down through the 1970s, with the cooling equaling that of the 1910s, a cooling that caused some climate scientists to claim the Earth was headed into another Ice Age. The temperature then started warming in the 1980s up to today, with current temperatures no warmer than they were in the 1880s or the 1930s, which shows us there is no unprecedented warmth today, even though CO2 has increased in the atmosphere.
The Instrument-era portion of the bogus Hockey Stick chart shows the temperatures after the Little Ice Age slowly warming decade after decade, in conjunction with CO2 increases, and shows today as being the hottest time in human history.
One of these two temperature trendlines is bogus.
It shouldn’t take a genius to figure out which one.
The original, written temperature data looks nothing like the computer-generated Hockey Stick chart. And the original, written temperature data is the only data the Hockey Stick chart creators had to work with. The only data.
Where did this bogus Hockey Stick chart “hotter and hotter and hotter” temperature trendline come from? How do you get a “hotter and hotter and hotter” temperature trendline out of data that doesn’t have that trendline?
Answer: You lie and cheat and make things up to suit your personal opinion and promote a Big CO2 Lie.
Here are 600 original, written regional temperature charts from all over the world. None of them have a “hotter and hotter and hotter” temperature trendline.
https://notrickszone.com/600-non-warming-graphs-1/
Phil Jones, of the Hadley Centre, admitted that most data for the southern oceans was “mostly made up”
Is that the Hadley data you want to use.
FAKE DATA. ! So typical.
Show us where ocean data for 1880 and 1920 was measured…. mostly IT WASN’T !!
They all use Phil Jones’ bogus data. That’s why they all have the same Hockey Stick trendline.
And none of them can explain why their trendline looks so different from the original, regional data.
The original, written, regional surface temperature data debunks ALL the Hockey Stick reconstructions.
You can’t legitimately get a “hotter and hotter and hotter” Hockey Stick chart trendline out of data that has no such trendline, and none of the written, regional data has such a trendline.
If you are looking at a Hockey Stick chart, you are looking at a BIG LIE that does not represent reality.
“We’ll see if this one pans out -Anthony”
This is a 15 year old paper. The data goes up to 2007. It would be interesting to see how this method pans out over the last 15 years.
So, you’re passing up a golden opportunity to say “I told you so”?
I suspect he doesn’t know, but I do believe it would be good to repeat this methodology with later data. It becomes a testable hypothesis.
Over the last 15 years we know that incoming solar radiation has had a major effect.
Much more absorbed solar radiation, leading to two major El Nino events..
UAH data shows the only warming this century has come from those two events…
.. with zero warming from 2001-2015, and cooling from 2017 – 2023.4
There is no evidence of any CO2 caused warming in the whole of the 45 years of the UAH data.
There will never be a “perfect theory of climate”, because there are too many variables working on different timescales. Even the sun’s effects on climate are multivariable. This gives the solar haters plenty of opportunity to pounce, thinking they have debunked the idea that the sun has a powerful influence on our climate. Nope. What caused the Roman or Medieval warm periods, or the LIA for example? It certainly wasn’t “carbon”, or volcanoes, or witches.
Tell me who did and how the sea surfaces were determined globally in 1857. Ben Franklin was not alive any more.
Then, the data around the end of WWII were excluded from the analysis – they did not know what to do with the fairy tale of switching temperature recording from British to American vessels.
Lastly, how can anybody draw conclusions from readings within an error bandwidth of 1.6 degrees C? I made an error analysis, which was published here at Watts Up.
A muck of an analysis.
Beginning in the mid-1800s, sailors on some ships crossing the world’s oceans used buckets dropped overboard to retrieve surface water, then used thermometers to record the water temperature in those buckets.
An example of the detailed record keeping associated with such water bucket measurements obtained by the sailing ship “Bremen” during a crossing of the Atlantic ocean in April 1856 is given at https://www.ncei.noaa.gov/news/planet-postcard-bucket-full-data . Bucket sampling of surface waters from ocean-going ships continued to mid-20th century.
As to whether the sampling could be considered “global”, I sayeth not because I have no idea of the extent of ship routes and such associated water bucket sampling as performed on ships sailing around the world at that time.
BTW, Benjamin Franklin was not the only scientist in the world in the 18th century, to say nothing of the number of scientists alive in the mid-19th century, at the end of the “Age of Sail”.
From my 2018 article on Watts Up With That:
Folland et al, in the paper ‘Correction of instrumental biases in historical sea surface temperature data’, Q. J. R. Meteorol. Soc. 121 (1995), attempted to quantify the bucket-method bias. The paper elaborates on the historic temperature records and variations in bucket types. Further, it is significant that no marine entity ever followed any kind of protocol in collecting such data. The authors undertake a very detailed heat transfer analysis and compare the results to some actual wind tunnel tests (uninsulated buckets cooling by 0.41 to 0.46 degree C). The data crunching included many global variables, as well as some engine-inlet temperature readings. Folland’s corrections are +0.58 to +0.67 degree C for uninsulated buckets and +0.1 to +0.15 degrees for wooden ones.
Further, Folland et al (1995) state: “The resulting globally and seasonally averaged sea surface temperature corrections increase from 0.11 deg. C in 1856 to 0.42 deg. C by 1940.”
It is unclear why the 19th century corrections would be substantially smaller than those in the 20th century. Yet, that could be a conclusion from the earlier predominance of the use of wooden buckets (being effectively insulating). It is also puzzling, how these numbers correlate to Thompson’s generalizing statement that recent SSTs should be corrected via bias error “by as much as ~0.1 deg. C”? What about including the 0.42 degree number?
In considering a system error – see 3b. – the variable factors of predominant magnitude are diurnal, seasonal, sunshine and air cooling, spread of water vs. air temperature, plus a fixed error of thermometer accuracy of +/- 0.5 degree C, at best. Significantly, bucket filling happens no deeper than 0.5 m below water surface, hence, this water layer varies greatly in diurnal temperature.
Tabata, 1978, says about measurements on a Canadian research ship: “bucket SSTs were found to be biased about 0.1°C warm, engine-intake SST was an order of magnitude more scattered than the other methods and biased 0.3°C warm”. So here both methods showed a warm bias, i.e. the correction factors would need to be negative, even for the bucket method; this is the opposite of the Folland numbers.
Where to begin in assigning values to the random factors? It seems near impossible to simulate a valid averaging scenario. For illustration sake let us make an error calculation re. a specific temperature of the water surface with an uninsulated bucket.
Air cooling 0.50 degrees (random)
Deckside transfer 0.05 degrees (random)
Thermometer accuracy 1.0 degrees (fixed)
Read-out and parallax 0.2 degrees (random)
Error e = 1.0 + (0.5² + 0.05² + 0.2²)^½ = 1.54 degrees, or 51 times the desired accuracy of 0.03 degrees (see also section 2.0).
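The arithmetic here (a fixed error added to the root-sum-square of the random components) is easy to check with a few lines of Python. The component values below are the illustrative ones assumed in the list above, not measured quantities:

```python
import math

# Illustrative error components from the example above (degrees C)
fixed = 1.0                 # thermometer accuracy (fixed)
randoms = [0.5, 0.05, 0.2]  # air cooling, deckside transfer, read-out/parallax

# Fixed error plus root-sum-square of the random components
e = fixed + math.sqrt(sum(r ** 2 for r in randoms))
print(round(e, 2))  # 1.54
```

Dividing 1.54 by the stated target of 0.03 degrees gives roughly 51, matching the figure quoted.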
Hmmm . . . your original ask was:
“Tell me who did and how the sea surfaces were determined globally in 1857.”
Having given you a straightforward answer to that (one that it now appears you already knew, so it’s curious as to why you asked . . . well, maybe not so curious after all), I see that you now want to deflect the subject over to the accuracy of SST data obtained by bucket sampling dating back to the mid-1800’s.
Continue having fun with that . . . I’ll not be joining you.
For your consideration, Google’s AI has this to say:
“By the 1770s: French and English instrument makers were building mercury thermometers that were accurate to 1/10th of a degree. This level of precision made them invaluable tools in scientific research, medicine, and industrial processes, like brewing.” (my bold emphasis added). Whether or not thermometers of such accuracy were used for bucket temperature measurements, who knows for sure?
And your error calculation “illustration” neglected to state if the computed error was equally centered (+/-) or biased high or biased low, and on what scientific bases you obtained each of the stated error contributions.
As for your stated “desired accuracy of 0.03 degrees” . . . pffthptttt! . . . whether it’s meant to be in degrees-C or in degrees-F or in degrees-AmeriTemp.
I can assure you those were lab grade instruments. Do you really think lab grade instruments were carried on ships to measure bucket temps?
Since I clearly posted:
“Whether or not thermometers of such accuracy were used for bucket temperature measurements, who knows for sure?”,
it may surprise you to know that I’m not surprised that you chose to ask your question back at me.
But let me ask YOU this, in return:
“Do you really think lab grade thermometers were used in industrial processes, such as brewing, as stated by Google’s AI?”
Yes, I can certainly see how they would have been used in brewing,
as half-degree accuracy could be important to their work, and a brewery floor is a pretty stable environment (maybe hot, but rarely violent and rolling).
That depends on how much of the product they had been sampling 🙂
Hmmm . . . I wasn’t aware that calibrated mercury-in-glass thermometers were particularly sensitive to any “rocking and rolling”, that is “unstable environments.”
Thanks for the update. /sarc
Well, you’ve apparently never shaken a mercury thermometer and gotten a bubble in the column, but the thing they are most sensitive to is breakage in rough environments.
Next time you try /sarc….be good at it.
They are if the product they are producing requires a measurement with lab grade resolution. ISO 17025 has very strict guidelines about what is required. Many, many repair facilities use lab grade equipment and routinely send it out for calibration. Brewing may not require lab grade thermometers, but I’ll bet they don’t go to the nearest Wally World and buy the cheapest thermometer available either. The issue arises when temperatures are quoted to a resolution far beyond that of run-of-the-mill thermometers. Reading a bucket thermometer to 1/10th of a degree would require a lab grade device, not one that is marked, at best, to the half degree.
ISO 17025 was in effect back in the late 19th century?
You missed the point entirely.
Funny, you criticize someone else for throwing out hypotheticals, then you do just that a post or two later.
Hypocrite much?
Such instruments were both expensive and delicate. They might have been present on ships designated to do scientific research. They most assuredly were not present on standard merchantmen.
Calibration accuracy is not the issue. I can find you innumerable sets of hand-held calipers that are ‘accurate’ or calibrated to .001″, and some will report to .0005″, but I will not accept one for measurements of parts with tolerances under .005″. There is this thing called Repeatability & Reproducibility. What is important is the measurement ‘system’, which is a combination of how the gauge is used and what it is measuring. Our 6-sigma processes would actually allow a GR&R up to 20%, but those were exceptionally controlled machining processes, not hauling a bucket slopping its contents all over, up the side of your ship.
Edit: and I suspect, but can’t prove, that the thermometer accuracy quoted is probably its gradations (in this case, whole degrees). What that means is the reader is required to estimate the reading when it falls between two marks.
Excellently said!
+100
According to some here, if you take that caliper and measure the part 100 times, you can get an accuracy down in the nanometers.
Better yet, you don’t even have to measure the same part. So long as they all have the same model number, you can still get improved accuracy.
Nobody says that. If your calipers have a precision of 0.001″ (say 0.03 mm in proper units), then at best measuring the same thing 100 times would give you a precision of 0.003 mm, or 3 micrometers. And that’s assuming all the variance is caused by random errors.
Really? Does that imply if I measure the same thing 10,000 times (not just 100) then I can at best get a precision of .0003mm, or 0.3 microns?
Who knew? /sarc
That’s why I said “at best”.
Not even “at best”. You cannot increase the resolution of a measuring device by taking multiple measurements. The only way to increase the resolution would be to have a crystal ball.
The at best is assuming random errors with no bias and no systematic errors. If there is a resolution that is lower than the random errors then you cannot increase the precision beyond that resolution.
If on the other hand there is enough variation in the measurements to “beat” the resolution limit, you definitely can increase the resolution, as has been demonstrated to you numerous times, and as even your school books make clear.
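The effect being claimed here can be shown with a toy simulation (every number below is hypothetical): when the random measurement noise is larger than the instrument’s resolution, the mean of many quantized readings can land between the gradations.

```python
import random

random.seed(1)
true_value = 20.337  # hypothetical true value, between gradations
resolution = 0.1     # instrument indicates only to the nearest 0.1
noise_sd = 0.25      # random noise, larger than the resolution

# Each reading: true value + noise, quantized to the instrument resolution
readings = [round(random.gauss(true_value, noise_sd) / resolution) * resolution
            for _ in range(10_000)]
estimate = sum(readings) / len(readings)
print(round(estimate, 3))  # close to 20.337 despite the 0.1 resolution
```

Whether this idealization carries over to field instruments with bias and drift is, of course, exactly what the two sides here dispute.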
Assuming random errors is *NOT* adequate, not even if bias and systematic error are eliminated. (btw, it’s not systematic error, it is systematic uncertainty – error is not uncertainty. you can’t get *any* of this right) They have to have a Gaussian distribution as well! You are using the same meme you always do: “all measurement error is random, GAUSSIAN, and cancels”. You deny that you always assume that meme but it just comes shining through in everything you do.
Precision has nothing to do with any of this. Precision is getting the same reading for the measured property every time.
Nor can you “beat” the resolution limit unless you have a crystal ball!
You simply can’t know what you can’t know. If the resolution in the measuring device only gives you knowledge of the hundredths digit you simply don’t know what the thousandths digit is for any of the measurements. No amount of “averaging” can discern what you don’t know.
“as has been demonstrated to you numerous times”
You have *never* demonstrated how you can discern what you don’t know. Please post a picture of your crystal ball.
The SEM is *NOT* a metric of accuracy. It is a metric of sampling error. You can make that sampling error very small but that does NOT* mean your average value is accurate at all.
“it’s not systematic error, it is systematic uncertainty”
Take it up with the GUM.
“They have to have a Gaussian distribution as well!”
No they do not – and it doesn’t matter how many times you repeat this article of faith – it’s still wrong.
“Precision is getting the same reading for the measured property every time.”
“Nor can you “beat” the resolution limit unless you have a crystal ball!”
Or use statistics, just as Taylor shows.
“You have *never* demonstrated how you can discern what you don’t know.”
I have, but you have a memory problem. E.g. you can take a set of values reported to 2 decimal places. Round them all to 1 decimal place, and see how closely the average of the lower resolution values matches the average of the higher resolution values. With enough values you will find they match to at least the 3rd decimal place. Either it’s magic or you know the average to a better resolution than any of the individual measurements.
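The check described in this comment is easy to reproduce. Since no specific dataset is named, the sketch below uses synthetic values (uniform random numbers reported to 2 decimal places):

```python
import random

random.seed(0)
# Synthetic "measurements" reported to 2 decimal places
values = [round(random.uniform(0.0, 30.0), 2) for _ in range(100_000)]
rounded = [round(v, 1) for v in values]  # degrade to 1 decimal place

mean_full = sum(values) / len(values)
mean_rounded = sum(rounded) / len(rounded)
print(abs(mean_full - mean_rounded))  # typically well under 0.001
```

The two averages agree far beyond the 1-decimal resolution of the rounded values, which is the point being argued; the rounding errors here are random and roughly symmetric by construction.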
“Take it up with the GUM.”
You are cherry picking again.
GUM B.2.22
“systematic error
mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions minus a true value of the measurand
NOTE 1 Systematic error is equal to error minus random error.
NOTE 2 Like true value, systematic error and its causes cannot be completely known.
NOTE 3 For a measuring instrument, see “bias” (VIM:1993, definition 5.25).” (bolding mine, tpg)
“No they do not – and it doesn’t matter how many times you repeat this article of faith – it’s still wrong.”
I use the term “Gaussian” to represent a symmetric distribution. Non-symmetric distributions typically do *not* have equal, cancelling values on either side of the average. A fact that you ALWAYS ignore.
“closeness of agreement between indications”
What do you think *I* said? Precision is a metric for getting the same indication each time!
“Or use statistics, just as Taylor shows.”
You have STILL not studied Taylor, only cherry picked from him. Each and every example in his book where he uses the SEM carries the assumption of random, independent measurements of the SAME MEASURAND with the same instrument under the same conditions. REPEATABILITY.
You continually get on WUWT and try to convince everyone that the real world of metrology and temperature measurements fit this definition and the SEM is the measurement uncertainty. You just hijack every discussion of real world temperature measurement and try to fit it into your blackboard statistical world.
The SEM of each and every temperature data set in use today is ONLY a metric for sampling error since NONE of the data sets represent measurements of the same measurand under repeatability conditions.
Stop cherry picking from stuff you simply do not understand and stop trying to hijack threads about real world measurement by leading discussions off into statistical world where your meme of “all measurement uncertainty is random, Gaussian, and cancels” applies. It is tiresome, boring, and a total waste of bandwidth for everyone that has to read it.
“You are cherry picking again.”
This is your stock response whenever you’re caught out. In this case insisting that I was wrong to talk about systematic error and claiming the correct term was systematic uncertainty. The fact that the GUM specifically says the term systematic uncertainty should not be used, is blamed on cherry-picking.
“I use the term “Gaussian” to represent a symmetric distribution.”
And you are wrong to do so, and you have no excuse because I’ve corrected you on this before. If you are going to accuse me of claiming all distributions are Gaussian, it’s no defense to then say that you didn’t actually mean Gaussian but something else. This Humpty Dumpty logic you keep employing is just your excuse for lying.
Regardless, you are still wrong. Distributions do not have to be symmetric for the SEM equation or the CLT to apply.
As I’ve said before, you are confusing symmetry with the mean of the distribution. If the distribution is of random errors, then if the mean is zero it does not matter if the distribution is symmetric or not. If the mean is not zero, then you have a systematic error – but this can happen regardless of the symmetry of the distribution. A Gaussian distribution with mean 1, will be symmetric but have a systematic error. And it’s possible for a non-symmetric distribution to have a mean of zero.
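The claim that a skewed but zero-mean error distribution still averages out can be illustrated numerically. The shifted exponential below is a hypothetical example chosen only because it is strongly asymmetric with mean exactly zero:

```python
import random

random.seed(2)
# Exponential(1) shifted by -1: heavily right-skewed, but mean exactly zero
errors = [random.expovariate(1.0) - 1.0 for _ in range(100_000)]
mean_error = sum(errors) / len(errors)
print(round(mean_error, 3))  # near 0: no systematic offset despite the asymmetry
```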
“What do you think *I* said? Precision is a metric for getting the same indication each time!”
That’s what I thought you said, and it’s wrong. It’s a metric for how close repeated measurements are, not for getting the same value each time.
“REPEATABILITY.”
This whole discussion has been about using the same calipers on the same object under conditions of repeatability.
But regardless, you are just not understanding the SEM. It’s about taking a random sample of different things. Just because you can use it to take a sample of measurements of the same thing to get a more precise result, does not make it invalid for other purposes.
And again, that has nothing to do with the point that Taylor specifically tells you that averaging can increase the resolution. You would know this if you had actually done all the exercises as you claim.
“Each and every example in his book where he uses the SEM carries the assumption of random, independent measurements of the SAME MEASURAND with the same instrument under the same conditions.”
Not true. Look at Taylor 4.5.
“This is your stock response whenever you’re caught out. In this case insisting that I was wrong to talk about systematic error and claiming the correct term was systematic uncertainty. The fact that the GUM specifically says the term systematic uncertainty should not be used, is blamed on cherry-picking.”
Can you read at all? GUM:
“NOTE 2 Like true value, systematic error and its causes cannot be completely known.
NOTE 3 For a measuring instrument, see “bias”
You can’t know the systematic error. For a measuring instrument you should talk about bias, and bias in a field measuring device can’t be known either – it has to be handled using UNCERTAINTY!
You ARE wrong to talk about systematic error – PERIOD, EXCLAMATION POINT. It only exists in your head and on your statistics blackboard!
“If you are going to accuse me of claiming all distributions are Gaussian, it’s no defense to then say that you didn’t actually mean Gaussian but something else. This Humpty Dumpty logic you keep employing is just your excuse for lying.”
Pure malarky! I didn’t say I *meant* something else. I *said* I use the term Gaussian to represent symmetric distributions. Your lack of reading comprehension skills is just pathetic.
“Regardless, you are still wrong. Distributions do not have to be symmetric for the SEM equation or the CLT to apply.”
And now you are using the red herring argumentative fallacy. 1. The CLT does *not* apply when you have one measurement. 2. Neither does a standard deviation. 3. And resolution has nothing to do with either.
The CLT *does* apply to non-symmetric distributions. But single measurements do not form a distribution! Therefore the CLT is meaningless in such a case.
One measurement cannot form a standard deviation either!
And we’ve been down the path of the SEM applying when you have one sample. When you have only one sample you *have* to assume a population standard deviation or use the standard deviation of the sample. If you use the standard deviation of the single sample then you *have* to justify your belief that it perfectly represents the population – a difficult thing to do.
You keep trying to shoehorn reality and measurements of reality into your blackboard statistical world and just keep failing.
Taylor specifically states that you can’t increase resolution because of significant digit rules and measurement uncertainty. I’ve given you the reference multiple times – go look at his Section 2.2.
And here we see the classic Gormanesque tactics. He can never be wrong about anything, so when he makes a claim which is demonstrably wrong, he quickly changes the subject.
In this case he claims that the correct term is “systematic uncertainty” and not “systematic error”, which is proven false by the GUM defining and using the term systematic error and saying systematic uncertainty should not be used. So rather than saying, “oops, maybe I made a mistake in my comment. Thanks for the heads up, I’ll try to be less dogmatic in future,” he instead goes on a rant about how it’s impossible to know the exact value of a systematic error, which has nothing to do with the mistake he originally made. This leads to lots of all-caps shouting, and Tim now claiming
“You ARE wrong to talk about systematic error – PERIOD, EXCLAMATION POINT. It only exists in your head and on your statistics blackboard!”
This despite all the times he claimed I assume all errors are random and I was ignoring systematic errors.
Again, if he actually read the GUM he would see several occasions where they talk about systematic errors. FULL STOP, BANG. See example H1.
“I use the term Gaussian to represent symmetric distributions. ”
You cannot accuse someone of thinking that all distributions are Gaussian and then claim you only meant Gaussian as representative of a particular type of distribution. That would be as if I repeatedly accused you of thinking all animals were dogs, then turned round and said I’m using “dog” to mean all four-legged animals. It would still be a lie.
If you meant “symmetric distribution” why not say it?
“And now you are using the red herring argumentative fallacy.”
Every accusation is a confession. As Tim will now demonstrate.
” 1. The CLT does *not* apply when you have one measurement. 2. Neither does a standard deviation. 3. And resolution has nothing to do with either. ”
Three statements that have zero to do with my point that the SEM and CLT apply to non-symmetric distributions. We were talking about taking multiple measurements, but he’s now talking about a single measurement. And he’s still wrong. The SEM of a single measurement is just the population standard deviation.
“Taylor specifically states that you can’t increase resolution because of significant digit rules and measurement uncertainty. I’ve given you the reference multiple times – go look at his Section 2.2. ”
If 2.2 said anything of the sort then you would be able to provide the exact quote. That section says nothing about the resolution of an average, and if it did say what you claim it says, then Taylor is contradicting himself later, when he specifically says that the number of sig figs in an average can be greater than those in the individual measurements. E.g. exercise 4.17 (b).
I have given the quote to you MULTIPLE times. You remain willfully ignorant.
The very second sentence in 2.2 says: “First, because the quantity δx is an estimate of an uncertainty, obviously it should not be stated with too much precision.”
Taylor goes on:
——————————————-
“If we measure the acceleration of gravity g, it would be absurd to state a result like
(measured g) = 9.82 +/- .02385 m/s^2.
The uncertainty in the measurement cannot conceivably be known to four significant figures.”
——————————————–
He then gives this example:
——————————————-
Once the uncertainty in a measurement has been estimated, the significant figures in the measured value must be considered. A statement such as
measured speed = 6050.78 +/- 30m/s
is obviously ridiculous. The uncertainty of 30 means that the digit 5 might really be as small as 2 or as large as 8. Clearly the trailing digits 1, 7, and 8 have no significance at all and should be rounded.
—————————————–
“That section says nothing about the resolution of an average”
What in Pete’s name do you think he is saying when he says the trailing digits of 1, 7, and 8 have no significance? The resolution of the measuring device is modulated by the measurement uncertainty and that includes the average of the data!
“E.g. exercise 4.17 (b)”
Are you being deliberately stupid? Chapter 4 is about multiple measurements of the same measurand! With *NO*, that means 0 (zero) systematic uncertainty!
Please tell everyone why you think that has anything to do with the measurement of a global temperature where you have single measurements of different things by different devices which have unknown systematic uncertainty!!!!!!
“The very second sentence in 2.2 says: ‘First, because the quantity δx is an estimate of an uncertainty, obviously it should not be stated with too much precision.’”
“If we measure the acceleration of gravity g, it would be absurd to state a result like
(measured g) = 9.82 +/- .02385 m/s^2.
The uncertainty in the measurement cannot conceivably be known to four significant figures.”
“Once the uncertainty in a measurement has been estimated, the significant figures in the measured value must be considered. A statement such as
measured speed = 6050.78 +/- 30m/s
is obviously ridiculous. The uncertainty of 30 means that the digit 5 might really be as small as 2 or as large as 8. Clearly the trailing digits 1, 7, and 8 have no significance at all and should be rounded.”
All that wasted bandwidth, and you still can’t find a single quote that actually says that resolution cannot be increased by averaging.
“What in Pete’s name do you think he is saying when he says the trailing digits of 1, 7, and 8 have no significance?”
He means exactly what he says. If the uncertainty is ±30, then any of the digits after the tens column have no significance.
“The resolution of the measuring device is modulated by the measurement uncertainty and that includes the average of the data!”
Now you are just putting words in his mouth. Where in that section does he say anything about “the average of the data”? The rule is that the uncertainty of the result determines the number of digits you should report. Averaging, in many cases, reduces the uncertainty, and that allows you to report more digits.
“Chapter 4 is about multiple measurements of the same measurand!”
Which is what we’ve been talking about. 100 measurements of the same thing. But the fact you think the same rule does not apply when measuring different things is your problem.
“With *NO*, that means 0 (zero) systematic uncertainty!”
Remember at the start when I said I was describing the best case scenario, meaning assuming all errors were random? Of course you don’t, because you ignored the actual point of my comment and just reacted in the way you always do – triggered by my name.
You’d waste far less bandwidth if you didn’t insist on writing whole paragraphs in bold text. It might make you feel more important, but the text is much heavier and uses up your limited bandwidth.
As to the gist of your wild eyed rant – I’ve explained and demonstrated how the average of a sample of individual measurements taken at random can have a higher resolution than the individual measurements, many times. But it’s pointless, because your inability to admit you are wrong about anything just results in you misunderstanding any point I make.
“All that wasted bandwidth, and you still can;t find a single quote that actually says that resolution cannot be increased by averaging.”
The issue is that you simply can’t read and comprehend what you are reading.
If your instrument indicates a measurement out to the units digit, i.e. its resolution, but your uncertainty is in the tens digit YOU CAN’T CHANGE THE RESOLUTION BY AVERAGING so you can know the units digit! You don’t know the units digit, you will NEVER know the units digit, it will forever be in the realm of the UNKNOWN.
“He means exactly what he says. If the uncertainty is ±30, then any of the digits after the tens column have no significance.”
He says you can’t *KNOW* the units digit. He says “the digit 5 might really be as small as 2 or as large as 8″. That means you can’t KNOW the units digit! Again, you simply can’t seem to comprehend what you read. You are unable to apply what you read to the real world!
“Where does he say in that section anything about “the average of the data.”.”
Where in Pete’s name do you think the 6050.78 came from? A single measurement? or multiple measurements of the same measurand? Does it matter when the uncertainty is in the tens digit? You are *still* unable to relate what you read to the real world.
“Which is what we’ve been talking about.”
Again, this whole forum is about TEMPERATURE measurement in the real world. Why is that so hard for you to understand? When you try to force the discussion into your blackboard statistical world of 100 measurements of the same thing by the same instrument under the same conditions you are hijacking the entire forum and wasting everyone’s bandwidth!
“Remember at the start when I said I was describing the best case scenario”
The “best case” excuse is just your attempt to hijack the forum and force it into your blackboard statistical world. There is *NO* best case when it comes to global temperature measurement data sets.
“I’ve explained and demonstrated how the average of a sample of individual measurements taken at random can have a higher resolution than the individual measurements, many times. But it’s pointless,”
This is nothing more than the “numbers is numbers” meme of blackboard statisticians. You’ve demonstrated NOTHING of use in the real world of global temperature measurements around the globe being jammed into a “global average”. All you’ve demonstrated is that you 1. know nothing of metrology, 2. that you are willfully ignorant of metrology, and that 3. that you simply can’t relate to the real world the rest of us live in.
I see you’ve found a bit more bandwidth to waste.
“He says you can’t *KNOW* the units digit. He says “the digit 5 might really be as small as 2 or as large as 8″. That means you can’t KNOW the units digit! Again, you simply can’t seem to comprehend what you read.”
The irony. He says you can’t know the tens digit. That’s the 5 that may be anywhere between 2 and 8. He specifically says the units digit, “1”, is not significant.
“Where in Pete’s name do you think the 6050.78 came from?”
From out of Taylor’s head. It’s an arbitrary figure used to illustrate a point. It’s just referred to as “measured speed”. It could just as easily be any made-up measurement.
“This is nothing more than the “numbers is numbers” meme of blackboard statisticians.”
All my examples used real-world temperature data, e.g. CRN or CET. You, on the other hand, have given zero real-world demonstration of what you claim. Not a single example using real data that demonstrates how the average of multiple measurements leads to an average that is only correct to the measured resolution. All you do is yell “can’t, Can’t, CAN’T”, and try to present blackboard statistical rules that don’t actually exist.
Statistical descriptors are *not* crystal balls. If you don’t know the value then you don’t know the value. The statistical descriptor known as the “average” can’t tell you what you don’t know – at least in the physical real world. That’s the whole concept of uncertainty – “measurement best estimate +/- measurement uncertainty”. You can’t legitimately “estimate” beyond what you don’t know.
Until you learn that blackboard “numbers is just numbers” thinking ignores real-world limitations you’ll never understand measurement uncertainty. Uncertainty is not error. Error is not uncertainty. Resolution is a limitation on what you can know. No amount of averaging can remove that limitation.
“ Not a single example using real data that demonstrates how the average of multiple measurements leads to an average that is only correct to the measured resolution.”
Taylor gave you the example. You just refuse to comprehend what he is saying. If your uncertainty is in the tens digit you can *NOT* know the units digit even if your measurement resolution is in the units digit. It’s called measurement uncertainty. You are still doing nothing but trying to perpetrate a fraud on those depending on your physical measurements – claiming you can know what is part of the Great Unknown, no different than a carnival fortune teller.
Predictable, but still disappointing. No attempt to provide “real world” evidence for your claim, just a long rant through your usual clichés. And the only evidence offered is a reference to a textbook, at best blackboard statistical evidence. Except you still haven’t provided any evidence that Taylor says averaging cannot increase resolution, whilst ignoring the two clear examples where he says it can.
Look at NIST TN 1900 Example 2 and see if you can figure out why the temperatures have two decimals yet the average and interval values have only one decimal!
Read thru a number of the other examples to see why the standard deviation is used as the standard uncertainty.
“Look at NIST TN 1900 Example 2 and see if you can figure out why the temperatures have two decimals yet the average and interval values have only one decimal!”
For exactly the same reason it’s been the last 200 times you’ve asked. It’s determined by the uncertainty. The expanded uncertainty is ±1.8 to two significant figures, with the final digit in the tenths of a degree, so the result is reported to the nearest tenth of a degree. And in case you still haven’t noticed, this 0.1°C is still a finer resolution than the 0.25°C resolution of the original measurements.
“Read thru a number of the other examples to see why the standard deviation is used as the standard uncertainty.”
Because that’s the definition of standard uncertainty. But if the measurand is an average, that standard deviation is the standard deviation of the mean, or standard error of the mean. As in your TN1900, the standard deviation of the measurements is 4.1°C, the standard uncertainty is 4.1/√N = 0.87°C.
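The arithmetic is easy to check; here is a minimal sketch using only the figures quoted above (the 4.1 °C standard deviation and the 22 daily readings from TN 1900 Example 2):

```python
import math

# Figures quoted above from NIST TN 1900 Example 2:
# 22 daily readings with a standard deviation of 4.1 °C.
sd = 4.1   # °C, standard deviation of the daily readings
n = 22     # number of readings in the month

# Standard uncertainty of the monthly mean: the standard deviation
# of the mean (a.k.a. standard error of the mean), SD / sqrt(N).
sem = sd / math.sqrt(n)
print(round(sem, 2))   # 0.87
```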
I thought that averaging added resolution. Why is the standard deviation not 4.09? Why is the SDOM not 0.872? Why isn’t the average 25.59?
NIST surely knows what it is doing.
You didn’t show any math to support this. Your assertion does not indicate what the input quantities are. There are two options for making a measurand the result of an average of unique input quantities.
Option 1
m = X₁ + X₂ + … + Xₙ
where X₁ = x₁/n, X₂ = x₂/n, Xₙ = xₙ/n
Option 2
m = X₁
Where X₁ = x₁/n + x₂/n + …+ Xₙ/n
Whoops, they end up the same even though Option 1 has multiple input quantities while Option 2 has only one input quantity.
What is the Type B uncertainty for each of the measurements? Let’s assume an ASOS expanded uncertainty as specified by NOAA, i.e., 1.8°F. We’ll divide by 2 to get the standard uncertainty of 0.9. What is the uncertainty of “n”? It is a counting number and has zero uncertainty. Therefore we have:
u(x₁/n) = √[u(x₁)² + u(n)²] = √[(0.9)² + 0²] = 0.81
This will be the same for each input quantity, so we get for a 30-day month:
u(m) = √(30×(0.81)²) = 4.44
I wonder why NIST decided to use the random variable method to calculate the mean and uncertainty.
“Why is the standard deviation not 4.09? Why is the SDOM not 0.87? Why isn’t the average 25.59?”
They are all that and more, (in fact TN1900 quotes the SDOM as 0.872), but you know – significant figures and all that. There isn’t much point in quoting to more than a couple unless you are going to use them as part of a calculation.
“Whoops, they end up the same”
You do love to overcomplicate things. I assume what you are trying to say is that you can look at this in two ways. A random sample of daily values taken from a population of all possible daily values, or as 22 measurements of the same thing (the average temperature for that month) where the variation in the daily values is caused by measurement uncertainty. TN1900 uses the latter, in order to justify using GUM 4.2.3 and G.3.2.
Personally, I think this is stretching the idea of measurement uncertainty a bit, but it makes no difference how you interpret the exercise. The result is the same SD / √N.
“What is the Type B uncertainty for each of the measurements.”
Largely irrelevant.
“Wel’ll divide by 2 to get the standard uncertainty of 0.9.”
Why are you using Fahrenheit? This example uses Celsius. Your quoted uncertainty would be 0.5°C.
“u(x₁/n) = √[u(x₁)² + u(n)²] = √[(0.9)² + 0²] = 0.81”
You just have to be trolling at this point. You’ve had it explained so many times there is no excuse for this amount of ignorance.
x₁/n is a division, you have to use the rules for division, and that means adding the relative uncertainties. The result is that the relative uncertainty of x₁/n, is the same as the relative uncertainty of x₁ – which means that the absolute uncertainty of u(x₁/n) = u(x₁) / n. As should be obvious to anyone who understands proportions, or has read Taylor.
And, by the way:
√[(0.9)² + 0²] ≠ 0.81.
“This will be the same for each input quantity, so we get for a 30 day month..”
There are only 22 values in this month.
u(τ) = √(22×(0.5/22)²) = 0.5/√22 = 0.107°C
This would be the uncertainty of the exact mean of those 22 values. But NIST is treating the actual temperatures as random variables and finding the uncertainty of the mean of a supposed probability distribution for that month. Hence the uncertainty of 0.872°C.
If you wanted to combine these two uncertainties you would add them in quadrature.
√(0.107² + 0.872²) = 0.879°C.
As is generally the case, when you combine uncertainties with a large difference in size, the smaller one tends to become irrelevant. And in any case, combining the two is wrong, as the standard deviation of the temperatures already includes the variation caused by the measurement uncertainty.
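For anyone following along, here is a short sketch of that propagation, assuming (as above) a 0.5 °C standard uncertainty per reading, 22 readings, and the 0.872 °C standard deviation of the mean. It simply reproduces the arithmetic shown, whether or not one thinks the final combination is appropriate:

```python
import math

# Dividing by an exact count n scales the absolute uncertainty:
# u(x_i/n) = u(x_i)/n, since n has zero uncertainty.
u_x = 0.5   # °C, standard uncertainty of one daily reading
n = 22      # readings in the month (a counting number)

u_term = u_x / n                            # uncertainty of each x_i/n
u_exact_mean = math.sqrt(n * u_term ** 2)   # = u_x / sqrt(n)
print(round(u_exact_mean, 3))               # 0.107

# Combined in quadrature with the 0.872 °C standard deviation of
# the mean, the smaller term barely moves the result:
u_combined = math.sqrt(u_exact_mean ** 2 + 0.872 ** 2)
print(round(u_combined, 2))                 # 0.88
```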
“Three statements that have zero to do with my point that the SEM and CLT apply to non-symmetric distributions. We were talking about taking multiple measurements but he’s now talking about a single measurement.”
NO, this entire thread is about the measurement uncertainty associated with the measurement of temperature. *YOU* keep trying to lead the discussion off into your statistical world.
Multiple measurements of a measurand, namely temperature, simply don’t apply. They don’t exist. Only single measurements of different measurands exist. That means the SEM doesn’t apply. That means the CLT doesn’t apply. That means that your meme that all measurement uncertainty is random, Gaussian, and cancels doesn’t apply either.
Learn it, love it, live it. And stop wasting our bandwidth trying to pretend otherwise.
“NO, this entire thread is about the measurement uncertainty associated with the measurement of temperature. ”
Learn to read the thread. The comment I was responding to was:
If you want to apply that to global temperature measurements then resolution is pretty much irrelevant, as I’ve explained to you many times.
“That means the SEM doesn’t apply.”
You still don’t understand what sampling means. Single measurements of different things is exactly what sampling is, and it’s exactly when the SEM applies.
“Learn it. love it, live it.”
I do.
As usual, you are cherry picking.
I give you this from the GUM
———————————————————
3.2.3 Systematic error, like random error, cannot be eliminated but it too can often be reduced. If a systematic error arises from a recognized effect of an influence quantity on a measurement result, hereafter termed a systematic effect, the effect can be quantified and, if it is significant in size relative to the required accuracy of the measurement, a correction (B.2.23) or correction factor (B.2.24) can be applied to compensate for the effect. It is assumed that, after correction, the expectation or expected value of the error arising from a systematic effect is zero.
NOTE The uncertainty of a correction applied to a measurement result to compensate for a systematic effect is not the systematic error, often termed bias, in the measurement result due to the effect. It is instead a measure of the uncertainty of the result due to incomplete knowledge of the required value of the correction. The error arising from imperfect compensation of a systematic effect cannot be exactly known. The terms “error” and “uncertainty” should be used properly and care taken to distinguish between them.
——————————–
See the bolded part of the Note.
You continue to confuse error and uncertainty.
Systematic error can’t be known. You’ve already been given the quote from the GUM concerning that. All you can do is specify the UNCERTAINTY of any correction you make to compensate for the systematic effect.
You are beating a dead horse. You are a troll. You continue to remain willfully ignorant – the worst kind of ignorance.
Stop wasting your precious bandwidth.
“You continue to confuse error and uncertainty.”
You’re the one insisting I called systematic error, uncertainty.
“Systematic error can’t be known.”
It cannot be known exactly. Hence the uncertainty. Hence the GUM says to correct for the known systematic effect, then apply an uncertainty term to account for the uncertainty in the correction.
You keep arguing with yourself. None of this has anything to do with my comments.
You are lost in the woods. What you are saying is that:
Do ASOS or CRN stations add a correction factor to the measured temperature? How is the uncertainty of the correction determined? For that matter, how is the correction factor applied?
Doing what you discuss is done in labs and in controlled environments where repeated measurements can be done in repeatability conditions along with calibration prior to each measurement.
With single measurements, you cannot divine systematic uncertainty using statistics because it is a constant that appears on each measurement. It does not modify the probability distribution, it only shifts the distribution by changing the value of the mean by a constant amount.
“You are lost in the woods. What you are saying is that:”
Here’s what the GUM says
“With single measurements, you cannot divine systematic uncertainty using statistics because it is a constant that appears on each measurement.”
You keep mistaking your own lack of ability for everyone else’s. You might not be able to do it, but others can.
Your statements make no sense.
Random errors don’t exist. Type A uncertainty is determined by a probability distribution not by analyzing possible errors. The variance of the measurements in that distribution define the Type A uncertainty.
There is no “beating” one uncertainty with another. Uncertainties from different categories add, always!
If you have a digital device that has one decimal digit and take multiple readings of a reference and all readings are the same, then your device is very precise. If you get different readings, then your device is not very precise and the uncertainty is determined by the range of readings.
Resolution is a stand-alone uncertainty category. It is based on the fineness of markings on the device.
Ummmm, would that imply that resolution ceased being a meaningful parameter with the advent of electronic readouts on measurement instruments?
The display is the equivalent of markings on the device.
ROTFL . . .
One can visually interpolate between markings (i.e., measurement delineations) on a physically-inscribed measuring device, such as rulers, calipers, micrometers and thermometers.
One cannot visually interpolate between the step changes in the least significant digit on an electronic panel display on a measurement device.
Try it, and see.
Here is a document from NWS. Guess what it shows?
Two types of thermometer. One has 2°F markings and the other 1°F.
It also says.
Here is another NOAA document.
What is the resolution uncertainty of an integer value?
You need to be informed that most scientists in the world do NOT follow NWS or NOAA documents as to their “specifications” for reading and recording temperatures, in either field use or in laboratory environments.
You see, scientists frequently need to measure and record a temperature to 0.1 K or better resolution, in some extreme cases extending down to milli-Kelvins.
As but one example commonly seen here on WUWT, UAH/Dr. Spencer report monthly global and regional average lower-atmosphere temperature anomalies (based on satellite MSU-derived measurements) to 0.01 deg-C precision (see most recently https://wattsupwiththat.com/2025/08/03/uah-v6-1-global-temperature-update-for-july-2025-0-36-deg-c ). Not that I agree that such precision implies accuracy to +/- 0.01 deg-C.
Also NIST-traceable resolutions for current instrument measurement capabilities are:
— High-accuracy RTD digital thermometers to 0.01°F (or 0.005°C) resolution for temperatures between -148°F and 158°F (-100°C and 70°C)
— Standard Platinum Resistance Thermometers (SPRTs) to high accuracy (0.01°C) across a wide range (-200°C to 500°C)
— For some thermocouple types, like K and J, measurements with a precision up to ±0.1°C are possible.
So, in direct answer to your question (with its included references), I would not have guessed that the NWS and NOAA have agreed that temperature determination and recording to the nearest 1 deg-C or even to the nearest 1 deg-F is an acceptable practice given the capabilities of modern instrumentation!
P.S. The resolution uncertainty of an integer value, if based on one or more physical measurements or purely mathematical values, each good only to an integer value, is obviously plus/minus one integer . . . IMHO, that’s a pretty stupid question.
LOL. Show us some weather station data using LIG that is recorded to 0.1 K (≈0.2°F). The original premise was LIG bucket thermometers. Why don’t you address that issue and what resolution was available.
Because it is totally irrelevant to the question of whether most scientists follow NWS and NOAA documents that, according to you, imply/specify one need only record temperatures to the “nearest whole degree”, which is the issue to which I directed my comment that YOU quoted.
It seems that maybe there are two separate “Jim Gorman”s posting here.
If it is “irrelevant” to climate scientists, why do they use the recorded value from 1850 to ~1980? If they don’t “follow” NWS/NOAA documents, where do they find the records?
“according to me” – I gave you the manual title that you could have easily looked up.
Here is the title again.
PART2A. MANUAL WEATHER STATIONS: MEASUREMENTS;INSTRUMENTS
And the link.
https://novalynx.com/manuals/nfes-2140-part2a.pdf
Here is the manual for recording temperature again.
COOPERATIVE STATION OBSERVATIONS
3.4.5 HOW TO READ AND RECORD TEMPERATURES
Here is the link to the manual.
https://repository.library.noaa.gov/view/noaa/47558/noaa_47558_DS1.pdf
I implied nothing. You are too lazy to do your own research, not a good practice. Here is the text from the manual.
. . . and that’s why I replied “Really?”
bellman has never learned basic metrology, even after years of having it explained to him.
Tim Gorman can never accept he is wrong about anything even after years of demonstrating how wrong he can be.
You simply can’t accept that your logic implies that you can measure out to the millionths digit using a meter stick marked in centimeters if only you take enough measurements and average them.
Even a six year old child would have a problem believing this.
Proclaiming someone wrong, over and over again, doesn’t make him wrong.
Especially when he isn’t.
“Even a six year old child would have a problem believing this.”
A six year old child would have a problem believing the Earth goes round the sun. Stop using six year olds as your authority.
Seriously though, your entire argument is fallacious. The logical possibility of doing something doesn’t trump the practical impossibility of doing it. The practical impossibility of taking something to an infinite degree does not mean the logic is wrong.
Logic says that with a long enough lever you can lift the Earth. But in reality that’s impossible. This does not mean you can claim that levers do not work.
Go study your metrology books again.
Precision is how well a device provides a similar reading each time a measurand is measured. Resolution basically describes the ability to discern a reading.
The two are somewhat intertwined. You can’t have better precision than the resolution allows. However precision can be worse than the resolution.
“Go study your metrology books again.”
You mean all the ones that say if you measure the same thing multiple times, the uncertainty of the average is the uncertainty of the individual measurements divided by the root of the number of observations? Yes, they are simplistic.
“However precision can be worse than the resolution.”
Which is a good thing when taking an average. However the original post and the claim about being able to take 100 readings and get an accuracy to the nanometer was not based on resolution. The claim was that it was accurate to 0.001″ and could report to 0.0005″. I took the rather larger tolerance value of 0.005″ as the uncertainty.
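In numbers, that back-of-the-envelope figure works out as follows (the 0.005″ value is the assumed per-reading uncertainty from the comment, not a manufacturer spec):

```python
import math

# Assumed uncertainty of a single caliper reading (the tolerance
# figure taken as the uncertainty in the comment above).
u_single = 0.005   # inches
n = 100            # number of repeated readings

# Best-case precision of the mean of n readings, if the variation
# were purely random: u / sqrt(n).
best_case = u_single / math.sqrt(n)
print(best_case)   # 0.0005 inch – about 12700 nm, nowhere near 1 nm
```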
“However the original post and the claim about being able to take 100 readings and get an accuracy to the nanometer was not based on resolution. The claim was that it was accurate to 0.001″ and could report to 0.0005″.”
That was *not* the claim *YOU* made.
Your claim was that with 100 measurements you could obtain 0.0005″ resolution from an instrument with 0.001″ resolution.
You said: “If on the other hand there is enough variation in the measurements to “beat” the resolution limit,”
You can’t beat the resolution limit, not unless you have a crystal ball the rest of us don’t have.
“calipers have a precision of 0.001″
Precision and resolution are not the same thing. This has been explained to you multiple times but never seems to sink in. You just absolutely refuse to learn basic metrology.
“Assuming all the variance is caused by random errors.”
You keep saying you don’t assume that all measurement uncertainty is random, Gaussian, and cancels. Yet here you are, once again, applying that meme. You just can’t help yourself.
If the distribution of the readings is not Gaussian then being random does *NOT* cancel out the measurement uncertainty.
“Precision of 0.003mm, or 3 micrometers”
SEM = SD/sqrt(sample size)
For the SEM to be 0.003mm the SD would have to be 0.03mm:
(0.03/10) = 0.003
A resolution of 0.03mm is *NOT* the standard deviation of anything. Resolution is simply not the standard deviation of the measurements. It is not valid to use in calculating the SEM.
If the distribution of the readings is not Gaussian then the SEM is not even a valid estimate of measurement uncertainty.
If the resolution of the measurements is 0.03mm then you simply do not know anything about what is in the thousandths digit for any single measurement. You cannot determine what is in the thousandths digit using averaging or the SEM, that would be no different than asking the fortune teller at the carnival what the value in the thousandths digit is.
Again, for the umpteenth time, the SEM only gives you an estimate of what the population average is. That estimate can’t have any more resolution or accuracy than the measuring device provides.
The SEM is a metric for SAMPLING ERROR, not for the accuracy of the average. The accuracy of the average can be very poor if the accuracy of the measurements are very poor. Adding more inaccurate measurements into the average won’t increase the accuracy of the average, it will only increase the inaccuracy of the average.
Just for a fun pedant point regarding calipers (and, by extension micrometers), most* of the variation in readings is asymmetric. An outside reading won’t be low, and an inside reading won’t be high.
[*] The exception is a dimension which is right on the resolution half-width, which could, under the right conditions, be rounded down (or up, for an internal measurement).
That’s not the only asymmetry. Because of “slop” in the mechanism, closing the instrument down to a smaller value will result in a different reading than opening the instrument up to a larger value. That “slop” is typically not symmetric.
Not going through all of your insulting rant. You really need to follow up the comment chain to see what I said, and what I was responding to. But this one is getting really tired:
“If the distribution of the readings is not Gaussian then the SEM is not even a valid estimate of measurement uncertainty.”
You really need to try to understand the maths. The equation SEM = SD / √N has absolutely nothing to do with the shape of the distribution. All that’s required is that the random variables are independent and identically distributed. Then it just follows from variances adding.
Now, it can be useful if the distributions are Gaussian, as adding two independent Gaussian distributions always results in a Gaussian distribution, which allows you to convert the SEM into a probability distribution. But that’s where the CLT comes in. Even if the population distribution is not Gaussian, the sampling distribution will tend to a Gaussian as N increases.
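A quick simulation illustrates the point; the exponential distribution here is just an arbitrary example of a strongly skewed, decidedly non-Gaussian population:

```python
import random
import statistics

random.seed(42)

# Population: exponential with rate 1 (mean 1, SD 1) – heavily
# skewed, nothing like a Gaussian.
N = 100        # sample size
TRIALS = 5000  # number of repeated samples

means = [statistics.mean(random.expovariate(1.0) for _ in range(N))
         for _ in range(TRIALS)]

observed_sem = statistics.stdev(means)  # spread of the sample means
predicted_sem = 1.0 / N ** 0.5          # SD / sqrt(N)
print(round(observed_sem, 3), predicted_sem)  # both ≈ 0.1
```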
Not a single thing you’ve said here applies to the real world where different instruments measuring different things have their readings combined into a concatenated data set.
You can’t even understand what I said about the SEM. I didn’t say the SEM has anything to do with the shape of the distribution. I SAID the SEM is a metric for sampling error, it is *NOT* a metric for the accuracy of the population average. I also said you can’t use the resolution as a substitute for the standard deviation of either a sample or the population the way you did.
Resolution/sqrt(sample size) is *NOT* anything, especially the measurement uncertainty! And this is how you got from 0.03 to 0.003 with 100 measurements.
All you ever do on here is dig a hole for yourself. And then waste everyone’s bandwidth trying to use pettifogging to make everyone think you didn’t say what you said.
“And then waste everyone’s bandwidth trying to use pettifogging to make everyone think you didn’t say what you said.”
Your lack of self reflection is outstanding. All I said in this thread was that nobody claimed you could take 100 measurements with these calipers and get an “accuracy down in the nanometers”. I then gave a back of the envelope calculation to show the sort of precision (not accuracy) you could get “at best”.
You and your brother then jumped into the conversation, ignored the context, and just start yelling your usual insults and attacks on things I never said. And now you turn round and complain I’m wasting your time.
“I didn’t say the SEM has anything to do with the shape of the distribution.”
I quoted what you said:
“I also said you can’t use the resolution as a substitute for the standard deviation of either a sample or the population the way you did.”
Again, I made no mention of resolution. I used the original posters claim that the “accuracy” of the calipers was 0.001″, and used that as an indication of the uncertainty in order to demonstrate it’s impossible to get nanometer accuracy with the average of 100 measurements of the same thing.
“Resolution/sqrt(sample size) is *NOT* anything, especially the measurement uncertainty!”
And nobody has claimed it is.
“All I said in this thread was that nobody claimed you could take 100 measurements with these calipers and get an “accuracy down in the nanometers””
By dividing the resolution by the square root of 100. 1) Resolution is not a standard deviation. 2) SD divided by the square root of the sample size is *not* resolution.
” I then gave a back of the envelope calculation to show the sort of precision (not accuracy) you could get “at best”.”
By substituting the SEM as a metric of resolution. It isn’t. You simply did not prove, even “at best”, that you can improve resolution by taking multiple measurements.
And you are *still* trying to pettifog in order to obscure that you didn’t actually prove anything.
“You and your brother then jumped into the conversation, ignored the context, and just start yelling your usual insults and attacks on things I never said. And now you turn round and complain I’m wasting your time.”
Malarky! Both of us simply pointed out that the SEM is not related to resolution and that you can’t improve resolution by taking multiple measurements.
Then, as usual, you started throwing crap at the wall hoping something would stick. And then you got upset when we pointed out your explanations were crap and a waste of time and bandwidth.
” I used the original posters claim that the “accuracy” of the calipers was 0.001″, and used that as an indication of the uncertainty “
More pettifogging. 0.001″ was given as the RESOLUTION, not the accuracy. You confused resolution, accuracy, and precision into a mess because you’ve *never* taken anything concerning metrology seriously. You just keep pushing the meme that “all measurement uncertainty is random, Gaussian, and cancels” so you can use the SEM as the measurement uncertainty. And then you tried to equate resolution and SEM.
And you *still* can’t admit that resolution is not the same thing as the SEM. You refuse to admit that you can’t increase resolution by taking multiple measurements in the same way you can reduce the SEM by using an increased sample size.
Get out of your blackboard statistical world and stop wasting everyone’s bandwidth trying to fit global temperature data into being multiple measurements of the same thing using the same instrument under the same environment.
“By dividing the resolution by the square root of 100. ”
I did not do that. I divided the assumed uncertainty by root-100. Nobody mentioned resolution in this hypothetical discussion about a hypothetical set of calipers until you hijacked the thread.
I’ve said repeatedly that if the resolution of the calipers is lower than the variation in the observations the resolution will be a systematic error, and can not be reduced by averaging. This will be the case when you are using the same calipers on the same object under the same conditions and the resolution of these calipers is 0.001″. Then you will get 100 identical measurements, with no way of knowing where between ±0.0005″ the true value lies.
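A toy simulation makes the two regimes concrete (illustrative only: the “true” length, noise levels, and the quantizing read() helper are all made up for the sketch, not a model of any real caliper):

```python
import random
import statistics

random.seed(1)

def read(true_value, noise_sd, resolution=0.001):
    """One reading: true value plus noise, quantized to the display step."""
    v = true_value + random.gauss(0, noise_sd)
    return round(round(v / resolution) * resolution, 10)

true_value = 0.50037   # inches, deliberately between the 0.001" steps

# Noise far below the resolution: 100 identical readings, and the
# average is stuck at the quantized value.
quiet = [read(true_value, 1e-6) for _ in range(100)]
print(set(quiet))                          # {0.5}

# Noise comparable to the resolution: readings straddle the step,
# and the mean lands much nearer the true value.
noisy = [read(true_value, 0.001) for _ in range(100)]
print(round(statistics.mean(noisy), 4))
```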
“I did not do that. I divided the assumed uncertainty by root-100.”
Malarky! Precision is *not* measurement uncertainty. Precision is not resolution. Resolution is not measurement uncertainty. What was given was resolution of 0.001″ which you converted to a resolution of 0.03mm.
You then divided that 0.03mm resolution limit by 10 (sqrt 100) to get a smaller value. That number 100 came from your sample size of 100. Sample size is only a factor when calculating the SEM, not when calculating “resolution”.
All you did was highlight for everyone that you simply continue to fail to grasp the basic concepts of metrology.
“I’ve said repeatedly that if the resolution of the calipers is lower than the variation in the observations the resolution will be a systematic error,”
More garbage. It’s been pointed out to you over and over and over that all of the metrology experts cover the fact that systematic bias is *NOT* amenable to statistical analysis. Yet you continue to say they are wrong and you are right in that *YOU* *can* determine systematic bias in measurements using statistics. You even say the GUM is wrong when it says you can’t know systematic ERROR because you can’t know what the true value is. You’ve been told time after time ad infinitum that Error is *not* Uncertainty – yet you continue to confuse the two.
“This will be the case when you are using the same calipers on the same object under the same conditions and the resolution of these calipers is 0.001″,.”
More garbage! This also requires you to ASSUME that all uncertainty is from random fluctuation and that any systematic effects equal 0 (zero). This is an assumption that simply doesn’t apply in the real world of field measurements where you are *not* measuring the same thing each time and you are only making a single measurement each time.
You keep wanting to shoehorn the real world into your pristine blackboard statistical world. All in the faint hope of convincing people that the real world of global temperature measurements is correct in assuming that all measurement uncertainty is random, Gaussian, and cancels – the meme that is so stuck in your brain that you can’t even recognize when you are applying the meme.
“What was given was resolution of 0.001″ which you converted to a resolution of 0.03mm.”
Please stop lying about me. Here’s my exact quote. At no point do I mention the resolution.
You still don’t get that the purpose was to show an upper limit on the uncertainty that could be obtained with 100 measurements, and to demonstrate that it would be impossible to get to nanometer levels of uncertainty.
Precision is *NOT* uncertainty! How many times does that have to be pointed out to you before it finally makes sense?
“You still don’t get that the purpose was to show an upper limit on the uncertainty that could be obtained with 100 measurements”
You can’t get to uncertainty from either precision or resolution. Assuming the precision or resolution is the standard deviation of the measurements is simply wrong. Dividing by sqrt(100) *only* makes sense if you assume the precision or resolution *is* the standard deviation of the measurements. *AND* you also have to assume no systematic effects in the measurements.
You are *still* trying to shoehorn real world metrology into your blackboard statistical world where you can assume that all measurement uncertainty is random, Gaussian, and cancels. You can deny that you do that all you want but it just shines through in everything you assert concerning measurements.
“Precision is *NOT* uncertainty! How many times does that have to be pointed out to you before it finally makes sense?”
Perhaps it would help it make sense if you explained just what you mean, rather than assert your mindless tropes over and over and over.
Precision is synonymous with random uncertainty. You can define both in terms of the expected standard deviation of an infinite number of measurements.
“Assuming the precision or resolution is the standard deviation of the measurements is simply wrong. ”
From the VIM
“Precision is synonymous with random uncertainty. You can define both in terms of the expected standard deviation if an infinite number of measurements.”
NO! They are *not* the same. As usual, you have *not* studied this at all. You are just trying to push your uninformed opinion.
Here is what the research assistant from duckduckgo gives:
“Precision in metrology refers to the consistency and repeatability of measurements, indicating how close multiple measurements of the same quantity are to each other, regardless of whether those measurements are accurate. A precise measurement system yields similar results under unchanged conditions, but it can still be imprecise if the results are consistently far from the true value.” (bolding mine, tpg)
Here is an example:
Measurand: voltage across a resistor varying from 12.14v to 12.16v
Voltmeter 1: resolution – tenths of a volt; measurement uncertainty +/- 0.2v. Its readings can be very precise, always reading 12.1v for each and every reading. It simply can’t resolve random fluctuations in the hundredths digit. Standard deviation of the readings equals 0.
Voltmeter 2: resolution – hundredths of a volt; measurement uncertainty +/- 0.02v. Its readings can be quite imprecise, giving different readings for different measurements. Readings could be 12.14v to 12.16v if the measurement uncertainty is ignored.
You confuse precision of the measurement system with the precision of the measurand. See the bolded text above.
Resolution of the measuring system has nothing to do with the random fluctuations in the measurand. A precise measuring system can better track the random fluctuations in the measurand than an imprecise measuring system can. But it is the DATA that exhibits the random fluctuations, not the measuring system.
A measuring device that is imprecise should have that imprecision included in its measurement uncertainty interval. But that has nothing to do with resolution, e.g. 0.001″. You simply can’t get from resolution to measurement uncertainty. Resolution *can* be a contributing factor to measurement uncertainty but it is *more* applicable to choosing the right instrument for the purpose. Trying to use an instrument whose resolution is in the tenths digit to track random fluctuations in the hundredths digit is *not* using the proper instrument. It simply isn’t fit for purpose.
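The two-voltmeter example above sketches out like this; the display() helper assumes the meters truncate to their last displayed digit, which is an illustration rather than how any particular meter behaves:

```python
import math
import random
import statistics

random.seed(7)

def display(true_v, resolution):
    """Model a digital display that truncates to its last digit."""
    return round(math.floor(true_v / resolution) * resolution, 10)

# Measurand fluctuating between 12.14 V and 12.16 V, as above.
voltages = [random.uniform(12.14, 12.16) for _ in range(50)]

meter1 = [display(v, 0.1) for v in voltages]    # tenths-of-a-volt meter
meter2 = [display(v, 0.01) for v in voltages]   # hundredths-of-a-volt meter

print(set(meter1))                         # {12.1}: SD of the readings is 0
print(round(statistics.stdev(meter2), 3))  # nonzero: tracks the fluctuation
```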
“Measurement precision is usually expressed numerically by measures of imprecision, such as standard deviation, variance, or coefficient of variation under the specified conditions of measurement.”
Did you actually bother to understand the context behind this statement? It’s obvious you didn’t.
From: https://www.sciencedirect.com/topics/medicine-and-dentistry/measurement-precision
“Precision: Measurement precision is related to random measurement error (VIM 2.19) and is a measure of how close results are to one another. The term precision is used differently in measurement science and in common language. When we talk about measurement results within the analytical community precision expresses spread, but in common language it is synonymous with accuracy (closeness of agreement between a measured quantity value and a true quantity value of a measurand (VIM 2.13)). Measurement results cannot be corrected to remove the effect of random error but the size of the random error can be reduced by making replicate measurements and calculating the mean value.
Measurement precision is expressed numerically using measures of imprecision such as the standard deviation calculated from results obtained by carrying out replicate measurements on a suitable material under specified conditions (Figure 3). Examples of specified conditions are: repeatability conditions, intermediate precision conditions (also called within-laboratory reproducibility conditions), or reproducibility conditions (for details on these terms, see Section ‘Method Validation’).” (bolding mine, tpg)
The measurement precision is based on the results of the measurements, not on the resolution of the measuring device!
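The point can be made concrete: precision is computed from the spread of replicate results, not read off the instrument's resolution. A sketch with invented replicate readings:

```python
import statistics

# Ten hypothetical replicate readings of the same measurand under
# repeatability conditions (numbers invented for illustration).
readings = [12.15, 12.14, 12.16, 12.15, 12.15,
            12.14, 12.16, 12.15, 12.14, 12.16]

mean = statistics.mean(readings)
s = statistics.stdev(readings)        # the measure of imprecision
sem = s / len(readings) ** 0.5        # standard error of the mean

print(f"mean={mean:.3f}  s={s:.4f}  sem={sem:.4f}")
```

The standard deviation here comes entirely from the results themselves; two instruments with identical resolution can yield very different values of s under the same conditions.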
Your quote is from VIM 2.15, Note 1. The actual definition of measurement precision from 2.15 is: “closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under specified conditions”
As usual, you have been caught cherry picking rather than actually spending the time needed to understand the concepts.
You just continue to waste everyone’s bandwidth with garbage. You are a troll.
“They are *not* the same”
Maybe I should have said closely related rather than synonymous.
“Here is what the research assistant from duckduckgo gives”
Stop using your imaginary friends as authorities, and if you must, please quote the exact prompt you used. The fact your random word generator never mentioned uncertainty suggests you asked the wrong question.
Here’s what I get when I ask DDG “is measurement precision related to random measurement uncertainty”
…
Again, you simply can’t relate what you are reading to the real world.
“random measurement uncertainty affects that consistency”
You simply can’t relate the words “random measurement uncertainty” to the real world.
“The precision of a measurement is often quantified by the spread of repeated measurements,”
In metrology a PRECISION instrument is one that gives the same measurement of the same measurand under the same conditions. The entire AI definition you quote is based on that restriction: multiple measurements of the same thing by the same instrument under the same conditions.
You are STILL failing to comprehend what you are reading. The words “spread of repeated measurements,” simply goes in one of your ears and out the other.
“Yes, measurement precision is closely related to random measurement uncertainty.”
Let me bold this: MEASUREMENT PRECISION. Not “instrument precision” and not “instrument resolution”! You simply can’t know what you can’t know – and averaging is *NOT* a crystal ball that allows you to know what you can’t know!
“More garbage! This also requires you to ASSUME that all uncertainty is from random fluctuation and that any systematic effects equal 0 (zero). ”
If you read what I said, you will see that’s exactly what I did assume.
“This is an assumption that simply doesn’t apply in the real world of field measurements”
That’s why I said “at best”.
“where you are *not* measuring the same thing each time and you are only making a single measurement each time.”
The entire premise of this thread was that we were measuring the same thing and taking 100 measurements.
“It’s been pointed out to you over and over and over that all of the metrology experts cover the fact that systematic bias is *NOT* amenable to statistical analysis.”
If that were true then all of these metrology experts are clearly idiots. I’m nothing like an expert, but I can think of several ways to analyse systematic bias statistically.
“You even say the GUM is wrong when it says you can’t know systematic ERROR because you can’t know what the true value is. ”
More lies. I’m pretty sure I’ve never claimed you can “know” an exact systematic error. That’s why it’s uncertain.
“If that were true then all of these metrology experts are clearly idiots. I’m nothing like an expert, but I can think of several ways to analyse systematic bias statistically.”
Everyone in metrology is an idiot.
GUM D.4
“D.4 Error
A corrected measurement result is not the value of the measurand — that is, it is in error — because of imperfect measurement of the realized quantity due to random variations of the observations (random effects), inadequate determination of the corrections for systematic effects, and incomplete knowledge of certain physical phenomena (also systematic effects). Neither the value of the realized quantity nor the value of the measurand can ever be known exactly; all that can be known is their estimated values.”
First, you would have to be able to separate out random variation from systematic effects. IMPOSSIBLE TO DO IN THE REAL WORLD OF FIELD MEASUREMENTS.
Second, you would need to know the true value in order to evaluate the impact of the systematic effects. IMPOSSIBLE TO DO IN THE REAL WORLD OF FIELD MEASUREMENTS.
Bevington says on Page 3 of his book:
“The accuracy of an experiment, as we have defined it, is generally dependent on how well we can control or compensate for systematic errors, errors that will make our results different from the “true” values with reproducible discrepancies. Errors of this type are not easy to detect and not easily studied by statistical analysis.”
In fact, he doesn’t even discuss statistical methods for analyzing systematic effects in the rest of his book. For the very reason given in the quote!
I will repeat one more time. You are not living in the real world. In the real world where you have one measurement of a measurand, e.g. the air temperature at 10:02AM at one field location, there is nothing to create a distribution of values that can be analyzed statistically. You can separately measure the temperature every second of the day and you will still only have one measurement of a different measurand every single second. You won’t have 100 measurements of the same measurand forming a distribution of values that can be analyzed for an SEM, mean, or standard deviation.
You are once again trying to shoehorn real world temperature measurements into your blackboard statistical world by making assumptions that ignore the limitations of the real world.
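One point both sides of this exchange can test directly with a quick simulation: averaging replicate readings shrinks the random scatter, but a systematic offset common to every reading passes straight through the mean. The true value, bias, and noise level below are invented for illustration:

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 20.0   # hypothetical true temperature, deg C
BIAS = 0.5          # hypothetical systematic offset shared by all readings

# 10,000 readings: true value + fixed bias + random noise (sigma = 0.3 C)
readings = [TRUE_VALUE + BIAS + random.gauss(0, 0.3) for _ in range(10_000)]

mean = statistics.mean(readings)
print(f"mean of 10,000 readings: {mean:.3f}")  # near 20.5, not 20.0
```

No amount of averaging recovers the true value here, because nothing in the data itself separates the common bias from the signal; that requires an independent reference.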
You are wasting everyone’s bandwidth. You are a troll, pure and plain.
“You are wasting everyone’s bandwidth.”
Hint, if you want to avoid wasting your bandwidth, try sticking to the point rather than going on epic ad-hominem-filled rants.
“If you read what I said, you will see that’s exactly what I did assume.”
Why do you insist on coming on here and contaminating discussions of real world measurements with garbage fit only for a blackboard in statistical world?
“The entire premis of this thread was were measuring the same thing and taking 100 measurements.”
Which is nothing but a waste of everyone’s bandwidth. It has no place in the real world of field measurements of temperatures around the globe.
Does he even understand that resolution is a Type B uncertainty? Doesn’t sound like it.
He does.
bellman doesn’t even understand uncertainty, let alone Type A from Type B.
That’s only true if you use the same calipers to measure the same part over and over again. Measuring lots of different parts doesn’t improve your accuracy in the slightest.
“Calibration accuracy is not the issue. I can find you innumerable sets of hand held calipers that are ‘accurate’ or calibrated to .001″, and some will report to .0005″ but I will not accept one for measurements of parts with tolerances under .005″. “
Well, you are entitled to your opinion.
On the other hand, it appears you have never visited a modern calibration lab in any large manufacturing facility in any first world country on Earth and talked to the experts therein.
One can commercially purchase “Grade 00” or “Grade K” classification (or ASME B89.1.9 or Federal Spec GGG-G-15C equivalents) precision metrology gage blocks and pin gages
(see, for example:
https://www.penntoolco.com/precise-carbide-81-piece-rectangular-gage-block-set-grade-2-a-303-502 , or
https://www.higherprecision.com/products , or
https://westportcorp.com/collections/meyer-gage-pin-set-class-x )
that are guaranteed to be accurate to 0.0001 inch or better, traceable to NIST standards. These are commonly and frequently used to periodically confirm the accuracy of the calipers, micrometers and surface table metrology-stands-with-dial-indicators that are in general daily use within such facilities.
Of course, you would have to have some practical knowledge of how precision machine and/or metrology shops operate today to know this.
BTW, in aerospace manufacturing, it is not at all unusual for engineers and designers to dimension parts to a manufacturing tolerance of +/- .001 inch, and to have those as-built parts pass receiving inspection.
The point is that (Vernier and digital) calipers almost always read high on external measurements and low on internal measurements under field conditions.
It’s an inherent design limitation.
It is extremely difficult to get the jaws of (Vernier or digital) calipers perpendicular to the piece in 2 axes. Inside or outside calipers can be located better spatially, but then have the problems of joint play, backlash and transferring measurements.
1 thou isn’t terribly good. That’s 1960s automotive tolerances. And they weren’t measured with Vernier calipers.
You’ve been around haven’t you? High pressure diesel pumps on large horsepower tractors have better tolerances than that. It’s also one reason they have replaceable cylinder liners. Engines in big blocks designed to run at 9000 rpm need very tight tolerances.
Growing up, we didn’t have the ones with changeable anvils and non-rotating spindles. The high dollar ones with those stayed in dad’s clean room.
That’s one of my more useful (but more expensive) pastimes than mucking around online.
It’s more bush mechanics with a bit of machining, but it’s quite enjoyable.
I’m not at the Jim’s Automotive Machine Shop or Paul Henshaw level, but I cringe every time a Jennings Motorsport video appears on UselessTube.
Piston pumps (injector pumps or oil pumps) in those sizes would be working with clearances of a few tenths, or the pressure and volume would be all over the place.
Well, if one is at all familiar with parts used in automotive design and production, he/she would be aware of the sub-mil tolerances required for “interference press fits” and for “sliding fits” employed on many engine parts.
For example, Bosch states that automotive and truck fuel injectors have a manufacturing sliding fit tolerance of 1 micron (0.00004 inch), the finest mechanical tolerance of any mechanical component on the engine! Even a fuel injector’s stroke tolerance is in the range of ±3 to 5 microns.
Depending on the “class” of press fit, the tolerances for combined interference between the two mating parts (most often a parallel cylinder inserted into a circular hole) are in the range of 0.0001 to 0.00001 inch (a tenth of a mil to a hundredth of a mil). See: https://en.wikipedia.org/wiki/Engineering_fit . Such are also typical of “heat-shrink assembly” interference tolerances following assembly cooldown.
Humans simply could not build many of today’s machines if machinists could not fabricate, and manufacturing inspectors could not inspect, manufactured parts to much better than 0.001 inch accuracy.
Do you remember me saying:
Here is a page discussing honing piston rod bushings to fit piston pins. The post on the 1938 Pontiac is interesting. Fine resolution in measurements is not new.
https://www.practicalmachinist.com/forum/threads/sizing-connecting-rod-bushings.392937/
You have not mentioned doing any of this work with your own hands. It makes one wonder if you have actual experience. I worked in my father’s shop from the time I could carry a coal bucket. We worked on Farmall/IH farm equipment up to semi tractors. It’s where I learned “do it right the first time”.
Don’t forget the pressure applied to the measurand by the caliper heads. That’s quite difficult to make equal from measurement to measurement.
“That’s quite difficult to make equal from measurement to measurement.”
Well, that’s not usually the case for metals (even fully annealed pure copper) if the dimension of the object being measured is above 0.1 inches or so; that is, it is considered incompressible, not flexible. It may or may not be true for materials such as plastics, elastomers, and foams, depending on their modulus of elasticity.
Competent machinists and metrology inspectors know to minimize the force applied when contacting the surfaces of the object they are measuring, whether using calipers, micrometers, or other hand mechanical measurement devices, so as to avoid any significant elastic or plastic deformation of an object’s surfaces at the time of dimensional measurement.
Of course, modern CMMs (Coordinate Measuring Machines) that most large manufacturing businesses—and even quite a few small businesses—are now using in their metrology “labs”/inspection stations apply just the lightest contact force, based on sensing when motion of the sensing ball encounters resistance (i.e., makes surface contact).
There really is no surface deformation “difficulty” there for metals, glasses and most polymers.
“Well, that’s not usually the case for metals (even fully annealed pure copper) if the dimension of the object being measured is above 0.1 inches or so; that is, it is considered incompressible, not flexible.”
Once again you don’t know what you’re talking about. It’s not flexibility of the part, it’s flexibility and compression of the caliper blade and head. It’s a built-in source of uncertainty within a caliper that makes it unsuitable for tight tolerance applications.
There is a reason micrometers have clutch handles.
“Competent machinists and metrology inspectors know to minimize the force applied when contacting the surfaces of the object they are measuring, whether using calipers, micrometers, or other hand mechanical measurement devices, so as to avoid any significant elastic or plastic deformation of an object’s surfaces at the time of dimensional measurement.”
Minimizing the force applied is *not* the same as applying equal force across all measurements. If equal force is not applied then it will induce variation in the reading, likely asymmetric variation, since “feeling” the force being applied is subject to a detection threshold for most people.
None of this applies to the issue at hand- namely the measurement uncertainty of temperature measurements at different locations with different environments done by different devices.
Quit googling or using AI. Here is a page from Mitutoyo. Find the description of constant force thimbles. Remember, constant force is needed to ensure the thimble ends up at the same marking on every measurement, even on rigid material.
https://www.mitutoyo.com/webfoo/wp-content/uploads/Product-Fundamentals.pdf
I was trying to keep it simple, assuming the piece doesn’t deform. Deformation of the jaws can be an issue as well.
Calipers are good for getting into the ballpark, then you start getting serious. Micrometers do have torque compensation mechanisms. Using them correctly is a different matter…
BTW, it’s amazing what a difference wiping the measuring faces and workpiece can make when measuring to tenths of a thou.
That comment reflects your lack of practical experience using calipers to perform precision metrology. Almost all modern calipers (whether using an inscribed physical scale or a more modern circular gage or electronic digital readout) have a wide segment (as opposed to the “knife edge” portion) on their jaws to easily obtain perpendicularity of the jaws to the surfaces, curved or flat, being measured.
Furthermore, any person properly trained in the use of calipers knows to gently rock the jaws side-to-side when contacting the surfaces to be measured, whether curved or flat, so as to be sure to measure to minimum (i.e., the correct) dimension across the object . . . such practice can essentially eliminate any tilt-induced measurement error.
As all competent machinists and metrology inspectors know, there is no difficulty at all in correctly using vernier or digital calipers.
You’ve never really used a micrometer to measure crankshaft or rod journals have you?
Unless you have an anvil of the same size as the journal, you have to make sure both the anvil and spindle are at the high points and with the correct pressure. Every day in simple mechanics’ shops these readings are taken, just like LIG thermometer readings in an old Cotton Shelter. Those are FIELD readings, not lab readings in a controlled environment.
I’ll say it again, show us an early 20th century LIG thermometer recording sheet from some old decrepit weather station with entries to 0.1°F.
and it’s called ‘mic’ing the journal’ for a reason. Also, there is an industry standard term for a field reading…’FRO’ -FOR REFERENCE ONLY which means it is not valid for verification or approval.
That’s nice.
You use your Temu calipers, and I’ll stick to my Mitutoyo 0.0001″ micrometers.
I don’t use a steel rule for precision metrology, either.
and did you really use “calipers” and “precision metrology” together?
+ 💯
Calipers?
Competent machinists grab a micrometer.
COMPETENT machinists, as well as COMPETENT metrology inspectors, use the measurement device most suited to the job, switching freely between calipers and micrometers as needed.
For example, I’m just now wondering how useful a typical shop (“field use”) micrometer would be to a lathe machinist needing to check dimensions, absent NC control on his/her lathe, while in the process of cutting a male gland for a Parker 2-113 O-ring, which requires a groove width of 0.140 +.005/-.000, a groove depth of 0.080 ± .002, and a maximum groove eccentricity of .002 (all units in inches), given that the diameter of the typical micrometer’s anvil and spindle are 0.250”. (And yeah, I know there are specialty blade micrometers that can be purchased).
As a rule of thumb, micrometers are pretty useless for verifying groove depths and widths typically associated with engine-component O-ring glands and piston rings.
If one wants to hear practical, real-world experience regarding verifying “critical” piston groove dimensions to .001” precision using only calipers and a steel rule, check out this video: https://www.youtube.com/watch?v=12fx1cvoFek&t=30s .
At the time hack of 2m09s into the video, you can clearly see the guy is using a Mitutoyo digital caliper with readout resolution of either .0005 inch or .0001 inch.
🙂
That’s what we’ve been telling you. Calipers to get within a few thou, then switch to the good stuff.
You partially answered your own question. Depth mic + feeler gauges.
Notice how loose those tolerances are. You would come pretty close just relying on the DRO or even wheel index markings after validating the first couple with the correct measuring devices.
In a volume production setting, it would be done with go/no-go gages.
Probably grind the tool tip to width in the old days of HSS
Those Total Seal videos are aimed at home mechanics.
Did you notice he rounded the depth up from 0.169 to 0.170?
“That’s what we’ve been telling you.”
Yep. I make mid-priced jewelry of heirloom quality. I typically use a .1mm caliper for pre-work or non-precise production. When setting expensive stones I will use a Walmart Momo micrometer with a .01mm resolution or a Fowler caliper (.02mm) or a Starrett EC-799 with a .01mm resolution. Just depends on what I need. These are all low to mid-priced tools. High priced jewelers will typically have much more expensive devices.
That’s interesting, Tim. How well do your Starrett measurements compare to the micrometer?
Are you asking about accuracy? The micrometer almost never gives a larger measurement than the Starrett, e.g. 8.10 mm vs 7.95mm for the Starrett. Which is good. You don’t want to make your mounting larger in aperture than the stone is. You want your device to always read low so you can sneak up on the actual aperture needed. I don’t actually use the micrometer very much. The calipers allow you to use the jaws to mark on the silver or brass rather than using dividers to transfer the measurement to the material like you do with the micrometer. The micrometer comes in handy for measuring the diameter of burs used for drilling, e.g. cone burs/round burs/bud burs/etc. I’ve drilled out pieces of wood to keep the burs in with masking tape along the edge where I can mark the diameters. As the burs wear down you can move them down the series and continue to extend their useful life.
Thanks. This particular discussion started with trusting calipers to within better than 0.005″.
That’s around 0.006″ difference, with a cheapie micrometer and quality calipers.
Jewellery requires patience, fine motor control and artistic ability, so that rules me out 🙁
That’s why I only do mid-priced stuff. Patience I have. Eyesight and hand-eye, not so much. I only use “primitive” tools. No lasers. No argon. Files, pin vises, and pliers. Itty bitty files! 1.5mm stones are about as small as I can go.
Don’t let Tim kid you! All he really measures is how big of a hammer and anvil is needed! 🤠🤓
Given that I have 20+ years of manufacturing engineering experience in automotive and aerospace, and am quite familiar with manufacturing tolerances in tenths and microns, you should take your own advice about experts and stop digging this hole. I am also quite familiar with the real operational world of precision machine shops and lab metrology. The part you are blithely ignoring is the inspection control requirements for that 1-tenth spec. Because I will guarantee you that it won’t be the same value on the night shift machinist’s bench in January as it will be on the lead’s desk on some August afternoon, and any caliper verified and zeroed off of it will reflect that drift as error. That’s just the start of the measurement system process; the uncertainties grow from there.
Lab certification very seldom survives transport to the field location let alone ongoing use at the field location. The tighter the lab calibration tolerance the more likely that lab calibration won’t survive field use.
“Lab certification very seldom survives transport to the field location let alone ongoing use at the field location.”
Hmmm . . . really???
I wonder what NASA would say about that assertion, given their extensive reliance on pre-launch calibration of an extraordinarily diverse range of high precision and high accuracy instruments they use on spacecraft, including landers on astronomical bodies other than Earth. You know, lab re-certification not being possible once a spacecraft is launched, and all that!
Ditto for NOAA.
Ditto for the FAA.
Ditto for NIST certifications provided to commercial businesses.
Ditto for the DoD and its wide range of use of precision measurement and guidance devices, including GPS reliance on use of highly accurate and stable atomic clocks and highly accurate positional and velocity determinations based on radar or lidar.
I need not continue.
You’re finally right about something….you need not continue to demonstrate your ignorance.
Sensitive or critical devices that cannot be easily returned to a lab have field calibration procedures against either an included calibration standard or some known fixed natural phenomena. These procedures also include…get this….an allowable variance before the device is considered no longer fit for use.
Not sure this applies to temp measurement devices.
The only application to temp measurement is recognizing that the calibration precision of the gauge is often meaningless and has no bearing on fitness for purpose. Measurement process variation governs the value of the data and is application dependent.
I agree that calibration precision of the temperature measurement devices is meaningless since calibration to the microclimate is never done.
But that lack of calibration to the microclimate certainly has a bearing on the fitness of purpose of the data that is generated.
The usefulness of temperature data is governed by it being an intensive property which applies across all the applications used by climate science.
“Sensitive or critical devices that cannot be easily returned to a lab have field calibration procedures against either an included calibration standard or some known fixed natural phenomena.”
True in some cases, but clearly not all.
Straightforward example:
During a rocket launch to space, the rocket uses many pre-launch-calibrated sensors and instrument systems that deliver information critical to successful performance, such as temperatures, pressures, acceleration, inertial measurement unit (IMU) orientation, and RF communication/telemetry frequencies. No real-time (i.e., during-flight) calibrations are used on such rockets . . . there simply isn’t the time/bandwidth for such, and besides, history has shown that such is totally unnecessary. For individual sensors, or subsystems, outright failure is more often the problem than is calibration shift over such a short time window.
Now, you were saying something about demonstrating ignorance . . .
NASA would tell you that docking procedures require real-time adjustments because speed and spatial measurements have enough measurement uncertainty that it isn’t a “measure it and forget it” process. Same for orbital burns. Fine adjustment burns are required for that as well.
The DOD will tell you the same thing. Even precision munitions are not set it and forget it.
I bring up the subject of pre-launch calibrations of instruments and atomic clocks carried by spacecraft, and you respond with docking procedures and related measurement uncertainty, of which instrument calibration is only one part . . . and often not the largest!
That is not even a good attempt at deflection, but do carry on.
You are the one that seems to think that lab calibration carries on to field use.
To my assertion that lab calibration of an instrument doesn’t survive installation in the field
You said:
“I wonder what NASA would say about that assertion, given their extensive reliance on pre-launch calibration of an extraordinarily diverse range of high precision and high accuracy instruments they use on spacecraft, including landers on astronomical bodies other than Earth. You know, lab re-certification not being possible once a spacecraft is launched, and all that!”
The point is that measurement uncertainty is *always* a part of any measurement outside of a lab or a calibration prior to each measurement. Even lab certification specifies the uncertainty of the calibration!
I replied that ALL of the organizations you mentioned know my assertion is true. If it wasn’t then you wouldn’t need real-time control systems to handle docking of a spacecraft to a space station. You wouldn’t need supplemental engine burns to correct orbital positioning, etc. You wouldn’t need pilots in airliners.
“ instrument calibration is only one part”
And it doesn’t survive field installation no matter what you think.
Well, then, let me correct your assumption: I know FOR A FACT that lab calibrations commonly carry over to field use.
Actually, let me go further: I know for a fact that in many applications, particularly those in aerospace, sensors/instruments that have been calibrated by the manufacturer do not undergo recalibration during their design life.
The calibration stability of sensors/instruments is usually an integral part of the sensor/instrument qualification testing process, and may even be part of QC acceptance testing at the piece-part or higher subassembly level (based either on lot acceptance testing, or using 100% receiving acceptance testing).
Been there, done that.
If it were otherwise, the United States alone would likely need 2 or 3 orders-of-magnitude more calibration labs than currently exist.
From Hubbard and Lin, 2005, “On the USCRN air temperature system”
“Compared to the RMY and PMT temperature systems in the field, the USCRN PRT temperature sensor system is capable of reaching accuracies of ±0.2° to ±0.3°C at a 95% confidence level on a basis of monthly average if one of two comparative sensors, RMY and PMT, is used as an absolute reference.”
The lab calibration of the USCRN measurement station can be perfect but in the field it will still only have a ±0.2°C to ±0.3°C measurement uncertainty.
Measurement systems that never undergo calibration protocols on a recurring basis?
I know for a fact that every instrument in a modern military fighter jet, e.g. altimeter, engine temperatures, radar, undergo routine calibration at scheduled intervals. In fact, the altimeter in IFR qualified civilian aircraft have to be recalibrated every 24 months. See “§ 91.411 Altimeter system and altitude reporting equipment tests and inspections”. Measuring systems of more critical elements are re-calibrated more often.
The air temperature trendologists need tiny values of “uncertainty”, 10 mK or less, for their averages of averages of data that typically has combined uncertainties of 1-2°C (or higher). So they go through metrology texts like the GUM looking for loopholes or anything that might allow them to claim averaging reduces measurement uncertainty.
This is a step down from just ignoring the subject completely.
The air temperature trendologists are WILLFULLY ignorant of the concepts of metrology – the *worst* kind of ignorance.
Well, it was YOU that previously posted
“I can find you innumerable sets of hand held calipers that are ‘accurate’ or calibrated to .001″, and some will report to .0005″ but I will not accept one for measurements of parts with tolerances under .005″.
There is no doubt that today competent, professional machinists and metrology inspectors can find (and use) “innumerable” hand-held calipers that are accurate, and periodically calibrated/verified, to better than +/- .001 inch.
“There is no doubt that today competent, professional machinists and metrology inspectors can find (and use) “innumerable” hand-held calipers that are accurate, and periodically calibrated/verified, to better than +/- .001 inch.”
Jeez, calipers for tenths now??? The only thing there is no doubt about is that you’ve never been closer to a machine shop than your google chat, let alone turned a handle.
Certainly they can, and do, use calipers…to reliably measure to a range of about .005. Under .005 is a guess dependent on inspector skill and technique. If you ask them to certify a dimensional tolerance between .001 and .005 on a QC acceptance sheet you find those “skilled and professional machinists and inspectors” reach for a micrometer. Under .001, they fixture the part and go for a dial gauge.
Example for you . . . aluminum housing used on the 777X: material 7075 AL. Dimension: 2″ diameter OD landing, tolerance +0.0000/-.0007. The measuring process uncertainty, for a fixtured part with a clamped Mitutoyo digital MICROMETER, was such that the shop couldn’t verify a capable process and REQUIRED an environmentally controlled QC lab with the part fixtured in a Leitz CMM. Micrometer measurement produced a CpK of roughly 0.8; CMM verification demonstrated it was actually 1.4. The difference was measurement process variation.
The folks here cherry pick and probably don't realize that the entire process must be repeatable, and even then uncertainty enters the equation.
Heck, some “folks” here don’t even know that Cpk (the correct capitalization) as applied to metrology is a statistical measure that assesses a measurement process’s ability to consistently (i.e., repeatedly) produce output within specified limits (i.e., uncertainty).
Cpk < 1.0: the process is likely not capable of meeting the required specifications.
Cpk > 1.33: the process is considered capable and has good margin against error, meaning it can consistently produce output within specifications.
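For anyone following along, Cpk is simple to compute from a sample of measurements and the spec limits. A minimal sketch in Python, with made-up illustrative numbers (not the actual 777X data):

```python
# Cpk: process capability index relative to spec limits.
# Cpk = min(USL - mean, mean - LSL) / (3 * sigma)
import statistics

def cpk(measurements, lsl, usl):
    mu = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)  # sample standard deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Illustrative: a 2.0000" nominal with +0.0000/-0.0007 tolerance
lsl, usl = 1.9993, 2.0000
parts = [1.9996, 1.9995, 1.9997, 1.9994, 1.9996, 1.9995]
print(round(cpk(parts, lsl, usl), 2))  # about 0.79 -- not capable
```

Note the index is driven by the *nearer* spec limit, so a process that is off-center gets penalized even if its spread is tight.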
Control charts aren’t exactly useful for time series of temperatures.
Here is a graph I did a long time ago. It does show that temps aren’t all over the place.
(my bold emphasis added to your post)
Hah!
I know for a fact that competent machinists at aerospace firms use calipers to measure parts they are manufacturing to tolerances of .001" or better. This everyday use "in the field" is separate from any metrology lab or receiving inspection station. Of course such machinists also know they need to verify the accuracy of the caliper readout periodically by using their own calibration blocks or pin gages, or by running those calipers through the metrology lab for a "certified" re-calibration. Most aerospace manufacturing businesses require periodic certifications of the accuracy/repeatability of measuring devices used by machinists on a 3-month to 6-month cycle, sometimes extending to as long as one year.
ISO 9001 doesn’t specify a single, fixed frequency for calibrating calipers and micrometers used by aerospace machinists. Instead, it requires organizations to establish a calibration program based on many factors.
Any competent machinist or user of modern day calibers (e.g., hobbyist) knows that one can readily purchase hand-held digital scale calipers with readout resolutions to 0.01mm / 0.0005 inch and repeatability to 0.01mm / 0.0005 inch. For examples, see:
— https://www.tequipment.net/Mitutoyo/Digimatic-Caliper-M/Digital-Calipers (priced at about USD $240), and
— https://www.starrett.com/details?cat-no=EC799B-8/200 (priced at about USD $330).
As for your example with a stated part tolerance range of +.0000/−.0007 (assumed to be in units of inches), I am not at all surprised that a micrometer clamped on a surface table had a lower CpK than a Leitz CMM. Price ratio? Even assuming the caliper had readout resolution of .0001 inch, the ratio between a new micrometer and a new Leitz CMM would be, oh, about $200:$100,000, assuming the cheapest entry-level Leitz model.
Ooops, corrections needed to my above post:
Start of second paragraph: "calibers" should read "calipers".
And at the end of my last sentence: "the caliper had readout resolution" should read "the micrometer had readout resolution".
My bad.
Funny how the accuracy of the devices you referenced is ±0.001″. You should read everything instead of cherry picking.
Tell me how that spec gives a higher uncertainty than the resolution alone.
I referenced the resolutions and repeatability in response to Gino’s concern about the use of calipers to measure dimensions below .005″. I never mentioned accuracy, but perhaps you missed this.
But, hey, thanks for pointing out that the calipers for which I gave reference URLs were stated to be accurate to ±0.001″. While you find that “funny”, it will certainly come as a shock to Gino based on his previous comments.
Also the as-shipped “stated” accuracy of ±0.001″ does not rule out such calipers being later calibrated to higher accuracy, say ±0.0005″, if one wants to spend the money and time to do so.
As for your last demand, simple . . . contact the seller for that info.
Well, with your 20+ years of experience, you appear to be unaware that almost all modern metrology labs/manufacturing parts inspection stations that need to measure to .001″ or better accuracy are in rooms having HVAC to control that laboratory/station inside temperature to a very narrow range so as to eliminate thermal expansion/contraction of the part and the measuring devices as sources of error.
The internationally recognized standard reference temperature for dimensional measurements is 20°C (68°F).
Typical inspection room temperature stability requirements are to maintain ±1°C (±1.8°F) over time and space. But some labs, especially for the highest precision work (e.g., dimensional calibration to NIST standards), aim for even tighter control, such as 68°F ± 0.06°F in “curtained-off” or otherwise isolated areas that contain the metrology surface tables and measurement devices and instrumentation (including modern CMMs).
Furthermore, for parts that need to be measured to .0001″ or better accuracy, there typically is a minimum amount of time required for the part to sit idle in the inspection area to allow equilibration with the room’s temperature before a dimensional measurement or series of measurements are taken and recorded.
“Well, with your 20+ years of experience, you appear to be unaware that almost all modern metrology labs/manufacturing parts inspection stations that need to measure to .001″ or better accuracy are in rooms having HVAC to control that laboratory/station inside temperature to a very narrow range so as to eliminate thermal expansion/contraction of the part and the measuring devices as sources of error.”
Better than you. You're quoting shit I've known since before you were . . . well, you've got Google chat. At one of my companies my office was tied to the lab air handling systems but without a thermostat, so it received full cooling ALL the time. Good thing I was friends with the facilities team, because I got them to insert a perforated sheet into the air return to reduce flow to my section. So far all you've done is show your intelligence is artificial.
FYI, labs are maintained to ASTM temperature control. For tolerances down to about the .005 range (caliper level), floor station readings are frequently accepted after proving process capability. Tolerances ranging .005 to .002 are handled by micrometer or dial gauge at floor stations, depending on part configuration, and an inspection sheet is sent with the batch to the lab and verified with sampling to the appropriate AQL. The role of the quality engineer and manufacturing engineer is to review those kinds of reports against the measured process stability and determine whether the variance is critical or the inspection can be downgraded to floor level.
On the other hand, true precision machine shops control the ENTIRE shop to a fixed temp (68-72 depending on the outside environment) to maintain manufacturing precision. This isn’t for a calibration standard it’s a reduction in environmental uncertainties.
And you've now discovered that true measurement precision to tenths requires thermal equilibrium, which is expressed in millionths (most steels range ~6-9×10^-6 in/in·°F). A non-idiot would recognize that the resolution scale of (AT BEST) .0005, poor stiffness, and heavy technique dependency of something like blade calipers is orders of magnitude worse than that. I am not holding out hope for you.
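The millionths arithmetic above is easy to check. A quick sketch, assuming a mid-range steel coefficient of 6.5×10^-6 in/in·°F (the actual value varies by alloy):

```python
# Linear thermal expansion: dL = alpha * L * dT
ALPHA_STEEL = 6.5e-6  # in/in/degF, mid-range of the ~6-9e-6 quoted above

def expansion(length_in, delta_t_f, alpha=ALPHA_STEEL):
    """Change in length for a part of given length and temperature offset."""
    return alpha * length_in * delta_t_f

# A 2" steel part measured 5 degF away from the 68 degF reference:
print(expansion(2.0, 5.0))  # about 6.5e-05 in, i.e. ~0.000065"
```

So a few degrees of room drift already eats most of a tenth on a 2" dimension, which is why the tightest work gets a temperature-controlled lab and soak time.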
So, very accurate mercury thermometers were available. And from this you conclude that such thermometers were being used on all ships to take these measurements?
Do you have any evidence to support that belief, or were you just trying to change the subject?
The accuracy of the SST measurements is not a distraction, it is the issue.
And no, the errors were not centered, they all resulted in the bucket being cooler than the actual sea water.
MarkW,
You posted in your response to me:
I’m sorry to see that you, like some others, have an apparent reading comprehension problem. If you had bothered to trace this discussion upthread to my comment posted at 11:13 am on August 14, 2025, you would have perhaps realized that I clearly stated:
“Whether or not thermometers of such accuracy were used for bucket temperature measurements, who knows for sure?”
Furthermore, I repeated that statement in a subsequent thread post I made at 8:16 pm on August 14, 2025.
So, in reality I concluded nothing like what you attribute to me.
You also posted:
Again, if you had bothered to trace this discussion upthread to the OP made by potsniron at 7:18 am on August 14, 2025, you will see that the issue raised therein was:
“Tell me who did and how the sea surfaces were determined globally in 1857 . . . how can anybody draw conclusions from readings within an error bandwidth of 1.6 degrees C? I made an error analysis, which was published here at Watts Up.”
potsniron was asking how sea surface temperatures were determined globally, not what uncertainties were associated with such, because he already "knew" the answer to that (as he subsequently detailed in his lengthy post of 9:13 am on August 14, 2025).
Careless much?
bellman has the meme of “all measurement uncertainty is random, GAUSSIAN, and cancels” so cooked into his brain that he can’t get it out. There is no room for asymmetric measurement uncertainty in his statistical universe.
gorman has this idea that I keep saying all measurement uncertainty is Gaussian, despite the fact I keep having to explain to him that I don’t, giving examples where this is not true, and explaining why it does not matter. He just keeps repeating the same phrase over and over, yet seems to think I’m the one with the meme.
It doesn’t help that he doesn’t understand what Gaussian means.
And for how long might the water retrieval bucket have sat out on the warm ship’s deck under a hot sun before a thermometer was placed into the water it contained?
And of course, there is no possibility of determining post-facto if a given thermometer used to measure that water temperature read erroneously high or erroneously low, nor by how much.
That’s probably not the case in the tropics or sub-tropics (in late spring through early autumn).
Water temperature is usually lower than the air temperature, which is why people brave the sharks and box jellies for a cooling dip.
The temperatures still wouldn’t be centered.
The difference between air temperature and water temperature by latitude and month would make an interesting chart.
Air temperatures at higher latitudes would be further below water temperatures compared to the amount air temperatures are above sea temperatures in the tropics.
Another interesting chart would be the differences in measured water temperatures over 24-hour periods.
“Another interesting chart would be the differences in measured water temperatures over 24-hour periods.”
Especially a 3D one showing the differences by depth.
pffthpttt – if one desires an accuracy of 0.1 degree, the instrument should be at least three times as accurate, i.e. to 0.03 degrees. That's very basic for any type of measurement. And that is not related to Celsius or Fahrenheit or your toes checking the water. Centering or biasing an error is totally irrelevant when one adds systematic and random errors together. It is an error for you to look for such errors.
By the way, I still remember fondly playing with the mercury from a broken fever thermometer. However, I doubt they let sailors use such two hundred or so years ago.
Please convey your information to Google’s AI bot—indeed to all databases used by artificial intelligence on planet Earth—where I’m sure it will be given the consideration it deserves.
Don’t mix up accuracy and resolution. USCRN stations have a resolution of 0.1°C, but an uncertainty of ±0.3°C. Part of the reason is that there are more uncertainty categories than resolution such as repeatability, reproducibility, drift, etc.
That +/- 0.3°C uncertainty is instrument uncertainty at installation. It doesn’t include microclimate uncertainties or instrument drift over time due to station infrastructure changes (e.g. paint degradation from UV).
I meant what I said. If you want accuracy of 0.1 degrees in reading a temperature you cannot use a thermometer which is accurate to 0.1 degrees, never mind a resolution to 0.01 degrees. That gives you right away an error band of 0.2 degrees. Surely, one needs to add random errors, like parallax, etc. Sloppy is sloppy, never mind a pristine appearance.
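For what it's worth, the GUM approach combines independent uncertainty components in quadrature (root-sum-square) rather than simple addition. A toy sketch, treating each quoted figure as a standard uncertainty for simplicity (a rigorous treatment would first convert accuracy bands and resolution to standard uncertainties):

```python
# Combined standard uncertainty: root-sum-square of independent components.
import math

def combined_u(*components):
    """RSS combination of independent uncertainty components."""
    return math.sqrt(sum(c * c for c in components))

# Illustrative components: instrument accuracy 0.1 C,
# resolution 0.01 C, parallax/reading error 0.05 C
u = combined_u(0.1, 0.01, 0.05)
print(round(u, 3))  # about 0.112
```

Note the combined figure is dominated by the largest component; shrinking the resolution term does almost nothing while the 0.1 C accuracy term remains.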
In a long post above, I give a list of some of the known issues with bucket sampling.
” … found in the global SST over the 14 cycles from 1854–2007″
153 years / 14 cycles = about 11 years/cycle.
“The multidecadal trend of response to solar forcing”
How did we go from 11 years to "multidecadal"? Not saying it's wrong, just that some words are needed. I don't think most people hear 11 years and think "oh, that's multidecadal".
There were no worthy TSI reconstructions in 2010 – and their solar forcing estimate is too low.
From the paper, at least one wrong hypothesis is stated:
“The hypothesis that it is the solar peak years that
causes the La Niña–like response in the equatorial Pacific
(van Loon and Meehl 2008; van Loon et al. 2007)
and that one or two years later the response switches to
an El Niño–like pattern (Meehl and Arblaster 2009)
may still be correct, and it appears to be supported by
modeling results as reported in Meehl et al. (2009).” – my emphasis
They don’t even know if it’s correct.
It is wrong because over the previous nine solar cycles the eastern equatorial Pacific has followed an up and down pattern governed by solar activity (this cycle makes #10), and it doesn’t matter that a La Niña crops up following an El Niño in these solar max time periods.
The average tropical step-up from the solar minimum to the solar maximum year is about +1°C, followed by an asymmetric tropical SST step-down from the solar max into the next solar minimum.
The odds of these steps happening without solar forcing are 1.6*10^19:1, i.e., impossible.
There's really a simple test to ascertain whether or not solar energy has an effect upon the temperature. Take two dates, June 21st and August 21st; for the northern hemisphere, June 21st is of course the summer solstice, the longest day of the year. [The following is according to GROK.] On June 21st, Sacramento sees 14 hours and 53 minutes of sunlight; on August 21st, Sacramento sees 13 hours and 26 minutes, almost an hour and a half less sunlight. However, August 21st temperatures average about 4°F hotter than June 21st. On June 21st Earth is about 92.96 million miles from the Sun; on August 21st, Earth is 94.5 million miles from the Sun. On August 21st, Earth is about 1.7% farther from the Sun than on June 21st.
And Sacramento sees about 4°F warmer weather when Earth is about 1.7% farther from the Sun.
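A quick check of the distance figures quoted above (note the sign: Earth reaches aphelion in early July, so it is farther from the Sun in late August than in late June, which makes the warmth a matter of seasonal thermal lag, not distance):

```python
# Percent difference in Earth-Sun distance between the two dates,
# using the mileage figures quoted in the comment above.
d_june = 92.96e6   # miles, June 21
d_aug = 94.5e6     # miles, August 21
pct = (d_aug - d_june) / d_june * 100
print(round(pct, 2))  # 1.66 -- Earth is ~1.7% FARTHER in August
```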
These days in August are the canicular days
In the U.K. the coldest month tends to be February and the warmest August, a 6-8 week lag from the Solstices.
That’s not really true. On average the coldest month is January, and the warmest July.
Per wiki, the mean daily minimum for January is 1.49C, while for Feb it is 1.23C so from that it is quite reasonable to say that February reaches the coldest temperatures.
That’s for minimum temperature. Mean temperature is colder in January.
What else would one use when comparing how cold it gets?
The question wasn’t “how cold it gets”, but which are the coldest and warmest months. That would imply to me the coldest overall, including the day time temperatures.
In any event, the differences are small, and I doubt significant. You cannot definitively say that in the UK Februaries tend to be the coldest month.
Even using just TMin, from the MO data for the UK – 73 years had a colder February than January, whereas 67 had a colder January, and 2 were equal.
For TMean, 67 years had a colder February than January, whereas 74 had a colder January, with 1 tie.
It depends on what the definition of “is” is 🙂
If you’re getting up before dawn to bring the cows in for milking, the coldest is the minima.
There you go being all practical and stuff.
I found this “Around the coasts, February is normally the coldest month, but inland there is little to choose between January and February as the coldest month.” from https://www.calendar-uk.co.uk/frequently-asked-questions/how-cold-does-it-get-in-the-uk
Although I cannot find the equivalent for the warmest month, but I suspect that it’s August for coastal areas and July/August inland.
The reason for this that I remember reading many moons ago is the influence of the sea, including the Gulf Stream.
This is one of the reasons why trying to “average” coastal and inland temperatures together is plain garbage. You are forming a multi-modal distribution where the average is basically useless for real world purposes.
The U.K. is a small entity and if averaging coastal and inland temperatures is meaningless, then doing the same for areas the size of the 48 contiguous states or Australia or France is even less meaningful.
Makes sense given the sea temperatures. But for the UK as a whole, July is generally warmer than August, and I’d say it was a statistical tie between January and February for the coldest.
Comparing not just one pair but rather a huge number of pairs could make the method more trustworthy – except that to get results with good precision you would need accurate thermometers everywhere, and the accurate thermometer data would need to go back more than 20 years.
Any readings at all from much of the southern ocean only go back to 2005.
The accuracy of the methods of reading what did exist, anywhere, before then, is highly dubious.
There is absolutely no way that accurate enough “global” ocean SST data exists to carry out the sort of analysis this paper pretends to do.
Nobody is denying that solar energy has an effect on global temperature.
Absorbed solar radiation is in fact a major cause of ocean warming.
There is no evidence that CO2 has any effect on the global climate whatsoever.
Look at soil temps first. Soil absorbs insolation. In cold soil diffusion will subtract more than hot soil. A major lag factor in heating and cooling.
GROK found three papers in 2025 on the same topic, all with equivocal results. This paper from 2009 does seem to show, as noted below, the solar cycle lags the found SST cycle – the tiny tail wagging a HUGE dog.
More to the point is the quite nice data from CERN demonstrating that the solar cycle influences cloudiness through cosmic ray modulation. The response is prompt and inverse. The CERN director allowed the publication of the data (DOI: 10.1038/s41467-017-02082-2), but gagged the climate analysis.
Also, Leif Svalbard argued against the solar connections as I recall? He always had good observations and was worthy of respect, IMO.
sorry, Svalgaard…
didn’t mean to imply he was an island!
That’s just sad. a 12 hour wait in moderation. !!
An FN get to post continual idiotic crap.
Is he a protected troll like Richard Green and AlanJ ??
What has WUWT become.. woke leftism and censorship is rife??
Apophenia.
AMO temperature anomalies are in phase with solar cycles during a cold AMO phase, and anti-phase with solar cycles during a warm AMO phase. The AMO is always warmer during each centennial solar minimum.
http://www.woodfortrees.org/graph/esrl-amo/from:1880/mean:13/plot/sidc-ssn/from:1880/normalise