The Verdict of Instrumental Methods

Pat Frank

LiG Metrology, Correlated Error, and the Integrity of the Global Surface Air Temperature Record has passed peer-review and is now published in the MDPI journal, Sensors (pdf).

The paper complements Anthony’s revolutionary Surface Stations project, in that the forensic analysis focuses on ideally located and maintained meteorological sensors.

The experience at Sensors was wonderfully normal. Submission was matter-of-fact. The manuscript editor did not flee the submission. The reviewers offered constructive criticisms. There was no defense of a favored narrative. There was no dismissive language.

MDPI also has an admirable approach to controversy. The editors "ignore the blogosphere." The contest of ideas occurs in the journal, in full public view, and critical comment must pass peer-review. Three Huzzahs for MDPI.

LiG Metrology… (hereinafter LiG Met.) returns instrumental methods to the global air temperature record. It is a start-at-rock-bottom forensic examination of the liquid-in-glass (LiG) thermometer, 40 years overdue.

The essay is a bit long and involved. But the take-home message is simple:

  1. The people compiling the global air temperature record do not understand thermometers.
  2. The rate or magnitude of climate warming since 1900 is unknowable.

Global-scale surface air temperature came into focus with the 1973 Nature paper of Starr and Oort, Five-Year Climatic Trend for the Northern Hemisphere. By 1983, the Charney Report, Carbon Dioxide and Climate, was four years past; Stephen Schneider had already weighed in on CO2 and climate danger; Jim Hansen was publishing on his climate models; ice core CO2 was being assessed; and the trend in surface air temperature came to focused attention.

Air temperature had become central. What was its message?

To find out, the reliability of the surface air temperature record should have been brought to the forefront. But it wasn’t. Air temperature measurements were accepted at face value.

Errors and uncertainty were viewed as external to the instrument; a view that persists today.

LiG Met. makes up the shortfall, 40 years late, starting with the detection limits of meteorological LiG thermometers.

The paper is long and covers much ground. This short summary starts with an absolutely critical concept in measurement science and engineering, namely:

I. Instrumental detection limits: The detection limit registers the magnitude of physical change (e.g., a change in temperature, ΔT) to which a given instrument (e.g., a thermometer) is able to reliably respond.

Any read-out below the detection limit has no evident physical meaning because the instrument is not reliably sensitive to that scale of perturbation. (The subject is complicated; see here and here.)

The following Table provides the lower limit of resolution — the detection limits — of mercury LiG 1C/division thermometers, as determined at the National Institute of Standards and Technology (NIST).

[Table: NIST 1 °C/division mercury LiG thermometer calibration resolution limits (2σ, ±°C). a. Root-sum-square of resolution and visual repeatability. b. Uncertainty in an anomaly is the root-sum-square of the uncertainties in the differenced magnitudes.]

These are the laboratory ideal lower limits of uncertainty one should expect in measurements taken by a careful researcher using a good-quality LiG 1C/division thermometer. Measurement uncertainty cannot be less than the lower limit of instrumental response.

The NASA/GISS air temperature anomaly record begins in 1880. However, the largest uncertainties in the modern global air temperature anomaly record are found in the decades 1850-1879 published by HadCRU/UKMet and Berkeley BEST. The 2σ root-mean-square (RMS) uncertainty of their global anomalies over 1850-1880 is: HadCRU/UKMet = ±0.16 C and Berkeley BEST = ±0.13 C. Graphically:

Figure 1: The LiG detection limit and the mean of the uncertainty in the 1850-1880 global air temperature anomalies published by the Climatic Research Unit of the University of East Anglia in collaboration with the UK Meteorological Office (HadCRU/UKMet) and by the Berkeley Earth Surface Temperature project (Berkeley BEST).

That is, the published uncertainties are about half the instrumental lower limit of detection — a physical impossibility.

The impossibility only increases with the decrease of later uncertainties (Figure 6, below). This strangeness shows the problem that ramifies through the entire field: neglect of basics.

Summarizing (full details and graphical demonstrations in LiG Met.):

Non-linearity: Both mercury and, especially, ethanol (spirit) expand non-linearly with temperature. The resulting error is small for mercury LiG thermometers but significant for the alcohol variety. In the standard surface station prior to 1980, an alcohol thermometer provided Tmin, which puts 2σ = ±0.37 C of uncertainty into every daily land-surface Tmean. Temperature error due to non-linearity of response is uncorrected in the historical record.

Joule-drift: Significant bulb contraction occurs with aging of thermometers manufactured before 1885, and is most rapid in those made with lead-glass. Joule-drift puts a spurious 0.3-0.7 C/century warming trend into a temperature record. Figure 4 in LiG Met. presents the Pb X-ray fluorescence spectrum of a 1900-vintage spirit meteorological thermometer purchased by the US Weather Bureau. Impossible-to-correct error from Joule drift makes the entire air temperature record prior to 1900 unreliable.

The Resolution message: All of these sources of error and uncertainty — detection limits, non-linearity, and Joule drift — are inherent to the LiG thermometer and should have been evaluated right at the start, well before any serious attempt to construct a record of historical global surface air temperature. However, they were not; they were roundly neglected. Perhaps most shocking is the professional neglect of the instrumental detection limit.

Figure 2 shows the impact of the detection limit alone on the 1900-1980 global air temperature anomaly record.

Land surface temperature means include the uncorrected error from non-linearity of spirit thermometers. Sea surface temperatures (SSTs) were measured with mercury LiG thermometers only (no spirit LiG error). The resolution uncertainty for the global air temperature record prior to 1981 was calculated as,

2σ_T = 1.96 × √[0.7 × (SST resolution)² + 0.3 × (LST resolution)²]

= 1.96 × √[0.7 × (0.136)² + 0.3 × (0.195)²] = ±0.306 C, where LST is Land-Surface Temperature.

But global air temperature change is reported as an anomaly series relative to a 30-year normal. Differencing two values requires adding their uncertainties in quadrature. The resolution of a LiG-based 30-year global temperature normal is also 2σ = ±0.306 C. The resolution uncertainty in a LiG-based global temperature anomaly series is then

2σ = 1.96 × √[(0.156)² + (0.156)²] = ±0.432 C,

where 0.156 C is the 1σ value (0.306/1.96).
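The quadrature bookkeeping above can be checked in a few lines of Python (a sketch only; the resolution values are those quoted in the text, and the 70/30 ocean/land weighting is as stated):

```python
import math

# 1-sigma resolution components (deg C), as given in the text
sst_res = 0.136  # mercury LiG sea-surface resolution
lst_res = 0.195  # land-surface LiG resolution (includes spirit non-linearity)

# area-weighted quadrature: 70% ocean, 30% land
sigma_T = math.sqrt(0.7 * sst_res**2 + 0.3 * lst_res**2)
two_sigma_T = 1.96 * sigma_T  # approx +/-0.306 C

# an anomaly differences the annual mean against a 30-year normal,
# so the two (equal) 1-sigma uncertainties add in quadrature
two_sigma_anom = 1.96 * math.sqrt(sigma_T**2 + sigma_T**2)  # approx +/-0.432 C
```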

Figure 2: (Points), 1900-1980 global air temperature anomalies for: panel a, HadCRUT5.0.1.0 (published through 2022); panel b, GISTEMP v4 (published through 2018); and panel c, Berkeley Earth (published through 2022). Red whiskers: the published 2σ uncertainties. Grey whiskers: the uniform 2σ = ±0.432 C uncertainty representing the laboratory lower limit of instrumental resolution for a global average annual anomaly series prior to 1981.

In Figure 2, the mean of the published anomaly uncertainties ranges from 3.9× smaller than the LiG resolution limit at 1900, to 5× smaller at 1950, and nearly 12× smaller at 1980.

II. Systematic error enters into global uncertainty. Is temperature measurement error random?

Much of the paper tests the assumption of random measurement error; an assumption absolutely universal in global warming studies.

LiG Met. Section 3.4.3.2 shows that differencing two normally distributed data sets produces another normal distribution. This is an important realization. If measurement error is random, then differencing two sets of simultaneous measurements should produce a normally distributed error-difference set.
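A quick numerical illustration of that statistical fact (synthetic data, not the paper's): the difference of two independent normal measurement sets is itself normal, with mean zero and variance equal to the sum of the two variances.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.2  # assumed 1-sigma random error of each measurement set (deg C)

a = rng.normal(20.0, sigma, 100_000)  # simultaneous measurement set 1
b = rng.normal(20.0, sigma, 100_000)  # simultaneous measurement set 2
d = a - b                             # the error-difference set

# difference of independent normals: mean 0, std = sigma * sqrt(2) (approx 0.283)
print(round(d.mean(), 3), round(d.std(), 3))
```

A non-normal difference distribution, by contrast, signals that the underlying errors were not strictly random — the test applied to the field data below.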

II.1 Land surface systematic air temperature measurement error is correlated: The systematic calibration error of proximately located temperature sensors turns out to be pair-wise correlated.

Matthias Mauder and his colleagues published a study of the errors produced within 25 naturally ventilated HOBO sensors (Gill-type shield, thermistor sensor), relative to an aspirated Met-One precision thermistor standard. Figure 3 shows one pair-wise correlation of the 25 in that experimental set, with correlation r = 0.98.

Figure 3: Histogram of error in HOBO number 14 (of 25). The StatsKingdom online Shapiro-Wilk normality test (2160 error data points) yielded: 0.979, p < 0.001, statistically non-normal. Inset: correlation plot of measurement error — HOBO #14 versus HOBO #15; correlation r = 0.98.

High pair-wise correlations were found between all 25 HOBO sensor measurement-error data sets. The Shapiro-Wilk test has the greatest statistical power to indicate or reject the normality of a data distribution, and it showed that every single measurement-error set was non-normal.

LiG Met. and the Supporting Information provide multiple examples of independent field calibration experiments that produced pair-wise correlated systematic sensor measurement errors. Shapiro-Wilk tests of the calibration error data sets invariably indicated non-normality.

Inter-sensor correlation in land-surface systematic measurement field calibration error, along with non-normal distributions of difference error data sets, together falsify the general assumption of strictly random error. No basis in evidence remains to permit diminishing uncertainty as 1/√N.
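A small simulation shows why correlation defeats the 1/√N reduction. The error magnitudes below are hypothetical, loosely patterned on the HOBO setup: each sensor's error carries a shared systematic component plus a small independent term, so the sensor errors are highly pair-wise correlated and averaging N sensors barely reduces the error of the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_obs = 25, 2160  # counts as in the Mauder HOBO experiment

# assumed error model: shared systematic term + small independent noise (deg C)
shared = rng.normal(0.0, 0.30, n_obs)              # common to all sensors
indep = rng.normal(0.0, 0.05, (n_sensors, n_obs))  # sensor-specific
errors = shared + indep                            # broadcast over sensors

# pair-wise correlation is high, as in the field-calibration data
r = np.corrcoef(errors[0], errors[1])[0, 1]

# the error of the 25-sensor mean does NOT shrink as 1/sqrt(25) = 1/5
ratio = errors[0].std() / errors.mean(axis=0).std()
print(round(r, 2), round(ratio, 2))  # r near 0.97; ratio near 1, not 5
```

Only the small independent component averages down; the shared systematic component survives intact in the mean.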

II.2.1 Sea-Surface Temperature measurement error is not random: Differencing simultaneous bucket-bucket and bucket-engine-intake measurements again yields the measurement error difference, Δe2,1. If measurement error is random, a large SST difference data set, Δe2,1, should have a normal distribution.

Figure 4 shows the result of a World Meteorological Organization project, published in 1972, which reported differences of 13511 simultaneously acquired bucket and engine-intake SSTs from all manner of ships, at low and high latitudes in both hemispheres and under a wide range of wind and weather conditions. The required normal distribution is nowhere in evidence.

Figure 4: Histogram of differences of 13511 simultaneous engine-intake and bucket SST measurements during a large-scale experiment carried out under the auspices of the World Meteorological Organization. The red line is a fit using two Lorentzians and a Gaussian. The dashed line marks the measurement mean.

LiG Met. presents multiple independent large-scale bucket/engine-intake difference data sets of simultaneously measured SSTs. The distributions were invariably non-normal, demonstrating that SST measurement errors are not random.

II.2.2 The SST measurement error mean is unknown: The semivariogram method, taken from Geostatistics, has been used to derive the shipboard SST error mean, ±emean. The assumption again is that SST measurement error is strictly random, but with a mean offset.

Subtract emean, and one obtains a normal distribution with a mean of zero and an uncertainty diminishing as 1/√N.

However, LiG Met. Section 3.4.2 shows that the semivariogram analysis doesn’t produce ±emean, but instead ±0.5Δemean, half the mean of the error difference. Subtraction does not leave a mean of zero.

Conclusion about SST: II.2.1 shows measurement error is not strictly random. II.2.2 shows ignorance of the error mean. No grounds remain to diminish SST uncertainty as 1/√N.

II.2.3 The SST is unknown: In 1964 (LiG Met. Section 3.4.4) Robert Stevenson carried out an extended SST calibration experiment aboard the VELERO IV oceanographic research vessel. Simultaneous high-accuracy SST measurements were taken from the VELERO IV and from a small launch put out from the ship.

Stevenson found that the ship so disturbed the surrounding waters that the SSTs measured from the ship were not representative of the physically true water temperature (or air temperature). No matter how accurate, the bucket, engine-intake, or hull-mounted probe temperature measurement did not reveal the true SST.

The only exception was an SST obtained using a prow-mounted probe, but only if the measurement was made when the ship was heading into the wind “or cruising downwind at a speed greater than the wind velocity.”

Stevenson concluded, “One may then question the value of temperatures taken aboard a ship, or from any large structure at sea. Because the measurements vary with the wind velocity and the orientation of the ship with respect to the wind direction no factor can be applied to correct the data. It is likely that the temperatures are, therefore, useless for any but gross analyses of climatic factors, excepting, perhaps, those taken with a carefully-oriented probe.”

Stevenson’s experiment may be the most important investigation ever carried out of the veracity of ship-derived SSTs. However, the experiment generated scant notice. It was never repeated or extended, and the reliability question of SSTs the VELERO IV experiment revealed has generally been by-passed. The journal shows only 5 citations since 1964.

Nevertheless, ship SSTs have been used to calibrate satellite SSTs, probably through 2006. This means the earlier satellite SSTs are not independent of the large uncertainty in ship SSTs.

III. Uncertainty in the global air temperature anomaly trend: We now know that the assumption of strictly random measurement error in LSTs or SSTs is unjustified. Uncertainty cannot be presumed to diminish as 1/√N.

III.1 For land-surface temperature, uncertainty was calculated from:

  • LiG resolution (detection limits, visual repeatability, and non-linearity).
  • systematic error from unventilated CRS screens (pre-1981).
  • interpolation from CRS to MMTS (1981-1989).
  • unventilated Min-Max Temperature System (MMTS) sensors (1990-2004).
  • Climate Reference Network (CRN) sensor self-heating error (2005-2010).

Over 1900-1980, resolution uncertainty was combined in quadrature with the uncertainty from systematic field measurement error, yielding a total RMS uncertainty of 2σ = ±0.57 C in LST.

III.2 For sea-surface temperature, uncertainty was calculated from Hg LiG resolution combined with the systematic uncertainty means of bucket, engine-intake, and bathythermograph measurements, scaled by their annual fractional contributions since 1900.

SST uncertainty varied due to the annual change in fractions of bucket, engine-intake and bathythermograph measurements. Engine intake errors dominated.

Over 1900-2010, the uncertainty in SST was RMS 2σ = ±1.38 C.

III.3 Global: Annual uncertainties in land surface and sea surface again were combined as:

2σ_T = 1.96 × √[0.7 × (SST uncertainty)² + 0.3 × (LST uncertainty)²]

Over 1900-2010, the RMS uncertainty in global air temperature was found to be 2σ = ±1.22 C.

The uncertainty in an anomaly series is the uncertainty in the air temperature annual (or monthly) mean combined in quadrature with the uncertainty in the selected 30-year normal period.

The RMS 2σ uncertainty in the NASA/GISS 1951-1980 normal is ±1.48 C, and ±1.49 C in the HadCRU/UEA and Berkeley BEST 1961-1990 normal.

The 1900-2010 mean global air temperature anomaly is 0.94 C. Using the NASA/GISS normal, the overall uncertainty in the 1900-2010 anomaly is

2σ = 1.96 × √[(0.755)² + (0.622)²] = ±1.92 C,

where 0.755 C and 0.622 C are the 1σ values of the normal (1.48/1.96) and the annual mean (1.22/1.96), respectively.

The complete change in air temperature between 1900 and 2010 is then 0.94 ± 1.92 C.
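As a check on the arithmetic (the 1σ values are simply the quoted 2σ figures divided by 1.96):

```python
import math

sigma_annual = 1.22 / 1.96  # 1-sigma, global annual mean (from 2-sigma = +/-1.22 C)
sigma_normal = 1.48 / 1.96  # 1-sigma, NASA/GISS 1951-1980 normal (2-sigma = +/-1.48 C)

# anomaly = annual mean minus normal, so the uncertainties add in quadrature
two_sigma_anomaly = 1.96 * math.sqrt(sigma_annual**2 + sigma_normal**2)
# the 0.94 C 1900-2010 change then carries about +/-1.92 C of uncertainty
```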

Figure 5 shows the result applied to the annual anomaly series. The red whiskers are the 2σ quadratic annual combined RMS of the three major published uncertainties (HadCRU/UEA, NASA/GISS and Berkeley Earth). The grey whiskers include the combined LST and SST systematic measurement uncertainties. LiG resolution is included only through 1980.

The lengthened growing season, the revegetation of the far North, and the poleward migration of the northern tree line provide evidence of a warming climate. However, the rate or magnitude of warming since 1850 is not knowable.

Figure 5: (Points), mean of the three sets of air temperature anomalies published by the UK Met Office Hadley Centre/Climatic Research Unit, the Goddard Institute for Space Studies, or Berkeley Earth. Each anomaly series was adjusted to a uniform 1951-1980 normal prior to averaging. (Red whiskers), the 2σ RMS of the published uncertainties of the three anomaly records. (Grey whiskers), the 2σ uncertainty calculated as the lower limit of LiG resolution (through 1980) and the mean systematic error, combined in quadrature. In the anomaly series, the annual uncertainty in air temperature was combined in quadrature with the uncertainty in the 1951-1980 normal. The increased uncertainty after 1945 marks the wholesale incorporation of ship engine-intake thermometer SST measurements (2σ = ±2 C). The air temperature anomaly series is completely obscured by the uncovered uncertainty bounds.

IV. The 60-fold Delusion: Figure 6 displays the ratio of uncovered and published uncertainties, illustrating the extreme of false precision in the official global air temperature anomaly record.

Panel a is (LiG ideal laboratory resolution) ÷ Published. Panel b is total (resolution plus systematic) ÷ Published.

Panel a covers 1850-1980, when the record is dominated by LiG thermometers alone. The LiG lower limit of detection is a hard physical bound.

Nevertheless, the published uncertainty is immediately (1850) about half the lower limit of detection. As the published uncertainties get ever smaller traversing the 20th century, they get ever more unphysical, ending in 1980 at nearly 12× smaller than the LiG physical lower limit of detection.

Panel b covers the 1900-2010 modern period. Joule-drift is mostly absent, and the record transitions into MMTS thermistors (1981) and CRN aspirated PRTs (post-2004). The comparison for this period includes contributions from both instrumental resolution and systematic error.

The uncertainty ratio now maxes out in 1990, with the published version about 60× smaller than the combined instrumental resolution plus field measurement error. By 2010, the ratio declines to about 40× because ship engine-intake measurements make an increasingly small contribution after 1990 (Kent et al., 2010).

Figure 6: Panel a. (points), the ratio of the annual LiG resolution uncertainties divided by the RMS mean of the published uncertainties (2σ, 1850-1980). Panel b. (points), the ratio of the annual total measurement uncertainties divided by the RMS mean of the published uncertainties (2σ, 1900-2010). Inset: the fraction of SSTs obtained from engine-intake thermometers and hull-mounted probes (a minority). The drop-off of E-I temperatures in the historical record after 1990 accounts for the declining uncertainty ratio.

V. The verdict of instrumental methods:

Inventory of error and uncertainty in the published air temperature record:

NASA/GISS: incomplete spatial coverage, urban heat islands, station moves.

Hadley Centre/UEA Climatic Research Unit: random measurement error, instrumental or station moves, changes in instrument type or time-of-reading, sparse station data, urban heat island, bias due to changes in sensor exposure (screen type), bias due to changes in methodology of SST measurements.

Berkeley Earth: non-climate related noise, incomplete spatial coverage, and limited efficacy of their statistical model.

No mention by anyone of anything concerning instrumental methods of analysis, in a field completely dominated by instruments and measurement.

Instead, one encounters analyses paying no attention to instrumental limits of accuracy, to the consequences attending their technical development, or to their operational behavior. This, in a field where knowledge of such things is a pre-requisite.

Those composing the air temperature record display no knowledge of thermometers. Perhaps the ultimate irony.

No appraisals of LiG thermometers as instruments of measurement despite their preponderance in the historical temperature record. Nothing of the very relevant history of their technical evolution, of their reliability or their resolution or detection limits.

Nothing of the known systematic field measurement errors that affect both LiG thermometers and their successor temperature sensors.

One might expect those lacunae from mathematically adept science dilettantes, who cruise shallow numerical surface waters while blithely unaware of the instrumental depths below; never coming to grips with the fundamentals of study. But not from professionals.

We already knew that climate models cannot support any notion of a torrid future. Also, here and here. We also know that climate modelers do not understand physical error analysis. Predictive reliability: a mere bagatelle of modern modeling?

Now we know that the air temperature record cannot support any message of unprecedented warming. Indeed, almost no message of warming at all.

And we also now know that compilers of the air temperature record evidence no understanding of thermometers, incredible as that may seem. Instrumental methods: a mere bagatelle of modern temperature measurement?

The climate effects of our CO2 emissions, if any, are invisible. The rate or magnitude of the 20th century change in air temperature is unknowable.

With this study, nothing remains of the IPCC paradigm. Nothing. It is empty of content. It always was so, but this truth was hidden under the collaborative efforts of administrative embrace, partisan shouters, character assassins, media propagandists, and professional abeyance.

All those psychologists and sociologists who published their profoundly learned insights into the delusional minds, psychological barriers, and inadequate personalities plaguing their notion of climate/science deniers are left with egg on their faces or in their academic beards. In their professional acuity, they inverted both the order and the perceivers of delusion and reality.

We’re faced once again with the enormity of contemplating a science that has collapsed into a partisan narrative; partisans hostile to ethical practice.

And the professional societies charged with embodying physical science, with upholding ethics and method — the National Academies, the American Physical Society, the American Institute of Physics, the American Chemical Society — collude in the offense. Their negligence is beyond shame.

1.2K Comments
Tom Halla
June 29, 2023 6:18 am

Repeated measurements of different things, as with the temperature at different times and places, cannot be more accurate than any one individual measurement.
So claims otherwise are tendentious.

Reply to  Tom Halla
June 29, 2023 6:49 am

Those measurements are then “adjusted” in the process of assembling the anomaly records. It is ludicrous to assume or assert that those adjustments are random, or that they correct the inaccuracies of the measurements.

Reply to  Ed Reid
June 29, 2023 7:09 pm

Removing structural bias is standard in all science.

Reply to  Steven Mosher
June 30, 2023 2:50 am

“Correcting” temperature records is standard in Climate “Science”.

Monckton of Brenchley
Reply to  Tom Halla
June 29, 2023 9:15 am

Pat Frank has done it again! His ground-breaking 2019 paper on the effect of propagation of error in a single initial condition in the wretched models showed that any temperature prediction that fell between -12 and +12 K was statistically meaningless. I had the honor to be present at the annual planetary-emergencies meeting of the World Federation of Scientists at which Pat first presented his findings. He was shouted down by the rabble of profiteers of doom, but bravely stood his ground. His paper on the failure of clahmatawlagiests to understand such elementary statistical techniques as error propagation was submitted to 13 successive journals, 12 of which rejected it with spectacularly half-witted “reviews” that amounted to little more than “This paper does not accord with the Communist Party Line and, since the Communist Party Line cannot be wrong, the paper must be wrong and must not be printed.” Fortunately, 13th time lucky, the paper was reviewed by (among others) Professor Carl Wunsch, who is a believer in the official narrative but, unlike nearly all other believers, retains enough intellectual honesty to know when the Party Line must be amended, at least so as to prevent the Party from looking more ridiculous than it already does. He could find no fault with the paper, and recommended publication. Though there was some strikingly inept whining from the usual paid trolls for climate Communism, both in comments here and in the wider blogosphere, Pat’s paper of 2019 stands unrefuted to this day. It was, until now, the most important paper published in global warming studies.

Now the second shoe drops with Pat’s latest paper. First of all, warmest congratulations to him for getting it past the gatekeepers and into print in a leading journal. He has very fairly stated that there are physical indications that the world is warming somewhat, but has concluded that those who measure global land and sea surface temperature know as little about the physical characteristics of thermometers as they do about statistical precision. As Pat rightly points out, if data are ill-resolved as well as incorrectly sampled (as at sea) with inconstantly-performing instruments (as on land), and then have erroneous statistical analyses applied to the meaningless results, all the predictions fall well within the correctly-evaluated uncertainty envelope and, therefore, tell us nothing – absolutely nothing whatsoever – about the amplitude of global warming in recent times. He is correct to conclude that one cannot use terrestrial temperature records as the basis for attempting to derive global temperature anomalies, still less a trend therein. Already the paid trolls are piling in with their pseudo-statistical gibberish, as usual divorced from the reality that if each temperature measurement is itself meaningless it is not possible to derive any realistic trend in such meaningless pseudo-measurements (except by inadvertence) whatever technique of statistical prestidigitation is adopted. All that can legitimately be pleaded is that the weather worldwide seems by direct observation of potentially-temperature-correlated physical indicators to have become warmer to an unknown and thus far net-beneficial degree.

Pat’s latest paper, which he kindly sent me a couple of days ago, is a most delightful read. And, like his head posting here, it is expressed with a powerful eloquence rare among the scientific community. I look forward to seeing how the trolls try to attack his fascinating real-world finding that the glass used in thermometer bulbs changes its characteristics over time. The trolls would be well advised to stay away from Pat on this, for materials science of this kind is one of the specialisms of the Stanford Linear Accelerator Laboratory, where Pat is now a most distinguished Emeritus Professor. As usual, the Marxstream media will follow their usual tactic: since the paper is damaging to the Communist Party Line but cannot legitimately be attacked scientifically they will simply refuse to publish it, so that almost no one ever gets to hear of it. However, I shall see to it that this second ground-breaking paper is put into the hands of the governments that I quietly advise on the climate question.

With this paper, what little scientific credibility the official climate-change narrative once possessed has collapsed. The world will in due course have very good cause to thank Pat Frank for saving us from the darkness and abject poverty to which the profiteers of doom would callously subject the countries of the West and – revealingly – only the countries of the West.

Reply to  Monckton of Brenchley
June 29, 2023 9:50 am

As I see it, this new paper also implicates – as meaningless – any of the observation-based estimates of a climate system response (whether ECS or TCR) to CO2 “forcing” using such time series records of temperature anomalies. This is not meant as a criticism of any of those investigators.

Brock
Reply to  David Dibbell
June 29, 2023 11:42 am

The earth’s energy imbalance is the most important number, since it tells us directly how far away from equilibrium we are. There is a belief by many that we can control the EEI by controlling CO2 emissions. What’s interesting is the EEI appears to be quite insensitive to changing incoming radiation, according to the CERES satellite data. In fact, it would not be unreasonable to conclude that the climate sensitivity is essentially zero. The earth is deciding what EEI it wants and not even a variation in the sun’s luminosity can change that. By extension, neither can CO2.

Reply to  Brock
July 2, 2023 4:29 am

Thanks for your reply. Sorry for the late response.
“The earth’s energy imbalance is the most important number, since it tells us directly how far away from equilibrium we are.”

What is that number, and what is the uncertainty associated with it? I know what NASA says. Not too sure about that 10 milliwatts per square meter precision or the +/- 100 milliwatts per square meter uncertainty. [ https://ceres.larc.nasa.gov/science/ ]

Consider that the planet as a whole experiences a surface warming and cooling response of about 3.8C on an annual cycle – because absorption of solar radiation in the NH and SH is not symmetric. How far away from equilibrium are we? That cyclic surface temperature response is displayed here from the output of a reanalysis model.
https://climatereanalyzer.org/clim/t2_daily/

I agree that “In fact, it would not be unreasonable to conclude that the climate sensitivity is essentially zero.” Or perhaps put it this way – The surface temperature response to incremental increases in non-condensing GHGs cannot be reliably distinguished from zero by any means we have available to us.

Bob Armstrong
Reply to  David Dibbell
July 3, 2023 1:07 pm

I have yet to see an analysis of the ~4.3 C peak variation in equilibrium temperature which must occur from perihelion to aphelion around our orbit. It’s confounded with hemispherical differences, but can be calculated. Your 3.8 C variation is the first I’ve seen citing the annual variation, but I’d like to see an analysis factoring in the orbital contribution.

Reply to  Bob Armstrong
July 3, 2023 3:04 pm

I use the “3.8C” value because of Figure 10.2 in Javier Vinos’ recent book. I haven’t yet progressed far enough in his book to see if there is a deeper analysis.
https://judithcurry.com/wp-content/uploads/2022/09/Vinos-CPPF2022.pdf

By eyeball, the reanalysis web page I linked gives about the same value.

Reply to  Monckton of Brenchley
June 29, 2023 11:14 am

Thanks for your kind words, Christopher, and the accurate summary.

You’ve been a good friend and a comrade-in-arms for years, in the struggle for science and sanity. So you deserved first notice.

Still, the promotion to Professor wasn’t necessary. 🙂 Scientific Staff Emeritus is all I am.

Reply to  Monckton of Brenchley
June 29, 2023 12:59 pm

Here, here! Well stated, CMoB!

Reply to  karlomonte
June 30, 2023 12:57 am

Dear karlomonte,

You probably mean: hear, hear!

(I wonder if there is an ISO standard for that?)

All the best,

Bill

Reply to  Bill Johnston
June 30, 2023 4:02 am

Wrong.

And your dig on ISO standards is lame, even by your standards (pun intended).

youcantfixstupid
Reply to  Monckton of Brenchley
June 29, 2023 1:33 pm

“However, I shall see to it that this second ground-breaking paper is put into the hands of the governments that I quietly advise on the climate question.”

I don’t have such a mailing list, but I’m thinking an abstract of this paper and Pat’s previous one should be sent to every government official even remotely connected to acting on the ‘climate change’ scam. But also to every ‘climate scientist’ and anyone who might review their papers. Having knowledge of these two papers, any supposed ‘climate scientist’ who publishes a paper that doesn’t address the propagation of errors in the models they use, and the measurement-error inaccuracies in the data, is committing outright fraud. Not just a bit of ‘professional misconduct’…fraud.

Reply to  youcantfixstupid
June 30, 2023 2:52 am

You grossly overestimate the intellectual prowess and scientific understanding of Government bureaucrats.

Reply to  Graemethecat
June 30, 2023 10:03 am

Not to mention their unwillingness to hear or see anything outside of their ideology-driven mindset. It’s difficult to work out who might actually be receptive to this with a voice that might be heard.

Reply to  Monckton of Brenchley
June 29, 2023 4:29 pm

Thunderous applause for this message Monckton, at least from this quarter. The temperature record is not fit for purpose and neither are the climate models!

Reply to  Monckton of Brenchley
June 30, 2023 2:44 pm

With this paper, what little scientific credibility the official climate-change narrative once possessed has collapsed.

Does this mean no more “pause” articles or realitymeters? If it’s impossible to know what the trend is, how can you compare it to the models?

Monckton of Brenchley
Reply to  Bellman
July 1, 2023 4:23 am

The pathetic Bellman, who lacks the courage to publish in its own name, is perhaps unfamiliar with the difference between the surface and space, where the UAH satellite measurements are taken. The microwave sounding units on the satellites do not rely on glass-bulb mercury thermometers but on platinum-resistance thermometers. We use the satellite record, not the defective terrestrial record, for the Pause analyses.

The Pause may well end with the new El Niño.

Reply to  Monckton of Brenchley
July 1, 2023 5:07 am

Yet the satellite and surface data agree much more than they disagree. Some here insist the uncertainty in the UAH data is much bigger than that claimed here for the surface data. Maybe Pat Frank could say what he thinks the uncertainty is for UAH.

Reply to  Bellman
July 1, 2023 6:53 am

Some here insist the uncertainty in the UAH data is much bigger

A distortion bordering on a lie. But what else should be expected from a trendologist?

Reply to  karlomonte
July 1, 2023 7:06 am

So what do you think the monthly uncertainty of UAH is now?

In the past I think you’ve quoted figures of over 1°C, and you’ve called me a Spencer groupie when I thought it was probably a bit more certain.

Reply to  Bellman
July 1, 2023 8:11 am

Another lie.

You don’t really care what a real number is because it might upset your pseudoscience notions of tiny “uncertainties”.

Reply to  karlomonte
July 1, 2023 8:23 am

Sorry, you are correct it was actually ±3.4°C.

“A lower estimate of the UAH LT uncertainty would be u(T) = ±1.2°C; combined with a baseline number this becomes: u(T) = sqrt[ 1.2^2 + 1.2^2 ] = 1.7°C, and U(T) = 2u(T) = ±3.4°C”

https://wattsupwiththat.com/2021/12/02/uah-global-temperature-update-for-november-2021-0-08-deg-c/#comment-3401727

Reply to  Bellman
July 1, 2023 8:27 am

Oh look bellcurvewhinerman has little ol’ me stashed in his enemies files!

*preens*

And you still can’t read with comprehension.

Reply to  karlomonte
July 1, 2023 8:34 am

Oh look, the troll calls me a liar, then whines when I quote his own words back at him.

Have you heard of this thing called the internet? It allows you to do searches through other people’s comments. Maybe you should remember that next time you deny saying something.

Reply to  Bellman
July 1, 2023 9:05 am

The truth is you don’t care what a real number might be; your task is to denigrate anything that puts the precious GAT trend lines in doubt. This is the realm of pseudoscience. Thus you have to nitpick what Pat Frank writes regardless of relevancy.

So very Stokesian.

And you still can’t read with comprehension.

Nick Stokes
Reply to  karlomonte
July 1, 2023 9:38 am

“And you still can’t read with comprehension.”
As he quoted:
“A lower estimate of the UAH LT uncertainty would be u(T) = ±1.2°C; combined with a baseline number this becomes: u(T) = sqrt[ 1.2^2 + 1.2^2 ] = 1.7°C, and U(T) = 2u(T) = ±3.4°C”

It’s hard to see how that means anything other than that the uncertainty is ±3.4°C.

Reply to  Nick Stokes
July 1, 2023 12:08 pm

Oh look! Nitpick can’t read either.

No surprise here.

Reply to  Bellman
July 1, 2023 9:30 am

What interests me is the fact that they ADDED THE INDIVIDUAL UNCERTAINTIES of temp and baseline (in quadrature). Why is the same thing not done in climate science when computing anomalies??
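For illustration, the quadrature step in the quoted UAH estimate can be sketched in a few lines of Python. The 1.2 C inputs are simply the figures from the quote, not from any official UAH product:

```python
import math

def combine_in_quadrature(*uncertainties):
    """Root-sum-square combination of independent standard uncertainties."""
    return math.sqrt(sum(u * u for u in uncertainties))

# Numbers echoing the quoted example: a monthly value and a baseline,
# each with a standard uncertainty of 1.2 C, combined to form an anomaly.
u_anomaly = combine_in_quadrature(1.2, 1.2)   # ~1.7 C
U_expanded = 2 * u_anomaly                    # k = 2 coverage, ~3.4 C
print(round(u_anomaly, 1), round(U_expanded, 1))   # 1.7 3.4
```

The same root-sum-square step would apply to any anomaly formed as (value − baseline) from independent measurements.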

Reply to  karlomonte
July 1, 2023 8:29 am

Oh, and

“So he can argue with all the other Spencer groupies like yourself?”

https://wattsupwiththat.com/2022/06/01/uah-global-temperature-update-for-may-2022-0-17-deg-c/#comment-3527275

Reply to  Bellman
July 1, 2023 9:06 am

Heh more enemies files.

Got any more? This is amusing.

Reply to  Bellman
July 1, 2023 12:18 pm

T. Mo (2017) Postlaunch Calibration of the NOAA-18 Advanced Microwave Sounding Unit-A

Figure 1 shows the temperature specification error is ±0.3 C.

Post-launch error was about ±0.2 C for 250 days during 2005. From Table II, MSU avg. NedT: specification, 0.43±0.27; measured, 0.25±0.17.

Reply to  Pat Frank
July 1, 2023 2:36 pm

Does that mean the UAH monthly and annual average should be that large?

Reply to  Bellman
July 1, 2023 2:47 pm

Stokes: “ERRORS ALL CANCEL!”

Reply to  karlomonte
July 1, 2023 3:07 pm

But Frank believes uncertainties do not cancel if they are caused by resolution. That’s why I’m asking if he applies the same logic to UAH. I’m hoping so as it means the end of all the pause nonsense.

Reply to  Bellman
July 2, 2023 5:20 am

If the instruments are the same, LIG in this case, then how can the resolution uncertainty cancel?

Once again, you are showing, for everyone to see, that you have absolutely no understanding of the concept of uncertainty!

Reply to  Tim Gorman
July 2, 2023 6:48 am

I’m not going to keep explaining this to you. You claim to have done all the exercises in Taylor, but seem to have forgotten the ones demonstrating how this can happen.

But in this case you are ignoring the point. I’m not asking about surface temperature, but about UAH. If your logic is correct then it isn’t possible to reduce the uncertainty in the satellite data. If you can’t do that then Monckton is wrong to claim that UAH data does not have the same issues as claimed by Frank.

Reply to  Bellman
July 2, 2023 6:52 am

“Pipe down, class! Da expert ix tellin’ y’all like it is!”

Reply to  karlomonte
July 2, 2023 8:02 am

Still waiting for your expert opinion on the uncertainty of UAH.

Reply to  Bellman
July 2, 2023 8:50 am

[Free clue, feeling generous this AM]

And what you continually fail to grasp is that whatever I think or believe about this number is totally irrelevant.

Reply to  karlomonte
July 2, 2023 9:07 am

Finally something we can both agree on.

Reply to  Bellman
July 2, 2023 9:27 am

The WHOOOSH sound was the clue going over your head at Mach 5.

[no doubt the clue will go directly into your stupid enemies files]

Reply to  Bellman
July 2, 2023 12:58 pm

“but seem to have forgotten the ones demonstrating how this can happen.”

Not with the lower limit of instrumental detection.

Reply to  Bellman
July 2, 2023 11:29 am

The UAH uncertainties I posted are from calibrations. They’re not instrumental detection limits, which are categorically different. Pay attention.

Reply to  Pat Frank
July 2, 2023 12:25 pm

He can’t.

Reply to  Bellman
July 3, 2023 11:06 am

Bellman July 1, 2023 5:07 am

Yet the satellite and surface data agree much more than they disagree.

The main surface datasets disagree with both each other and the satellite data …

comment image

w.

Reply to  Willis Eschenbach
July 3, 2023 2:00 pm

But as I said, they agree more than they disagree. I’m talking particularly of the monthly and annual variations, which your very smoothed graph hides.

I’m also a bit puzzled why HadCRUT5 is showing less warming than UAH, assuming the red and black line is UAH.

Reply to  Bellman
July 3, 2023 2:17 pm

Here are the three main ones set to the same base period. HadCRUT and GISS are nearly identical, but UAH warms noticeably more slowly.

20220703wuwt1.png
Reply to  Bellman
July 3, 2023 2:44 pm

Here’s a scatter plot of UAH6 and GISS monthly anomalies. The correlation coefficient is 0.83.

20230703wuwt3.png
Reply to  Bellman
July 3, 2023 2:55 pm

And there’s the same but showing the claimed ±1.6°C 2σ uncertainty range.

20230703wuwt4.png
Reply to  Bellman
July 3, 2023 3:01 pm

Finally, here’s the same showing annual values.

Correlation coefficient is 0.91.

20230703wuwt5.png
bdgwx
Reply to  Willis Eschenbach
July 3, 2023 5:06 pm

In his paper Frank claims the 2σ uncertainty of the global average temperature anomalies is around ±2 C (see Figure 19). Your graph shows nowhere near that level of disagreement.

BTW… I’m confused about UAH in your graph. The absolute values of UAH TLT are around -10 C, about 25 C lower than the surface temperature, so why is the red-black line (assuming it is UAH TLT) so high up in the graph?

Reply to  bdgwx
July 3, 2023 6:28 pm

All the surface air temperature compilations use the same data, bdgwx. Resemblance doesn’t mean physically correct.

The only valid UAH-surface temperature comparisons would employ raw data.

Reply to  Pat Frank
July 3, 2023 7:21 pm

But UAH is not surface data. The satellite data is the closest thing I can think of to an independent comparison with surface data. The uncertainty in the difference between the two should be greater than the uncertainty of the surface data alone.

Reply to  Bellman
July 3, 2023 10:57 pm

They’re both temperature records. Whether they’re compared depends on what one desires to know.

In any case, uncertainty is not error.

The magnitude of error in each record, the surface and UAH, is not known. Subtracting the records won’t tell you about error.

But if you *do* subtract them, the uncertainty (not the error) in each record adds in quadrature to condition the difference.

Reply to  Pat Frank
July 4, 2023 3:29 am

He’s been told all of these many, many times. Gave up long ago that there is any hope he might ever understand.

Reply to  Pat Frank
July 4, 2023 5:14 am

In any case, uncertainty is not error.

As I’ve been trying to tell you. That’s why the uncertainty of the mean is not the same as the mean uncertainty. If everything has the same uncertainty, it does not mean that every measurement will have the same error.

The problem is that many here take the mantra “error is not uncertainty” as an excuse for saying the uncertainty can be anything you wish. They’ll say the uncertainty could be ±500°C, and it’s impossible to prove them wrong because “error is not uncertainty”.

But in my view, it doesn’t matter how you define uncertainty – either as the deviation of errors around the true value, or as an interval that characterizes the dispersion of the values that could reasonably be attributed to the measurand – you are still making a claim about how much difference there is likely to be between any measurement and the measurand. If you say there is a 95% confidence interval of ±2, then you are claiming that around 5% of the time you might get a measurement that differs from the measurand by more than 2.

If you make 500 measurements and it turns out that none of them were more than 2 from the correct value, then you might wonder if your uncertainty interval is a little too large. If it turns out that all your values differed from the true value by only a few tenths of a degree, it seems pretty clear that your uncertainty was far too big.

Reply to  Bellman
July 4, 2023 5:53 am

You do realize that every one of those 500 readings has its own uncertainty. With LiG there is whether the meniscus was properly read, resolution uncertainty, systematic uncertainty, what the GUM calls “influence effects”, etc. Electronic thermometers have their own set of uncertainties. Ultimately, each reading has its own distribution surrounding it. Variances do add, thereby expanding the deviation.

With temperatures, many months have skewed distributions. NIST in TN 1900 assumes a Student t distribution, which has longer tails and will more faithfully represent the uncertainty range after expansion. Knowing that there are skewed distributions, using simple arithmetic averaging is a joke unto itself. The GUM requires that a mean is proper only for symmetric distributions. It never details how to handle skewed distributions in computing uncertainty.

Here is a good question for you. How do you calculate uncertainty from a skewed distribution of measurements?

Dr. Frank has mentioned that climate science from the beginning has never done a scientific analysis of uncertainty in measurements nor of how to combine various locations together properly.

From the beginning someone said let’s just take a simple average and assume it is correct. Then somewhere along the line, someone decided what they were seeing wasn’t making sense, so they concluded, without evidence, that past temperature data needed correction. What’s worse, they made changes so that measurements that were changing due to different “influence effects” all matched, so LONG RECORDS could be manufactured. No science whatsoever.

Reply to  Jim Gorman
July 4, 2023 6:00 am

He doesn’t care, at all. His task is a political one.

Reply to  Jim Gorman
July 4, 2023 7:58 am

As usual with you I’ve no idea what any of this has to do with the point I was making, but there are a few interesting points.

“With temperatures, many months have skewed distributions. NIST in TN 1900 assumes a Student t distribution that has longer tails, will more faithfully represent the uncertainty range after expansion.”

I’m sure I’ve told you this before, but you are misunderstanding this. The point of using a Student t distribution has nothing to do with the data being skewed. On the contrary, it assumes the data is Gaussian. TN 1900 states that they are assuming the daily maximums are normally distributed, though with such a small sample it’s impossible to say either way. They even point out you might get a different uncertainty if you don’t assume a normal distribution, and talk about the procedure developed by Frank Wilcoxon to estimate the different uncertainty.

The point of a Student t distribution is to compensate for a small sample size – specifically caused by having to use the sample standard deviation, which will be an uncertain estimate of the population deviation.

Here is a good question for you. How do you calculate uncertainty from a skewed distribution of measurements?

A number of ways – but I’m not an expert on any of this.

For a start, the CLT suggests that the sampling distribution of the mean of a skewed distribution will still tend towards normality as sample size increases. If your sample size is large enough, the skewed nature of the data doesn’t matter. (Obviously you need to decide on what “large enough” means.)

Then there are specific statistical methods that can be applied to special cases (but I couldn’t tell you what any are).

You might also be able to apply a transform to the data to make it normal.

And, I suspect more usual nowadays, there are various Monte Carlo and resampling techniques. This is what I think TN 1900 recommends, and it avoids a lot of the statistical issues. Most of the statistics we are talking about were developed in the 19th and early 20th centuries, and are based on simplifying equations in order to make it possible to do the math. With computers a lot of this is less necessary, and it’s possible to just simulate taking multiple samples and see what the standard deviation of the means is.
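The resampling idea just described can be sketched with nothing but the standard library. The exponential population and every number below are invented purely for illustration; nothing is taken from any temperature dataset:

```python
import random
import statistics

random.seed(1)

# A strongly right-skewed toy "population": exponential, mean 10.
population = [random.expovariate(1 / 10) for _ in range(100_000)]

def mc_sampling_distribution(pop, n, reps=2000):
    """Monte Carlo estimate of the sampling distribution of the mean:
    draw `reps` samples of size n and collect their means."""
    return [statistics.mean(random.sample(pop, n)) for _ in range(reps)]

results = {}
for n in (5, 50):
    means = mc_sampling_distribution(population, n)
    results[n] = (statistics.mean(means), statistics.stdev(means))
    print(n, round(results[n][0], 1), round(results[n][1], 1))
```

The spread of the sample means shrinks roughly as 1/sqrt(n) even though the population itself is skewed; that is all the CLT claim amounts to here.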

Reply to  Bellman
July 4, 2023 8:37 am

“For a start, the CLT suggests that the sampling distribution of a skewed distribution will still tend towards normality as sample size increases.”

The CLT REQUIRES multiple samples in order to build a distribution of sample means that tends to Gaussian. Your “single sample” can *NOT* define a distribution!

Reply to  Bellman
July 4, 2023 6:00 am

But in my view, it doesn’t matter how you define uncertainty 

Another face-plant. Fortunately real metrologists care nothing for your view.

Reply to  Bellman
July 4, 2023 8:12 am

you are still making a claim about how much difference there is likely to be between any measurement and the measurand.

No.

Your statement shows a complete lack of understanding of uncertainty.

Ironically, after you agreed that uncertainty is not error, right there you went on to portray uncertainty as error.

Uncertainty is an ignorance width. You’ve heard that many times, Bellman. Clearly it’s never penetrated.

Reply to  Pat Frank
July 4, 2023 8:37 am

No.

Then say how you define uncertainty. In particular what would you mean by a 95% level of uncertainty?

Ironically, after you agreed that uncertainty is not error, right there you went on to portray uncertainty as error.

No. I said that uncertainty can be defined in terms of error, not that it is error.

Uncertainty is an ignorance width. You’ve heard that many times, Bellman. Clearly it’s never penetrated.

Because it’s hand waving. What are you ignorant about? What does a 95% ignorance width mean? It just feels like a phrase used to avoid understanding that probabilities are involved, and as a way of avoiding having to justify any conclusions.

Reply to  Bellman
July 4, 2023 10:58 am

Because it’s hand waving.

Roy, C.J. and W.L. Oberkampf, A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput. Methods Appl. Mech. Engineer., 2011. 200(25-28): p. 2131-2144.

“Aleatory (random) uncertainties in model inputs are treated as random variables, while epistemic (lack of knowledge) uncertainties are treated as intervals with no assumed probability distributions.” (my emphasis)

Helton, J.C., et al., Representation of analysis results involving aleatory and epistemic uncertainty. Int’l J. Gen. Sys., 2010. 39(6): p. 605-646

“epistemic uncertainty deriving from a lack of knowledge about the appropriate values” (my emphasis)

Vasquez, V.R. and W.B. Whiting, Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods. Risk Analysis, 2006. 25(6): p. 1669-1681

“[S]ystematic errors are associated with calibration bias in the methods and equipment…
“[E]ven though the concept of systematic error is clear, there is a surprising paucity of methodologies to deal with the propagation analysis of systematic errors. The effect of the latter can be more significant than usually expected. Important pioneering work describing the role of uncertainty on complex processes has been reported by Helton (1994, 1997) and references therein, Paté–Cornell (1996), and Hoffman and Hammonds (1994). Additionally, as pointed out by Shlyakhter (1994), the presence of this type of error violates the assumptions necessary for the use of the central limit theorem, making the use of normal distributions for characterizing errors inappropriate.” (my emphasis)

You’re ignorant of the subject, Bellman, you don’t know you’re ignorant, you don’t want to know you’re ignorant, you reject the discussions of professionals, you’re foolishly aggressive about promoting your ignorance, and you evidently think that everyone else is ignorant and aggressive in the same ways you are.

All of that makes you a nearly hopeless case. Perhaps to be ameliorated after a long bout of honest and very critical introspection.

Reply to  Pat Frank
July 4, 2023 1:17 pm

Thanks. Though the first paper is behind a paywall and the other two links are dead.

I don’t disagree with anything said in your quotes, but they don’t answer my question – what do you mean by a 95% ignorance interval? What does the 95% mean if it’s not referring to a 95% chance of a measurement being within the interval of the true value?

Saying we will never know the true value so the question is irrelevant, just leaves it as your word against anyone else’s; an unfalsifiable hypothesis.

Look at the passage from Bevington that Tim Gorman was talking about. There it’s just a simple estimate of the standard uncertainty interval, but he points out that the interval should mean that 7 out of 10 times your measureemnt should fall within the interval, and that exaggerate the interval to be on the safe side.

Reply to  Bellman
July 4, 2023 2:44 pm

You’ve been given the answer to this question in the past, but of course you pooh-poohed and rejected it.

Reply to  karlomonte
July 4, 2023 3:24 pm

You’ve been given the answer to this question in the past

Which was?

Reply to  Bellman
July 4, 2023 6:02 pm

Standard (GUM 4) versus expanded (GUM 6) uncertainty:

6.2 Expanded uncertainty
6.3 Choosing a coverage factor

Reply to  karlomonte
July 4, 2023 6:21 pm

Which is exactly what I would say. I’ve no idea why you keep claiming I disagree with this at all. You have the standard uncertainty, and multiply it by a coverage factor to get the expanded uncertainty.
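In code form, the GUM relationship both comments point at is just a multiplication. The u = 0.25 C input and the k values below are hypothetical placeholders, not figures from the thread:

```python
def expanded_uncertainty(u_standard: float, k: float = 2.0) -> float:
    """GUM 6.2/6.3: expanded uncertainty U = k * u(y), where the coverage
    factor k = 2 gives roughly 95% coverage for an approximately normal
    combined distribution."""
    return k * u_standard

u = 0.25                               # hypothetical combined standard uncertainty, C
print(expanded_uncertainty(u))         # k = 2 -> 0.5
print(expanded_uncertainty(u, k=3.0))  # k = 3 -> 0.75, ~99% coverage if normal
```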

Reply to  Bellman
July 4, 2023 4:06 pm

The links to Helton and Vasquez terminate in periods. Apologies for the oversight. Remove the periods and the links work.

You asserted that my statement, “Uncertainty is an ignorance width,” is handwaving.

It isn’t. It is analytically exact and has considerable literature precedent.

A 95% uncertainty (ignorance width) means statistically there’s ~95% chance that the physically correct value is somewhere between the two bounds. But there is no way to know where.

Uncertainty bounds wider than any possible physically reasonable value mean that the ignorance is total. No quantitative information is available about the correct value, at all.

For example, the diagnosis of total ignorance follows from the ±15 C 1σ uncertainty bound around the centennial air temperature projections made using climate models of the CMIP5 generation.

The ±15 C ignorance width is far larger than any physically reasonable air temperature expected for any year of the 22nd century (a concept beyond the grasp of every single climate modeler I have encountered). CMIP5 air temperature projections are physically meaningless.

“leaves it as your word” Rather, it leaves it as demonstrated by the quantitative analysis within LiG Met.

Unless you know a way to correct every single air temperature measurement taken between 1850 and 2023 for the impact of local environmental variables, a way to correct for unknown Joule-drift, and a way to remove the lower limit of detection from all LiG thermometers in use prior to 1981.

Know any way to do that? Neither do I. That leaves us with unknowable.

Bevington is about random error.
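As a toy check of what the 95% statement above implies for the purely random, Gaussian case that Bevington treats (every number below is invented for the illustration; no systematic error or detection limit is modeled):

```python
import random

random.seed(0)

TRUE_VALUE = 20.0   # hypothetical "physically correct" value, C
SIGMA = 0.5         # assumed purely random (Gaussian) measurement error, C
U95 = 2 * SIGMA     # expanded uncertainty with coverage factor k = 2

N = 100_000
measurements = [random.gauss(TRUE_VALUE, SIGMA) for _ in range(N)]
inside = sum(abs(m - TRUE_VALUE) <= U95 for m in measurements)
print(f"fraction inside +/-U95: {inside / N:.3f}")   # ~0.95
```

Note this exercises only random error; it says nothing about unknown systematic error or the lower limit of detection, which are the cases at issue in the thread.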

Reply to  Pat Frank
July 4, 2023 4:36 pm

Your responses make so much sense. I wish I had studied more about the subject.

My earliest experience was from working on engines and other equipment. Here is a nice video about overhauling an old tractor engine. Notice the exact measurement for valve guide insertion. Then the exact measurements and angles on the valve seats.

My lab training was in chemistry, physics, and EE labs in college.

My real world experience came from designing and building high performance HF transmitters and receivers. A real eye opener about high quality test equipment to make sensitive measurements. Resolution limit is very apparent when dealing in microvolts.

Reply to  Pat Frank
July 4, 2023 5:47 pm

A 95% uncertainty (ignorance width) means statistically there’s ~95% chance that the physically correct value is somewhere between the two bounds. But there is no way to know where.”

Which is what I would have expected it to mean. This still leaves the question of why you won’t consider that a test in which the physically correct value falls outside the bounds far less often than ~5% of the time suggests a flaw in the analysis.

Reply to  Bellman
July 4, 2023 5:02 pm

“what do you mean by a 95% ignorance interval?”

You’ve been given this multiple times and you can’t seem to remember it.

You even spit out what Bevington said but you refuse to understand it!

He did *NOT* say that it exaggerates anything. You just make crap up and expect people to believe it!

Once again, here is *exactly* what Bevington said so that others won’t assume you are correct.

“A study of the distribution of the results of repeated measurements of the same quantity can lead to an understanding of these errors so that the quoted error is a measure of the spread of the distribution. However, for some experiments it may not be feasible to repeat the measurements and experimenters must therefore attempt to estimate the errors based on an understanding of the apparatus and their own skill in using it. For example, if the student of Example 1.1 could make only a single measurement of the length of the table, he should examine his meter stick and the table and try to estimate how well he could determine the length. His estimate should be consistent with the result expected from a study of repeated measurements; that is, to quote an estimate for the standard error, he should try to estimate a range into which he would expect repeated measurements to fall about seven times out of ten. Thus he might conclude that with a fine steel meter stick and a well-defined table edge, he could measure to about +/- 1mm or +/- .001m. He should resist the temptation to increase this error estimate, “just to be sure.”

Several points should be gleaned by anyone that can read (that lets out bellman).

  1. This only works for repeated measurements of the same thing, as Bevington takes pains to state.
  2. If you can only make one measurement, as with temperature measuring stations, then you have to estimate the uncertainty of the measurement using judgement and knowledge of the apparatus.
  3. The uncertainty interval should *NOT* be exaggerated as bellman accuses Bevington of saying.

Once again, bellman shows that he simply cannot read and understand simple written English. He doesn’t accept that statistical analysis of measurements only works when you have totally random error generated from multiple measurements OF THE SAME THING. If there is any systematic bias then statistical analysis will not be able to identify it and the accuracy of the mean suffers because of it.

We all know that bellman, bdgwx, Stokes, and most of the climate science cadre simply do not believe this. They have their own truth – “all error is random, Gaussian, and cancels, even systematic bias associated with multiple measurements of different things.”

We all also know that bellman is going to make a fool of himself trying to refute these simple truths of metrology. I can’t wait.

Reply to  Tim Gorman
July 4, 2023 7:22 pm

He did *NOT* say that it exaggerates anything. You just make crap up and expect people to believe it!

What are you on about now? Who said anything about exaggerating anything? I merely pointed out that Bevington says you should not exaggerate the interval.

Once again, here is *exactly* what Bevington said so that others won’t assume you are correct.

Which is the same section I quoted. You seem to be reading much more into this than I meant.

“Several points should be gleaned by anyone that can read (that lets out bellman).”

This might be ironic if it turns out you are misunderstanding anything.

This only works for repeated measurements of the same thing, as Bevington takes pains to state.

In this case he says, in the passage you just quoted:

However, for some experiments it may not be feasible to repeat the measurements and experimenters must therefore attempt to estimate the errors based on an understanding of the apparatus and their own skill in using it. For example, if the student of Example 1.1 could make only a single measurement of the length of the table …

The uncertainty interval should *NOT* be exaggerated as bellman accuses Bevington of saying.

You’ve accused me of saying this many times in the last few posts and it’s such a bizarre misreading. All I said was to take note of the sentence where Bevington explicitly says you SHOULD NOT exaggerate. Why you think that means I’m suggesting Bevington is saying the opposite is incomprehensible.

If you can’t figure out why I pointed to that sentence, then you really aren’t paying attention to this article.

Reply to  Bellman
July 5, 2023 7:00 am

So you didn’t say:

bellman: “Look at the passage from Bevington that Tim Gorman was talking about. There it’s just a simple estimate of the standard uncertainty interval, but he points out that the interval should mean that 7 out of 10 times your measureemnt should fall within the interval, and that exaggerate the interval to be on the safe side.”



Reply to  Tim Gorman
July 5, 2023 8:06 am

Sorry. I hadn’t spotted that embarrassing mistake. It should have said “and the you should not exaggerate the interval…”

No excuses for my lax typing. But it would have been easier if you’d asked me about that comment there, rather than shouting about it in a different thread.

Reply to  Pat Frank
July 4, 2023 4:42 pm

“Aleatory (random) uncertainties in model inputs are treated as random variables, while epistemic (lack of knowledge) uncertainties are treated as intervals with no assumed probability distributions.” (my emphasis)

Thanks for confirming what I have been saying for two years. I never found a confirming reference. Uncertainty intervals have no probability distribution. If they did you could guess at the true value – like bellman always wants to do.

He doesn’t understand that uncertainty means you don’t know. Why he can’t accept that is beyond me.

Reply to  Tim Gorman
July 4, 2023 6:08 pm

Uncertainty intervals have no probability distribution.

I’m not sure that is what the paper is saying.

Aleatory uncertainties definitely have a probability distribution. What they do in the paper is not assume a probability distribution for the epistemic uncertainties. Not assuming doesn’t mean it doesn’t have one; it’s just that you are not assuming you know what it is.

We will represent epistemic uncertainty as an interval-valued quantity, meaning that the true (but unknown) value can be any value over the range of the interval, with no likelihood or belief that any value is more true than any other value.

But it still effectively means you are assuming a uniform distribution.

Reply to  Bellman
July 5, 2023 6:40 am

If you don’t know what it is then how do you know it even exists? How do you analyze it if you don’t know the distribution?

It doesn’t appear that you have any better understanding of aleatory uncertainty than you do of measurement uncertainty.

Assume I am rolling a die you can’t see. I give you the values of 4 rolls – 2, 3, 4, 5. What size die am I rolling? You can eliminate the d2, d3, and d4 dice because you would never get a 5 from them. But how do you know if it is a d6, d8, d10, d20, or even a d100? Each of those is a possible generator of those values. That is epistemic uncertainty, and there is *NO* probability function or distribution that you can use to determine which size die it is. Thus you have an unknown that you can’t resolve, an uncertainty.

Your uncertainty interval would range from d6 to infinity (assuming you could build a die that large). It is not a uniform distribution, because there is only ONE die being rolled. That die is the true value, with 100% probability of being the true value. The probability of all the other dice being the true value is zero. The problem is that you don’t know which die is the true value. The probability distribution giving the probability for each size die is unknown – which means saying it is a uniform distribution is nothing more than a subjective way for you to make sense of it! But it has no meaning in the real world. And it *still* doesn’t allow you to resolve the uncertainty – it all just remains an unknown! The numbers on a d6 *do* have a uniform probability distribution – each of them is equally likely to be the “true value” on the next roll. A far different thing.

Instrumental uncertainty you *can* quantify, at least within limits, based on the construction of the apparatus. And you can certainly find an average value for the instrumental uncertainty related to the physical constraints of the apparatus.

Reply to  Tim Gorman
July 5, 2023 7:55 am

That he tries to glean information about distributions or true values from uncertainty intervals indicates he still doesn’t grasp the basics.

Reply to  Tim Gorman
July 5, 2023 8:09 am

“How do you analyze it if you don’t know the distribution?”

Don’t ask me. That’s the subject of the paper.

Reply to  Bellman
July 5, 2023 9:19 am

That is *NOT* the subject of the paper! The paper is about the uncertainty INTERVAL, not the probability distribution of the possible values.

You need to admit to yourself at some point that you simply do not understand the concept of uncertainty and abandon your cult dogma. You’ll never fit your cult dogma into the reality of metrology. Neither will bdgwx. Neither will Stokes, nor will climate science.

You need to learn a few basic truths of metrology. They are simple and easy to understand if you approach with an open mind instead of cherry-picking stuff to try and disprove the fundamental truths of uncertainty.

  1. Uncertainty is not error. Error is not uncertainty.
  2. Uncertainty does not have a probability distribution. If it did you could estimate the true value instead of it being an unknown.
  3. The uncertainties of stated values always add; they never subtract, not even if you are subtracting the stated values.
  4. The average uncertainty is not the uncertainty of the population mean.
  5. The CLT requires multiple samples to develop a distribution. Without multiple samples you cannot assume a single sample is a Gaussian distribution that describes the closeness to the population mean with a standard deviation.
  6. You cannot increase measurement resolution through averaging. The average is a statistical descriptor and not a measurement. The average therefore inherits the resolution of the measurements, just like with uncertainty.

Learn these and you’ll stop posting assertions that don’t work in the real world. Leave your universe of “all uncertainty is random, Gaussian, and cancels”.
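Point 3 can be illustrated numerically. Under the GUM’s propagation rule for uncorrelated inputs, the variances of the inputs add even when the stated values are subtracted, because the sensitivity coefficients of a difference are +1 and −1 and get squared. A minimal sketch (the function name is my own):

```python
import math

def u_difference(u1, u2):
    """Combined standard uncertainty of y = x1 - x2 (uncorrelated inputs).

    The sensitivity coefficients are +1 and -1; after squaring them,
    the variances still ADD: u_y = sqrt(u1**2 + u2**2).
    """
    return math.hypot(u1, u2)

# Subtracting two readings each uncertain by 0.5 gives a LARGER
# combined uncertainty than either input, not a smaller one.
print(u_difference(0.5, 0.5))  # ~0.707
print(u_difference(3.0, 4.0))  # 5.0
```

The same root-sum-square applies whether the stated values are added or subtracted; only correlation between the inputs can reduce it.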

Reply to  Tim Gorman
July 5, 2023 12:13 pm

That is *NOT* the subject of the paper!

To be clear I meant the paper the quote was from:

A comprehensive framework for verification, validation, and
uncertainty quantification in scientific computing

Reply to  Bellman
July 5, 2023 12:38 pm

So what difference does it make? You *still* don’t know anything about the subject but feel that you can make unsubstantiated claims that something about it is wrong.

“Aleatory uncertainties definitely have a probability distribution.”

The aleatory uncertainties are treated as a random variable in the input to a model.

from the paper: “aleatory – the inherent variation in a quantity that, given sufficient samples of the stochastic process, can be characterized via a probability density distribution,”

Note carefully the words “sufficient samples”. How do you get sufficient samples of temperature when you get ONE measurement of one thing?

Is this part of your magic crystal ball that can determine a distribution from one value?

Reply to  Tim Gorman
July 6, 2023 4:27 am

But how do you know if it is a d6, d8, d10, d20, or even a d100?

Are those the only options? Don’t you have a d12?

“That is aleatory uncertainty and there is *NO* probability function or distribution that you can use to allow you to determine which size die it is.”

Firstly, do you mean “aleatory” here? Aleatory is uncertainty caused by randomness; it’s epistemic uncertainty that’s caused by a lack of knowledge.
Secondly, of course you can use probability to determine the likelihood or probability of any given die (depending on your definition of probability). It’s what statisticians have been doing for centuries.

Assuming these were your only 8 dice then in frequentist terms the likelihood of any die is the probability that you would roll those 4 numbers. It’s inevitably going to mean the d6 is most likely, because you are more likely to throw any given number on a d6 than a d8 etc.

In Bayesian terms, you need a prior, but as you are already pleading ignorance, you can just assume all of the 8 dice are equally likely. Then the probability of any specific die is the probability of getting those 4 numbers with that die divided by the overall probability of getting those 4.

My quick calculations (be sure to check them as I could easily have made a mistake) give the following probabilities:

d2: 0.00000
d3: 0.00000
d4: 0.00000
d6: 0.68770
d8: 0.21759
d10: 0.08913
d20: 0.00557
d100: 0.00001

Quite likely it is a six-sided die, and whilst it’s possible it might have been a d100, the odds are very small.
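As an editorial check, these posteriors can be reproduced in a few lines (uniform prior over the eight dice, exact arithmetic; the function name is my own) and they do match to the precision quoted:

```python
from fractions import Fraction

def die_posteriors(rolls, sizes):
    """Posterior probability of each die size given the rolls (uniform prior)."""
    # Likelihood: zero if any roll exceeds the die size, else (1/n) per roll.
    like = {n: Fraction(0) if max(rolls) > n else Fraction(1, n) ** len(rolls)
            for n in sizes}
    total = sum(like.values())
    return {n: like[n] / total for n in sizes}

post = die_posteriors([2, 3, 4, 5], [2, 3, 4, 6, 8, 10, 20, 100])
for n, p in post.items():
    print(f"d{n}: {float(p):.5f}")  # d6: 0.68770, d8: 0.21759, ...
```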

Reply to  Bellman
July 6, 2023 5:24 am

Since the rolls are random you cannot predict sequences. The probabilities of each individual roll do not sum to predict a sequence.

You are perfectly demonstrating the difference between aleatory and epistemic.

You can’t use probabilities to predict the next roll, it is random, no matter what size of die is involved. Each number on the die has an equal probability of being rolled. If you could predict the next roll every casino in Las Vegas would be out of business. If you can’t predict the next roll then you can’t predict any specific sequence either. Aleatory uncertainty.

You do *not* know which size of die it is. It could be any of them. Ignorance is epistemic.

Measurement uncertainty of temperature certainly has a major component of epistemic uncertainty. You simply don’t know the systematic bias let alone the random uncertainties. Ignorance.

“Epistemic uncertainty is also known as systematic uncertainty, and is due to things one could in principle know but does not in practice. This may be because a measurement is not accurate, because the model neglects certain effects, or because particular data have been deliberately hidden.”

What Pat has done is to pull the random curtain away from some of the components of measurement uncertainty, with respect to the measurement of temperature, by defining the minimum ignorance interval. It is still ignorance.

Reply to  Tim Gorman
July 6, 2023 3:31 pm

Such a pity. You finally come up with an interesting question, and one that doesn’t involve measuring bits of wood you’ve picked out of the sewer – and when I give my answer you move the goalposts again.

Since the rolls are random you cannot predict sequences. The probabilities of each individual roll do not sum to predict a sequence

You were not asking about predicting the next number in a sequence. You were asking about the likelihood of the particular die you were rolling.

You can’t use probabilities to predict the next roll, it is random, no matter what size of die is involved.

The point of probability is not to predict the next roll, it’s to tell you the probability.

You do *not* know which size of die it is. It could be any of them. Ignorance is epistemic.

But as we see, we can generate a probability distribution based on the evidence. We are not completely ignorant. And we can then use that probability to work out the probability of the next throw. It’s very unlikely to be 100, because it’s very unlikely you are using a d100. It’s more likely to be between 1 and 6, because a six-sided die is most likely.

Measurement uncertainty of temperature certainly has a major component of epistemic uncertainty.

And if you want to do an uncertainty quantification you need to estimate what they are. But just claiming they must be big seems like wishful thinking if, as you say, you are ignorant of them. Any modelling then needs to see how these uncertainties will affect the outcome. This would, I expect, depend on whether you think all instruments have the same epistemic uncertainty or whether they are all different, and on whether they had the same values during the base period as they do now.

It is still ignorance.

That’s how it seems to me.

Reply to  Bellman
July 6, 2023 4:27 pm

“You were not asking about predicting the next number in a sequence.”

ROFL!! I didn’t give you a sequence of numbers? Wow, just WOW!

“The point of probability is not to predict the next roll, it’s to tell you the probability.”

Cop-out! Completely expected. You thought you could tell what die was being used and when shown you couldn’t you punt!

“But as we see, we can generate a probability distribution based on the evidence.”

You said you can’t predict the next roll! That makes it what kind of uncertainty? You can run. You can pettifog. You can equivocate. But you can’t hide!

“It’s very unlikely to be 100, because it’s very unlikely you are using a d100.”

We use a d100 in D&D role play lots of times to choose items from a table. You can’t even get this one right!

“But just claiming they must be big, seems like wishful thinking if as you say you are ignorant of them.”

Like Pat pointed out, you just don’t get it. It’s an ignorance interval. That can be huge, especially when made up of components whose ignorance interval is huge!

“Any modelling then needs to see how these uncertainties will affect the outcome”

You don’t need to model instrumental uncertainty that can be quantified as being resolution limited. You are *still* trying to get out from under having to consider uncertainty as a real thing.

Face it, instrumental uncertainty doesn’t cancel. It just adds and adds and adds – all the way through!

Reply to  Tim Gorman
July 6, 2023 4:59 pm

ROFL!! I didn’t give you a sequence of numbers? Wow, just WOW!

Stop rolling around and try to remember. You said

Assume I am rolling a die you can’t see. I give you the value of 4 rolls – 2, 3, 4, 5. What size die am I rolling?

Cop-out! Completely expected. You thought you could tell what die was being used and when shown you couldn’t you punt!

I said you could tell which die was most likely to be used. Not that I knew which one was being used. Once again your ignorance is showing.

You said you can’t predict the next roll!

No idea what your problem is. You cannot know what the next roll is. You can make statements about the probability of the next roll.

We use a d100 in D&D role play lots of time to choose items from a table. You can’t even get this one right!

I’d have thought Tunnels and Trolls was more appropriate. Sorry, but you just cannot be this gormless. You know full well what I said; it’s all described in the previous comment. It’s unlikely to be a d100 because you rolled 4 numbers between 2 and 5. It’s therefore much more likely you were using a smaller die. The odds of getting 4 numbers <= 20 on a d100 are about 1 in 500. The chances of getting 4 numbers <= 6 are more like 1 in 80000. It’s far more likely you were using a smaller die.
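For the record, the rough odds quoted here are easy to verify (illustrative arithmetic only):

```python
# Probability that four independent d100 rolls all land <= k is (k/100)**4.
p_all_le_20 = (20 / 100) ** 4
p_all_le_6 = (6 / 100) ** 4

print(round(1 / p_all_le_20))  # 625, i.e. roughly "1 in 500"
print(round(1 / p_all_le_6))   # 77160, i.e. roughly "1 in 80000"
```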

Like Pat pointed out, you just don’t get it. It’s an ignorance interval. That can be huge…

Only if you don’t care that it’s meaningless. There seems to be this assumption that because you’ve started calling it an ignorance interval, reality doesn’t matter.

Face it, instrumental uncertainty doesn’t cancel. It just adds and adds and adds – all the way through!

It doesn’t bother you that Pat Frank is actually dividing by N all the way through. If he followed your advice the ignorance would be beyond belief. Intervals of millions of degrees.

Reply to  Tim Gorman
July 6, 2023 4:45 am

And it *still* doesn’t allow you to resolve the uncertainty – it all just remains an unknown!

At which point you are giving up on any type of uncertainty analysis. Just because you don’t know what something is doesn’t mean you can’t say how likely it is. The point of the paper you are citing is trying to calculate probabilities that something will succeed or not. They don’t just say there’s no way to know because it could be anything.

Reply to  Bellman
July 6, 2023 5:30 am

You just don’t get it. How do you resolve something you can’t possibly know? That doesn’t mean you are giving up on uncertainty analysis – it is just the opposite in fact! You estimate your ignorance interval!

Do *YOU* know the systematic bias in the temperature measuring station at Kansas State University? Of course *YOU* do I’m sure. But how do the climate scientists know? How do they assign a probability to *any* value in the ignorance interval?

If you don’t know the systematic bias then how do you calculate the random uncertainty?

Reply to  Bellman
July 5, 2023 7:39 am

“But it still effectively means you are assuming a uniform distribution.”

Wrong.

Assignment of a box interval designates a width of ignorance. The position of the correct value within the width is utterly unknown, including even if it is within the width.

It’s a best guess.

Reply to  Pat Frank
July 4, 2023 5:20 pm

Great statement. I am going to save your entire response for future use.

Reply to  Jim Gorman
July 4, 2023 6:08 pm

Ditto.

Reply to  Pat Frank
July 4, 2023 4:38 pm

It’s never going to penetrate. It would mean having to admit he’s been wrong about uncertainty since forever. So he’ll keep on making stupid assertions that a six year old could refute.

Reply to  Bellman
July 4, 2023 8:34 am

“The problem is that many here take the mantra ‘error is not uncertainty’ as an excuse for saying the uncertainty can be anything you wish.”

Again, you simply can’t read, or you refuse to. I gave you what Bevington said about the uncertainty interval. As usual, you didn’t bother to read it. He says the estimate of the uncertainty interval should be such that seven out of ten measured values fall within the interval!

That is *NOT* “anything you wish”!

Jeesh, man, give it a break. Nothing, absolutely NOTHING, you have said in the past two days makes any sense at all!

“you are still making a claim about how much difference there is likely to be between any measurement and the measurand.”

Since you do *NOT* know the true value then how do you come up with a difference? The uncertainty interval gives you a clue as to what the maximum and minimum differences are likely to be, but it claims NOTHING about what the difference actually *is*.

You *STILL* don’t have the slightest clue as to the concept of uncertainty. You keep trying to force it into your basket of “error”. Uncertainty is *NOT* error. Go write that down on a piece of paper 1000 times. Maybe it will sink in but based on past experience with you IT WON’T.

“If you make 500 measurements and it turns out that none of them were more than 2 from the correct value then you might wonder if your uncertainty interval is a little too large”

OMG! Then you refine your uncertainty interval estimate! Not everyone is as perfect as you are.

From the GUM: “4.3.1 For an estimate xi of an input quantity Xi that has not been obtained from repeated observations, the associated estimated variance u²(xi) or the standard uncertainty u(xi) is evaluated by *scientific judgement* based on all of the available information on the possible variability of Xi.” (emphasis mine, tpg)

You are the ultimate cherry-picker. You have never actually studied anything associated with uncertainty. You cherry-pick Taylor. You cherry-pick Bevington. You cherry-pick Possolo. You cherry-pick the GUM. Not once have you *ever* actually sat down and read the tomes in detail, worked out the examples, and figured out why what you come up with for answers never matches what the experts say the answers are.

Type B is a perfectly legitimate value for uncertainty. And it is based on SCIENTIFIC JUDGEMENT. Not on wacky assertions that one data point defines an unknown distribution!

Reply to  Tim Gorman
July 4, 2023 8:59 am

I gave you what Bevington said about the uncertainty interval. As usual, you didn’t bother to read it.

There are hundreds of angry comments directed at me every day. I don’t notice all of them or follow the demands of each.

In the mean time here’s his definition of uncertainty.

Uncertainty: Magnitude of error that is estimated to have been made in determination of results.

He says the estimate of the uncertainty interval should be such that seven out of ten measured values fall within the interval!

Seems like a good rule of thumb, presumably talking about a standard deviation.

That is *NOT* “anything you wish”!

Exactly, that’s a good probabilistic definition of uncertainty based on the error. Bevington is not the one shouting “uncertainty is not error”. Quite the reverse.

Reply to  Bellman
July 4, 2023 9:16 am

Here’s the passage in question. It’s talking about estimating an uncertainty when you only have a single measurement.

For example, if the student of Example 1.1 could make only a single measurement of the length of the table, he should examine his meter stick and the table, and try to estimate how well he could determine the length.

His estimate should be consistent with the result expected from a study of repeated measurements; that is, to quote an estimate for the standard error, he should try to estimate a range into which he would expect repeated measurements to fall about seven times out of ten. Thus, he might conclude that with a fine steel meter stick and a well-defined table edge, he could measure to about ±1 mm or ±0.001 m. He should resist the temptation to increase this error estimate, “just to be sure.”

As I said the 7 out of 10 is talking about the standard error. And the last sentence may be something to remember.

Reply to  Bellman
July 4, 2023 9:58 am

Why is it that 70% was chosen? Before you answer, you might consider what the percentage of values that 1 sigma covers!

Reply to  Jim Gorman
July 4, 2023 10:09 am

Because, as I said, and as Bevington said, he’s talking about a standard error. For a normal distribution a standard error covers about 68%.

I really wish I knew where all these questions are supposed to be leading. You never seem to realize how much you are contradicting yourselves. All this was meant to be demonstrating why uncertainty has nothing to do with error, yet you are quoting someone who talks about uncertainty entirely in terms of error.
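The “about 68%” figure is the coverage of ±1 standard deviation for a normal distribution, which can be confirmed directly:

```python
import math

# Fraction of a normal distribution lying within +/- 1 standard deviation:
coverage = math.erf(1 / math.sqrt(2))
print(f"{coverage:.4f}")  # 0.6827 - roughly Bevington's "seven times out of ten"
```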

Reply to  Bellman
July 4, 2023 5:19 pm

The *ONLY* one contradicting themselves appears to be you.

From the GUM: “NOTE 3 “Experimental standard deviation of the mean” is sometimes incorrectly called standard error of the mean.”

This is *exactly* what Bevington is talking about – standard deviation of the mean based on experimental results. Terminology you simply won’t accept.

Reply to  Tim Gorman
July 4, 2023 6:51 pm

From the GUM: “NOTE 3 “Experimental standard deviation of the mean” is sometimes incorrectly called standard error of the mean.”

And they are wrong. In the larger world standard error is the correct term and standard deviation at best confusing.

This is *exactly* what Bevington is talking about – standard deviation of the mean based on experimental results.

Except in this case he specifically isn’t, because he’s talking about a case where you are only taking one measurement and have to estimate the standard error.

And you are mixing up two different things again. The GUM quote is talking about the standard error of the mean. Bevington is talking about the standard deviation of the results.

“A study of the distribution of the results of repeated measurements of the same quantity can lead to an understanding of these errors so the quoted error is a measure of the spread of the distribution.”

We are not talking at this point about taking a mean to reduce uncertainty.

Reply to  Bellman
July 5, 2023 6:53 am

“And they are wrong. In the larger world standard error is the correct term and standard deviation at best confusing.”

I see. So *YOU* are the final arbiter of what is wrong and right in the GUM. Have you written to the NIST to tell them that this statement is wrong? It’s your duty to correct them if they really are wrong. Are you going to shirk your duty?

“Except in this case he specifically isn’t because he’s talking about a case where you are only taking one measuement and have to estimate the standard error.”

What’s that got to do with anything? *You* can’t even tell that what Bevington is saying is that from one measurement there is no way to find the standard deviation of the sample means!

“And you are mixing up two different things again. The GUM quote is talking about the standard error of the mean. Bevington is talking about the standard deviation of the results.”

I *know* that. Again, you are throwing crap against the wall hoping something will stick! Without a large observation population there is no way to do sampling in order to determine the standard deviation of the sample means let alone the standard deviation of the population. So you have to estimate it.

*You* said you could determine the distribution from one sample meaning you could also derive the standard deviation of that distribution. Now you are trying to say that you can’t?

Another faceplant!

Reply to  Tim Gorman
July 5, 2023 11:54 am

I see. So *YOU* are the final arbiter of what is wrong and right in the GUM.

No. I merely expressed my opinion that they are wrong on that point. Why do you think I should not be allowed to say they are wrong, but it’s fine for them to say every statistician and textbook is wrong to call it the Standard Error?

Rest of hysterical rant ignored.

Reply to  Bellman
July 5, 2023 12:30 pm

Your opinion doesn’t count. Can you give a rational explanation as to why it is wrong?

No one is saying that it is wrong to call it Standard Error. They are saying that it is misleading as all git out! It *is* the standard deviation of the sample means, pure and plain. Call it the Marx Index if you want. “A rose by any other name would smell as sweet.” The “Marx Index” would still be the standard deviation of the sample means.

The fact that the name “Standard Error” has certainly confused *YOU* as to what it actually is stands as mute proof that you don’t know the basics.

Reply to  Tim Gorman
July 5, 2023 5:44 pm

“Your opinion doesn’t count.”

So you keep telling me.

Can you give a rational explanation as to why it is wrong?

Well, for a start what they describe as a standard deviation isn’t.

The standard deviation reflects variability within a sample, while the standard error estimates the variability across samples of a population.

https://www.scribbr.com/frequently-asked-questions/standard-error-vs-standard-deviation/

Here are the key differences between the two:

Standard deviation: Quantifies the variability of values in a dataset. It assesses how far a data point likely falls from the mean.

Standard error: Quantifies the variability between samples drawn from the same population. It assesses how far a sample statistic likely falls from a population parameter.

https://statisticsbyjim.com/basics/difference-standard-deviation-vs-standard-error/

Reply to  Bellman
July 5, 2023 5:54 pm

“No one is saying that it is wrong to call it Standard Error.”

The exact words were

“Experimental standard deviation of the mean” is sometimes incorrectly called standard error of the mean.

“incorrect” implies “wrong” in my book, and saying “sometimes” is a slight when it’s almost invariably called “standard error”.

They are saying that it is misleading as all git out!

Where do they explain why they think it’s confusing? Given how many times people here have mixed up “standard deviation of the mean” with the “standard deviation”, or insist that everyone actually calls it “standard deviation of the means”, I’m not sure how this is less confusing.

The “Marx Index” would still be the standard deviation of the sample means.

Why not call it the thing everyone else calls it? And why say calling it what everyone has called it for 100 years is incorrect?

The fact that the name “Standard Error” has certainly confused *YOU* as to what it actually stands as mute proof that you don’t know the basics.

And back to the non-substantive personal insults.

And this coming from someone who still thinks the only way to know what the SEM actually is, is to take hundreds of different samples and find their standard deviation.
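The disagreement over the SEM is checkable. A toy simulation (my own numbers, not anyone’s data) compares the two routes: the analytic s/√n from a single sample, and the brute-force standard deviation of many sample means. For a well-behaved population they agree:

```python
import random
import statistics

random.seed(42)
population = [random.gauss(10, 2) for _ in range(100_000)]
n = 100  # sample size

# Route 1: one sample, SEM estimated analytically as s / sqrt(n).
sample = random.sample(population, n)
sem_analytic = statistics.stdev(sample) / n ** 0.5

# Route 2: brute force - the standard deviation of many sample means.
means = [statistics.mean(random.sample(population, n)) for _ in range(2000)]
sem_empirical = statistics.stdev(means)

print(sem_analytic, sem_empirical)  # both near 2 / sqrt(100) = 0.2
```

Neither route says anything about the instrumental uncertainty of the individual readings, which is the separate point being argued in this thread.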

Reply to  Bellman
July 5, 2023 6:03 pm

Why if you think the GUM is wrong do you even use the “maths”?

Because you falsely believe it gives the answers you want.

Reply to  karlomonte
July 6, 2023 5:13 am

Why if you think the GUM is wrong do you even use the “maths”?

I know in your world view everything is either totally good or totally evil. And which is which depends on whether they agree with you or not.

But in my world there is nuance. The GUM is mostly correct, and there’s nothing I can see wrong with their equations – they are pretty standard after all. But I still find parts badly written and confusing, and in this case they throw in a snide aside about not using the term “standard error” with no explanation.

Reply to  Bellman
July 6, 2023 5:38 am

The GUM equations *are* correct – for random uncertainty associated with multiple measurements of the same thing using the same device under the same conditions.

What they write is *NOT* badly written or confusing. It only seems that way to you because you don’t (or won’t) understand the basics of metrology.

Reply to  Tim Gorman
July 6, 2023 6:18 am

And can’t.

You are correct, he worships at the altar of the God of Averages, and his guru is Nitpick Nick Stokes.

Reply to  karlomonte
July 6, 2023 6:51 am

I can’t get past the fact, at least for me, that the average is not a measurement. It may have the same value as a member of the dataset but that is coincidence. If it *was* a measurement it would have the same uncertainty as any other member of the dataset inherently and could only be quoted based on the measurement device’s resolution ability. You couldn’t increase the resolution of the average past that of its measurement device.

So the cultists want it both ways, they want it to be a measurement and they want it to not be a measurement, depending on what it needs to be for them in the moment.

Typical cognitive dissonance.

Reply to  Tim Gorman
July 6, 2023 6:56 am

Absolutely.

The JCGM had the temerity to gore his sacred SEM cow, so they have to be “wrong”.

Reply to  karlomonte
July 6, 2023 7:17 am

Yep!

Reply to  karlomonte
July 6, 2023 2:25 pm

And Pat Frank thinks they are wrong about resolution, so maybe he’s a heretic in your eyes as well.

If you find the phrase “standard error” incorrect explain why.

Reply to  Bellman
July 6, 2023 8:18 pm

RTFM

Reply to  Tim Gorman
July 6, 2023 2:42 pm

I can’t get past the fact, al least for me, that the average is not a measurement

I’ll ask again. If, for you, an average is not a measurement, why do you keep talking about the measurement uncertainty of an average?

It may have the same value as a member of the dataset but that is coincidence.

And irrelevant. You still don’t understand that an average is not the same as one physical thing. It’s the mean. A figure that characterizes a collection of things or a distribution. It is not the same as saying an average thing.

If it *was* a measurement it would have the same uncertainty as any other member of the dataset inherently and could only be quoted based on the measurement device’s resolution ability.

Which is what Pat wants to do. But as I keep trying to tell you the uncertainty of the average is not the same as the average uncertainty. You keep saying you accept this, but every time you circle back to saying it is the same thing.

Reply to  Bellman
July 6, 2023 3:46 pm

“I’ll ask again. If, for you, an average is not a measurement, why do you keep talking about the measurement uncertainty of an average?”

*I* don’t, at least not when considering single measurements of different things using different devices, i.e. multiple measurands.

*I* talk about needing to know overall uncertainty so I can judge if a beam made up of multiple components will span a foundation and support the design load. I couldn’t care less about the “average” length; I want to know the possible minimum and maximums based on the overall uncertainty.

I may want to know the mean instrumental systematic uncertainty if I’m using different devices to measure different things; that’s not the same thing as wanting to know an “average” value for a measurement.

As I’ve said multiple times, the average uncertainty is not the uncertainty of the average.

“You still don’t understand that an average is not the same as one physical thing.”

If it’s not a physical thing then it doesn’t live in the *real* world of measurement. If that is true then it is worthless. You may as well argue how many angels can fit on the head of a pin.

“A figure that characterizes a collection of things or a distribution. It is not the same as saying an average thing.”

So what? It’s not a measurement! It’s a statistical descriptor. And the statistical descriptor known as the “mean” is really only useful if you have a symmetrical distribution – which is highly unlikely when the data set is made up of different things. A random collection of boards collected from construction sites around the city is highly likely to have a very skewed distribution – making the mean truly useless.

Now, come back and tell us how all distributions are random, Gaussian, and their uncertainty all cancels out!

Reply to  Tim Gorman
July 6, 2023 2:56 pm

The GUM equations *are* correct – for random uncertainty associated with multiple measurements of the same thing using the same device under the same conditions.

But not just for that. I’m talking about equation 10 – it could be used for repeated measurements of the same thing, but is mainly used when combining different things to make a new measurand. To use one example that you accept is correct, finding the volume of a cylinder from the height and radius. Height and radius are not the same thing.

Or their example,

If a potential difference V is applied to the terminals of a temperature-dependent resistor that has a resistance R0 at the defined temperature t0 and a linear temperature coefficient of resistance α, the power P (the measurand) dissipated by the resistor at the temperature t depends on V, R0, α, and t.

Lots of different things there, not even measured with the same instrument.

What they write is *NOT* badly written or confusing.

Yet it seems to confuse you a lot of the time, such as when you think equation 10 is only about combining measurements of the same thing using the same instrument.

Reply to  Bellman
July 6, 2023 3:54 pm

“But not just for that. I’m talking about equation 10 – could be used for repeated measurements of the same thing, but is mainly used when combining different things to make a new measurand.”

You *still* don’t get it, do you? You are talking of a MEASUREMENT MODEL OF ONE THING! If you have to measure the length and width of a rectangular table then its area will have the uncertainty in the length and width propagated onto it! But you are still measuring the same thing! “A” table.

Measuring the length and width of 100 different tables is going to tell you what? Will the average area tell you which tables will fit in your dining room? Will it tell you how big and how small tables can be?

You can’t even understand what you write yourself! “make a new measurand” PLEASE NOTE THAT “MEASURAND” IS SINGULAR. You are finding the characteristics of ONE MEASURAND. You may have to find the length and width of that one measurand and propagate the uncertainties associated with those measurements but you still have only ONE MEASURAND.

You may have to measure the mass, pressure, and volume of a gas sample in order to know its temperature but that is still FOR JUST ONE MEASURAND.

Those measurements of the component parts simply won’t tell you anything about a different parcel of air thousands of miles away – i.e. multiple measurands.

Yet that is what climate science tries to do with temperature. It’s what *YOU* try to do with temperature.

Reply to  Bellman
July 6, 2023 5:03 pm

You have no idea what you are talking about. Quit until you have some knowledge from someone familiar with metrology.

Equation 10 is used when a measurand consists of multiple measurements put together in a defined mathematical equation that defines a quantity of an object (V=πr²h) (I=εAσT⁴) (v=d/t) (P=nRT/V). These functional relationships all have variables with different effects on the measurand, so you need to do a partial differential to determine a sensitivity value for each. Do you need to do this when the measures all have the same units and simple addition or subtraction is done? Nope. Plain old equations 4 and 5 will suffice.
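That distinction can be made concrete in a few lines of Python. A sketch with hypothetical numbers: a same-unit sum needs only plain quadrature, while a functional relationship such as a cylinder volume V = πR²H needs a sensitivity coefficient from the partial derivative of each variable:

```python
import math

# Same units, simple addition/subtraction: sensitivities are 1,
# so plain quadrature (the eq. 4/5 style) suffices.
def u_sum(*us):
    return math.sqrt(sum(u**2 for u in us))

# V = pi*R^2*H: dV/dR = 2*pi*R*H, dV/dH = pi*R^2 (GUM eq. 10).
def u_cylinder_volume(R, uR, H, uH):
    dV_dR = 2 * math.pi * R * H
    dV_dH = math.pi * R**2
    return math.sqrt((dV_dR * uR)**2 + (dV_dH * uH)**2)

print(u_sum(0.5, 0.5))                        # ~0.707
print(u_cylinder_volume(5.0, 0.1, 20.0, 0.2)) # hypothetical R, H, u(R), u(H)
```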

Reply to  Jim Gorman
July 6, 2023 6:16 pm

Yep, and Tbar = sum(T_i) / N is NOT a measurand.

Reply to  Jim Gorman
July 7, 2023 3:26 pm

the thing is – you are the ones who insist that measurement uncertainty of individual temperature readings must be propagated in some way into the average temperature. Yet you keep insisting the equations for propagating uncertainty don’t work if the things you are propagating are different, or they don’t work for averaging, or whatever. So how do you want anyone to propagate these uncertainties?

You don’t have a problem propagating the uncertainties to the sum of temperature readings, even though they are different things, and the sum of temperatures is not a real thing. You don’t object to any of the propagation Pat Frank does even though it involves combining different things to make an average you insist doesn’t exist. So which equations do you want to use?

Reply to  Bellman
July 7, 2023 4:03 pm

OMG!

No one says you can’t propagate uncertainty in measurements of different things. In fact we are saying you MUST propagate those uncertainties.

What you CAN’T do is assume they are random and Gaussian and therefore cancel!

You can’t even get this one correct after it’s been pointed out multiple times!

“You don’t have a problem propagating the uncertainties to the sum of temperature readings, even though they are different things, and the sum of temperatures is not a real thing.”

You can’t even get this one right! The uncertainties can certainly be propagated. But the uncertainty of the average is the uncertainty of the sum! It is not the average uncertainty! The average uncertainty just takes the total uncertainty and spreads it evenly across all of the elements. What sense does that make? Especially when you have measurements of different things with different uncertainties? The average uncertainty isn’t going to tell you how long a beam will be when made up of random selections of different boards!

“You don’t object to any of the propagation Pat Frank does even though it involves combining different things to make an average you insist doesn’t exist. So which equations do you want to use?”

Again, the problem isn’t propagating uncertainty from different things. The problems are 1. assuming they are all random, Gaussian, and cancel, and 2. assuming the average uncertainty is somehow the uncertainty of the average!

Reply to  Bellman
July 8, 2023 6:35 am

There are treatments for hysteria.

Reply to  Jim Gorman
July 8, 2023 6:50 am

Could you suggest some to your brother?

Reply to  Bellman
July 8, 2023 9:12 am

you are the ones who insist that measurement uncertainty of individual temperature reading must be propagated in some way into the average temperature.

Not correct. The methodology in LiG Met. is not as you suggest.

The uncertainty stemming from instrumental resolution applies to every one of its measurements and to every mean of those measurements.

The uncertainty arising from unavoidable systematic errors revealed by sensor calibration experiments provides an estimate of the reliability of instrumental measurements and necessarily propagates into a measurement mean.

Reply to  Pat Frank
July 8, 2023 10:41 am

They don’t get it and they will *never* get it.

Reply to  Pat Frank
July 8, 2023 2:32 pm

By “you are the ones” I wasn’t talking about your methodology. Just the three or so on here who have been insisting that the rules for propagation require the uncertainty of the average to be the same as the uncertainty of the sum, and who never accept that the methods of propagation they use do not actually lead to that conclusion.

The uncertainty stemming from instrumental resolution applies to every one of its measurements and to every mean of those measurements.

No disagreement there, just that unless you can explain how all the uncertainties from resolution are the same systematic uncertainty, the uncertainty of the mean will not be the mean uncertainty.

Reply to  Bellman
July 8, 2023 2:50 pm

bdgwx disagrees with you.

When appraising the uncertainty due to instrumental resolution only (calibration uncertainty due to systematic measurement error not included), the mean of the uncertainty will be the uncertainty of the mean (and also of every measurement).

Reply to  Pat Frank
July 8, 2023 3:12 pm

Maybe he does, maybe he doesn’t. You don’t provide a reference. I don’t have to agree with him and he doesn’t have to agree with me. But if he says what you claim, I’d definitely disagree.

What would be true is that when measuring the same thing repeatedly, especially with a precise instrument, the resolution would be a systematic error. But my argument is that when you are measuring different things, which vary well beyond the resolution, it will mostly be random.

old cocky
Reply to  Pat Frank
July 8, 2023 4:23 pm

There is a nice little summary of uncertainty of the mean and repeatability at http://bulldog2.redlands.edu/fac/eric_hill/Phys233/Lab/LabRefCh6%20Uncertainty%20of%20Mean.pdf

Reply to  old cocky
July 9, 2023 4:53 am

Very good treatment of uncertainty. The interval within which the population mean might lie is really only applicable when you have what the document calls “repeatable” measurements. Otherwise the average itself makes little sense as far as metrology is concerned.

I especially like the treatment of “range” as an index into uncertainty estimates. The range is directly related to the variance in a data set. As the variance goes up so does the uncertainty associated with the mean. That’s why it is important to know either the standard deviation or variance of a data set and not just the average value. Something sadly lacking in climate science.

I also like this part: “Here is a general rule for distinguishing a repeatable from unrepeatable measurements:
A measurement is repeatable if (1) you are sure that you are really measuring the same quantity each time you repeat the measurement, (2) your memory of the earlier measurement will not affect your later measurement, and (3) your measurement device is sufficiently precise that you will see variations in its readout when you repeat a measurement.”

Pat’s paper is all about (3). It’s unavoidable with measuring devices. Resolution is not the only factor involving uncertainty but also how small of an increment can be recognized. It doesn’t do any good to have resolution marks down to the thousandths digit if the instrument can only respond to increments in the tenths or hundredths digit. This is why trying to identify temperature anomalies in the hundredths digit is a fool’s errand. Averaging can’t increase either resolution or the smallest increment that is recognizable. This, more than anything else, should tell those trying to use the GAT that it is a garbage calculation. The uncertainty will be in at least the unit digit for such a large data set consisting of measurements of different things. Meaning you simply *DO NOT KNOW AND CANNOT DISCERN* changes in the hundredths digit.

This is true even for the UAH data set. The inherent uncertainties in knowing the exact conditions for each measurement, even in the same grid, make the uncertainty interval for each data point quite wide. All kinds of things can affect the irradiance of the atmosphere, from clouds to smoke to aerosols. I’m sure the UAH crew thinks those can be “averaged” out but it’s a false hope. If nothing else it will increase the range of values seen (i.e. the variance), even at the same location, which means the uncertainty of the average value goes up, not down.

The argument that “it’s always been done this way by very smart people” is just hokum. It’s an argumentative fallacy, the Argument to Tradition.

Reply to  Tim Gorman
July 9, 2023 9:12 am

This is a good resource and I have used it several times. I have it stored in my notes file.

Here is a summary of some of my problems.

1) Daily Average – At a minimum, the Type B estimate of LIG uncertainty is 0.5° F. They add in quadrature. √(0.5² + 0.5²) = 0.7. This is the measurement uncertainty in the daily average. It should be propagated into any further calculations.

2) NIST TN 1900 declares the monthly average of Tmax as the measurand. They use the same equation and the same method as this document to “estimate the uncertainty Uₘ of the mean of a normally distributed set of measurements using just one set of N measurements as follows:”

Uₘₑₐₙ = (ts) / √N

While NIST assumed measurement uncertainty was negligible, this isn’t really the case. A few points:
a) “s” was used since it is the deviation of the distribution surrounding the mean, and dividing by √N calculates the SEM;
b) √N is not applicable for dividing measurement uncertainty because it is not part of the deviation surrounding the mean, it is the deviation of the distribution surrounding each measurement;
c) the total measurement uncertainty is something like,

Uₘₑₐₛ = √(30 • 0.7²) = 3.8;

d) Assume the interval of the mean is 1.8 from NIST TN 1900;
e) the combined uncertainty would be

Uᴄᴏᴍʙɪɴᴇᴅ = √(3.8² + 1.8²) = ±4.2

Not an unreasonable overall uncertainty.
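The arithmetic above can be checked in a few lines of Python; whether the √30 scaling is the right propagation model is, of course, exactly what is disputed in the replies:

```python
import math

u_reading = 0.5                        # Type B LiG reading uncertainty, deg F
u_daily = math.sqrt(2 * u_reading**2)  # Tmax and Tmin in quadrature: ~0.71

u_meas = math.sqrt(30 * 0.7**2)        # 30 daily uncertainties in quadrature: ~3.8
u_tn1900 = 1.8                         # assumed interval of the mean (TN 1900)
u_comb = math.sqrt(u_meas**2 + u_tn1900**2)

print(round(u_daily, 2), round(u_meas, 1), round(u_comb, 1))
```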

Reply to  Jim Gorman
July 9, 2023 4:26 pm

the total measurement uncertainty is something like, Uₘₑₐₛ = √(30 • 0.7²) = 3.8;

You really ought to explain to Pat Frank where he went wrong.

Reply to  Bellman
July 9, 2023 4:42 pm

Do you understand combined uncertainty? These are two pieces. There are other components that need to be evaluated to understand the total. Look at the GUM. Components add.

Reply to  Jim Gorman
July 9, 2023 5:49 pm

The point is that your calculation for that one thing, combining 30 days of the uncertainty in the daily mean, is not what Pat Frank does. I think he’s wrong but you are wronger. You are just falling back on the same mistake of thinking the uncertainty in an average is the same as the uncertainty of a sum.

You do this twice. First in the uncertainty of the daily average which you think can be bigger than either of the max or min uncertainties. Then when you take the uncertainty of the mean of 30 days, you are just adding this uncertainty 30 times in quadrature.

Compare this to PF’s equation 4 – the uncertainty of T_mean from the resolution uncertainty of the two measurements. He adds the two uncertainties in quadrature and then divides by 2. (Ignoring the typo in the paper.) This gives a standard uncertainty of 0.194°C, or a 95% uncertainty of ±0.382°C.

Then in equation 5 he does what you are trying to do, and works out the uncertainty of the monthly mean, though he uses the value of 30.417 days in a month. But here he adds the squares and divides by N, before taking the square root. The result is that the monthly uncertainty is the same as the daily one. What he does not do is what you are doing: multiply the uncertainty by √30.
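The three competing rules in this subthread can be laid side by side in a few lines of Python. Which one applies hinges on whether the per-day uncertainties are treated as fully systematic, as independent terms in a sum, or as independent terms that average out; this is only a comparison of the formulas, not an endorsement of any:

```python
import math

u_daily = 0.194   # standard resolution uncertainty of one daily mean (from above)
N = 30.417        # mean days per month used in the paper

u_eq5 = math.sqrt(N * u_daily**2 / N)  # sum of squares / N: monthly == daily
u_sum = math.sqrt(N * u_daily**2)      # quadrature sum: daily * sqrt(N)
u_sem = u_daily / math.sqrt(N)         # independent and averaging: daily / sqrt(N)

print(round(u_eq5, 3), round(u_sum, 3), round(u_sem, 3))
```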

Reply to  Bellman
July 9, 2023 6:16 pm

you are wronger

A highly technical term…

Reply to  karlomonte
July 9, 2023 6:53 pm

√30 times wronger than Pat to be more exact. And considering how wrong Pat is, that’s a whole lot of wrongness.

Reply to  Bellman
July 9, 2023 9:07 pm

And still you remain — unskilled and unaware.

Reply to  karlomonte
July 10, 2023 4:32 am

So now’s the time to prove you are skilled and aware. Just say who you agree with more.

What should the measurement uncertainty of a monthly average be, when the daily uncertainty is ±0.7 and the month has 30 days?

Is it √(30 × 0.7²) as Jim claims?

Or is it, √(30 × 0.7² / 30) as Pat claims?

Reply to  Bellman
July 10, 2023 4:57 am

You can’t even tell the difference between total uncertainty and mean uncertainty. Pat calculated the MEAN uncertainty due to instrument physical characteristics, not the total uncertainty of a set of measurements of different things.

Reply to  Tim Gorman
July 10, 2023 4:39 pm

You can’t even tell the difference between total uncertainty and mean uncertainty.

OK, I’ll bite. When you say “total” uncertainty, do you mean the uncertainty of the sum, or do you mean all uncertainties combined. Because what Jim said was

c) the total measurement uncertainty is something like,

Uₘₑₐₛ = √(30 • 0.7²) = 3.8;

I assumed this was meant to be the uncertainty in a monthly mean value. I assume that U_meas is a typo for U_mean, and that the 30 refers to the number of days in a month.

If it’s only meant to be the uncertainty in the sum of 30 measurements, then fair enough. I’ve just no idea why you would want such a thing. The sum is meaningless, except as a step towards the mean.

But then he says at the end

e) the combined uncertainty would be

Uᴄᴏᴍʙɪɴᴇᴅ = √(3.8² + 1.8²) = ±4.2

Where 3.8 is the figure he got from c), and 1.8 is an assumed uncertainty in the monthly mean. So what would be the point in combining the uncertainty of a sum of 30 days, with the uncertainty for a mean of the same 30 days? And why does he say 4.2 is not an unreasonable uncertainty? What is it an uncertainty of?

Reply to  Bellman
July 11, 2023 3:32 am

OK, I’ll bite. When you say “total” uncertainty, do you mean the uncertainty of the sum, or do you mean all uncertainties combined. Because what Jim said was”

This has been answered for you multiple times. Yet you’ve never bothered to understand it. As Pat said, it’s useless to try and explain it to you. You just do not want to understand!

“If it’s only meant to be the uncertainty in the sum of 30 measurements, then fair enough. I’ve just no idea why you would want such a thing. The sum is meaningless, except as a step towards the mean.”

It’s meaningless to you because you’ve never actually worked out *anything* in Taylor’s book. You can’t even understand his Eq. 3.18 and 3.19.

If you have q = x/w then the relative uncertainty in q is the sum of the relative uncertainty in x PLUS the relative uncertainty in w. It is *NOT* the relative uncertainty in x *divided* by w!

It simply doesn’t matter if q is an average or something else. It’s where you and bdgwx keep going wrong with q = x/n. You do *not* divide the uncertainty of x by n (n is the number of elements in the average). You *add* the relative uncertainties of x and of n. It’s what Taylor does, Bevington does, and what Possolo does. You’ve never once worked out any of their examples, NOT ONCE!

You haven’t even figured out what Jim was doing. The terms “combined uncertainty” and “expanded uncertainty” are truly foreign concepts to you because you’ve never actually worked any examples out from anywhere. You just keep applying the meme that all uncertainty is random, Gaussian, and cancels. You don’t even realize that you apply that meme *all the time*!

Reply to  Tim Gorman
July 11, 2023 5:15 am

And yet more personal abuse rather than answer a simple question.

If you have q = x/w then the relative uncertainty in q is the sum of the relative uncertainty in x PLUS the relative uncertainty in w. It is *NOT* the relative uncertainty in x *divided* by w!

And you still don’t understand the implication of relative uncertainty.

You don’t divide the relative uncertainties, but the absolute uncertainties will be affected by w. Specifically, if w has no uncertainty then u(q) / q = u(x) / x, which (for some reason you are incapable of seeing) means u(q) = u(x) / w. It has to because q = x / w.

Rest of hysterical rant ignored.

Reply to  Bellman
July 11, 2023 7:12 am

Relative uncertainties ARE NOT AVERAGES!

Period, Exclamation point!

You *still* haven’t read Taylor, done any of the exercises, or come to understand “weighting” when calculating uncertainty.

You quoted Possolo as having multiplied the radius uncertainty by 2. It’s apparent that you didn’t even understand what he did!

R^2 is R*R. When you propagate uncertainties you add the uncertainties of each factor. R appears twice as a factor. The uncertainty of R^2 is u(R) + u(R). TWO R uncertainty values! There is a reason why he didn’t include pi in his uncertainty calculation and my guess is that you have exactly ZERO understanding of why.

Reply to  Tim Gorman
July 11, 2023 1:26 pm

Relative uncertainties ARE NOT AVERAGES!

Why on earth would they think they were?

You quoted Possolo as having multiplied the radius uncertainty by 2.

Do I have to keep explaining to you how this works? Using Gauss’s formula – equation 10 from the GUM – the square of the uncertainty of a combination of values is equal to the sum of the squares of each input uncertainty, each multiplied by the square of the partial derivative for its term. In this case the volume depends on R², hence the partial derivative brings in a factor of 2.

More completely, you have the formula V = πR²H.

Partial derivative for H is πR²
Partial derivative for R is 2πRH

Hence

u(V)² = (πR²)² × u(H)² + (2πRH)² × u(R)²

This can be simplified by dividing through by V², which nicely turns everything into relative uncertainties

[u(V) / V]² = [u(H) / H]² + [2u(R) / R]²

Any more questions?
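The equivalence of the absolute and relative forms is easy to confirm numerically; the R, H values and their uncertainties below are hypothetical:

```python
import math

R, H = 5.0, 20.0          # hypothetical radius and height
uR, uH = 0.1, 0.2         # hypothetical standard uncertainties
V = math.pi * R**2 * H

# absolute form: u(V)^2 = (pi R^2)^2 u(H)^2 + (2 pi R H)^2 u(R)^2
uV = math.sqrt((math.pi * R**2 * uH)**2 + (2 * math.pi * R * H * uR)**2)

# relative form: [u(V)/V]^2 = [u(H)/H]^2 + [2 u(R)/R]^2
rel = math.sqrt((uH / H)**2 + (2 * uR / R)**2)

print(round(uV / V, 6), round(rel, 6))  # the two agree
```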

Reply to  Bellman
July 11, 2023 1:33 pm

There is a reason why he didn’t include pi in his uncertainty calculation and my guess is that you have exactly ZERO understanding of why.

There was no point including π as a term as it has zero uncertainty. But, as you never seem to understand, it is there in the equation. It’s in both of the coefficients in the first form, and then when we divide by V² it disappears from the right hand side, because it gets cancelled from both terms. But now it appears on the left hand side, in the form of V, which remember is πR²H.

Reply to  Bellman
July 11, 2023 1:36 pm

But it would appear in the partial differential in each term, right? So he didn’t drop it because it has zero uncertainty. Using your logic it should appear in each term, just like you want (1/n) to appear in each term.

So why did Possolo drop it? Try again.

Reply to  Tim Gorman
July 11, 2023 1:54 pm

But it would appear in the partial differential in each term, right?

Yes. As I explained in my previous comments.

u(V)² = (πR²)² × u(H)² + (2πRH)² × u(R)²

So he didn’t drop it because it has zero uncertainty.

π is not included as a separate uncertainty term as its contribution would just be zero.

Using your logic it should appear in each term, just like you want (1/n) to appear in each term.

It does appear in each term, as I wrote. And yes, finally you are getting it, when the function is an average there is a (1 / n)² in each term.

So why did Possolo drop it? Try again.

Do you think you’ll understand it better if I explain it again?

Reply to  Bellman
July 11, 2023 5:03 pm

If pi is dropped because it appears in each term and has zero uncertainty, why is n not dropped when it appears in each term and has zero uncertainty?

You seem to want to have your cake and eat it too.

Reply to  Tim Gorman
July 11, 2023 6:26 pm

Answered below, but probably not to your satisfaction.

Reply to  Bellman
July 11, 2023 1:33 pm

You wind up with 2 * u(R), i.e. u(R) + u(R). It’s a WEIGHTING factor. You didn’t answer why he dropped pi out of the uncertainty calculation. Kind of inconvenient for you, maybe?

I’ll ask again. Why did he drop pi?

Reply to  Tim Gorman
July 11, 2023 2:02 pm

How many times are you going to repeat this nonsensical question? As a term in the equation for a volume it’s a constant and has no uncertainty. If you wanted to you could include it in the equation as if it were a parameter to the function, but as its uncertainty is zero it will just disappear. But it isn’t really a parameter, it’s just a constant, so there isn’t a need to have it as an additional term.

I doubt I can help you much if you still don’t understand the concept.

Reply to  Bellman
July 11, 2023 5:05 pm

“n” is a constant with zero uncertainty. So why isn’t it dropped when figuring the uncertainty of the average?

But it isn’t really a parameter, it’s just a constant”

So is “n” in the average!

Reply to  Tim Gorman
July 11, 2023 6:13 pm

“n” is a constant with zero uncertainty. So why isn’t it dropped when figuring the uncertainty of the average?

Good grief, we’ve been going over this for 2 years and you still haven’t learnt a thing.

Let’s take this real slow.

In the formula for the volume π is a constant with no uncertainty. It does not need a separate term because it has no uncertainty. But that does not mean it does not appear in the uncertainty equation. It appears as a modifier to each term. Remember, I told you what the equation is for u(V)

u(V)² = (πR²)² × u(H)² + (2πRH)² × u(R)²

See π appears twice. Now if you remember I explained that you then can simplify the equation by dividing through by V². Now this works because V is actually πR²H. So when we divide each of the terms by V² we are actually dividing by (πR²H)². This means that for each term all the values cancel, except for the particular term, which is left as a divisor. Thus creating a relative uncertainty.

E.g. for height we had

(πR²)² × u(H)²

Divide this by (πR²H)²

(πR²)² × u(H)² / (πR²H)²

= u(H)² × [(πR²)² / (πR²H)²]
= u(H)² × [π² / (πH)²]
= u(H)² × [1 / H²]
= u(H)² / H²

And so, as if by magic, the π symbol disappears. But you can’t actually just get rid of multiplying everything by π. If π was integral to the original equation, what happened to it? Well, remember we divided by V² = (πR²H)². We couldn’t just divide the right hand side of the equation, because in algebra everything has to balance, so we also divided the left hand side as well.

so

u(V)²

becomes

u(V)² / V²

and balance is restored. But V² = (πR²H)²

so we are really saying the left hand side became

u(V)² / (πR²H)²

and so that sneaky little π is still there, and partly determines the value of u(V).

Reply to  Bellman
July 11, 2023 6:24 pm

So now what about the average? For simplicity I’ll just talk about an average of two values, X and Y, and leave expanding it to n values as an exercise for the reader.

Let’s call the mean of X and Y, M, where

M = (X + Y) / 2

Our equation for the uncertainty then just becomes

u(M)² = A² × u(X)² + B² × u(Y)²

where A and B are the partial derivatives for X and Y respectively. What are the values? I say this with some trepidation, as I know this is the part you claimed was wrong before, but as

(X + Y) / 2 = X / 2 + Y / 2

then we have A = B = 1 / 2.

So substituting back we have

u(M)² = (1 / 2)² × u(X)² + (1 / 2)² × u(Y)²

= (1 / 2)² × (u(X)² + u(Y)²)

and this is a lot simpler than the volume uncertainty as it’s just addition with a constant modifier. We can just take the square root and get

u(M) = √[u(X)² + u(Y)²] / 2
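Under the stated assumption that the two errors are independent, zero-mean Gaussians (the very assumption contested throughout this thread), a quick Monte Carlo check agrees with that formula; the uncertainty values are hypothetical:

```python
import math
import random

random.seed(42)
uX, uY = 0.5, 0.5        # standard uncertainties of X and Y (hypothetical)
n = 200_000

# simulate independent zero-mean Gaussian errors and average each pair
errs = [(random.gauss(0, uX) + random.gauss(0, uY)) / 2 for _ in range(n)]
sd = math.sqrt(sum(e * e for e in errs) / n)

print(round(sd, 3))                             # empirical spread of the mean
print(round(math.sqrt(uX**2 + uY**2) / 2, 3))   # formula: ~0.354
```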

old cocky
Reply to  Bellman
July 11, 2023 9:30 pm

In the volume example, you divided the RHS by volume to get the relative uncertainty of the volume.

Reply to  old cocky
July 12, 2023 6:07 am

Yes. As I said, and the LHS as well. That’s why this equation leads to the standard rule that when adding and subtracting numbers you add the absolute uncertainties, but when multiplying or dividing you add the relative uncertainties.

Reply to  Bellman
July 12, 2023 6:35 am

In the formula for the volume π is a constant with no uncertainty. It does not need a separate term because it has no uncertainty. “

Again, how is this any different than “n” in the formula for the average?

You didn’t answer. My guess is that you can’t.

Reply to  Tim Gorman
July 12, 2023 7:02 am

I did answer, but as always I over-estimated your ability to understand simple equations.

Let me try again. There is no difference between having a function which includes a constant π and one with a constant 1 / n. They both result in a constant in the partial derivative for the term, and that partial derivative appears in the uncertainty formula as a weight for each term.

Where the difference occurs is in how you can simplify the equation. In the volume example (because it’s an equation involving multiplying) you have a complicated set of modifiers on each term, and it simplifies the equation to divide through by V². This has the effect of removing all the factors modifying each term, which includes π. The effect is to turn the equation from one about absolute uncertainties into one about relative uncertainties.

In the equation for the uncertainty of the mean, there is little point in changing it because it is already simple enough. You could if you wanted multiply through by n², but all that would give you is an equation for n times the uncertainty of the mean. You’d still end up having to divide by n if you wanted the uncertainty of the mean.

If you have any further questions please try taking course in algebra first.

Reply to  Bellman
July 12, 2023 10:22 am

“Where the difference occurs is in how you can simplify the equation. In the volume example (because it’s an equation involving multiplying) you have a complicated set of modifiers on each term, and it simplifies the equation to divide through by V². This has the effect of removing all the factors modifying each term, which includes π. The effect is to turn the equation from one about absolute uncertainties into one about relative uncertainties.”

q_avg also has “n” as a factor. When you multiply by q_avg why don’t all the “n” divisors cancel out?

Those *modifiers* you are talking about are weighting factors. If you have a common weighting factor in all terms and its uncertainty is zero, then you remove it from the uncertainty equation, just like Possolo did with pi. (1/n) becomes a constant weighting factor with an uncertainty of zero. Removing it doesn’t change the overall weighting of the other factors.

Reply to  Tim Gorman
July 12, 2023 10:41 am

q_avg also has “n” as a factor. When you multiply by q_avg why don’t all the “n” divisors cancel out?

I’m sorry. I can’t help you any further with your homework. If you still don’t understand my explanations, you’ll need to pay for a private tutor or accept that maybe this is beyond your abilities.

Reply to  Bellman
July 12, 2023 11:05 am

You tell another yoke-ah here?

Reply to  Bellman
July 12, 2023 6:36 am

u(M)² = A² × u(X)² + B² × u(Y)²”

And how is this any different than pi in the volume formula?

Reply to  Bellman
July 10, 2023 6:19 am

Here’s a quiz for you: why is the term “standard error of the mean” deprecated in the GUM?

Reply to  karlomonte
July 10, 2023 3:59 pm

It isn’t deprecated. They say it’s incorrect. I don’t know why they say it; that’s why I keep asking. It’s simply stated to be incorrect, with no explanation.

It makes no odds what they call it though. “Experimental standard deviation of the mean” is just a more clumsy, less correct name for the same thing.

Reply to  Bellman
July 10, 2023 4:05 pm

No, they are not the same. One is the standard deviation of the sample means and the other is the standard deviation of the measurements ASSUMING NO UNCERTAINTY IN THE MEASUREMENTS THEMSELVES.

One is how precisely you have calculated the population mean. The other is the variation of the data elements in the distribution!

Reply to  Bellman
July 10, 2023 9:04 pm

It’s very simple: go read the definition of the term “error” and see if you can sort it out.

Reply to  karlomonte
July 11, 2023 5:41 am

So you don’t know either.

Reply to  Bellman
July 11, 2023 6:16 am

Idiot, all you have to do is open it and read.

Ahem.

B.2.19
error (of measurement)
result of a measurement minus a true value of the measurand 

There is no true value associated with s/root(N), the SEM does not and cannot produce one. The term is incompatible with uncertainty.

Duh!

B.1 Source of definitions 

The definitions of the general metrological terms relevant to this Guide that are given here have been taken from the International vocabulary of basic and general terms in metrology (abbreviated VIM), second edition, 1993* [6], published by the International Organization for Standardization (ISO), in the name of the seven organizations that supported its development and nominated the experts who prepared it: the Bureau International des Poids et Mesures (BIPM), the International Electrotechnical Commission (IEC), the International Federation of Clinical Chemistry (IFCC), ISO, the International Union of Pure and Applied Chemistry (IUPAC), the International Union of Pure and Applied Physics (IUPAP), and the International Organization of Legal Metrology (OIML). The VIM should be the first source consulted for the definitions of terms not included either here or in the text. 

Do you think they just pulled these terms out of a hat?

2 Definitions
2.1 General metrological terms

The definition of a number of general metrological terms relevant to this Guide, such as “measurable quantity”, “measurand”, and “error of measurement”, are given in Annex B. These definitions are taken from the International vocabulary of basic and general terms in metrology (abbreviated VIM)* [6]. In addition, Annex C gives the definitions of a number of basic statistical terms taken mainly from International Standard ISO 3534-1 [7].

Annex D
“True” value, error, and uncertainty 

The term true value (B.2.3) has traditionally been used in publications on uncertainty but not in this Guide for the reasons presented in this annex. Because the terms “measurand”, “error”, and “uncertainty” are frequently misunderstood, this annex also provides additional discussion of the ideas underlying them to supplement the discussion given in Clause 3. Two figures are presented to illustrate why the concept of uncertainty adopted in this Guide is based on the measurement result and its evaluated uncertainty rather than on the unknowable quantities “true” value and error. 

Oh yeah, you are a prime example of the second sentence.

Reply to  karlomonte
July 11, 2023 7:28 am

Not once has bellman ever completely read anything for meaning. He is a champion cherry-picker and simply doesn’t care what the context of anything is as long as he thinks he can use it to throw up against the wall in the faint hope something will stick someday.

Reply to  Tim Gorman
July 11, 2023 8:42 am

Nope!

Reply to  karlomonte
July 12, 2023 8:12 am

Error (of measurement)

result of a measurement minus a true value of the measurand

Correct

There is no true value associated with s/root(N), the SEM does not and cannot produce one. The term is incompatible with uncertainty.

Meaningless gibberish.

The true value is the population mean. The error is the sample mean minus the true mean. Nobody says the standard error of the mean produces a true mean. It’s a measure of the estimated “dispersal” of the error expected from any sample means.

Do you think they just pulled these terms out of a hat?

No, but as far as I can see the VIM does not say anything about the correctness or otherwise of the term “standard error”. I don’t think they even mention means or sampling. The only time they refer to standard deviation is to say that standard measurement uncertainty is defined as a standard deviation.

Granted, I’m not in the habit of keeping up with all international standards reports, so there may be something I’ve missed. But so far you haven’t quoted anything from the VIM saying standard error is incorrect terminology.

“In addition, Annex C gives the definitions of a number of basic statistical terms taken mainly from International Standard ISO 3534-1”

The same document I’ve taken the definition of standard error from. At no point does it say standard error is an incorrect term.

standard error

standard deviation (2.37) of an estimator (1.12)

EXAMPLE:

If the sample mean (1.15) is the estimator of the population mean (2.35) and the standard deviation of a single random variable (2.10) is σ, then the standard error of the sample mean is σ / √n where n is the number of observations in the sample. An estimator of the standard error is S / √n where S is the sample standard deviation (1.17).

Note 1 to entry: In practice, the standard error provides a natural estimate of the standard deviation of an estimator.

Note 2 to entry: There is no (sensible) complementary term “non-standard” error. Standard error can be viewed as an abbreviation for the expression “standard deviation of an estimator”. Commonly, in practice, standard error is implicitly referring to the standard deviation of the sample mean. The notation for the standard error of the sample mean is σ_x^bar.

Reply to  Bellman
July 12, 2023 8:37 am

Meaningless gibberish.

The true value is the population mean. The error is the sample mean minus the true mean. Nobody says the standard error of the mean produces a true mean. It’s a measure of the estimated “dispersal” of the error expected from any sample means.

No, no, and no.

Defend the GAT hoax at any cost!

Reply to  Bellman
July 11, 2023 9:07 am

The Standard Error of the Mean is not the uncertainty in the mean. If you understand what the sample means distribution truly is, then you will recognize that the SEM is the standard deviation of the sample means distribution. So what you have is the mean of the sample means distribution and standard deviation of the sample means distribution.

The standard deviation of the sample means distribution is what you need to define for yourself.

It is defined as an interval within which the estimated mean lies. The value of the sample mean is determined by the values that are in the samples and the uncertainty in that calculation is determined by the values in the samples.

The SEM therefore doesn’t determine the “error” in the value of the estimated mean.

The SEM defines the interval where the sample mean may lie. That interval will get tighter and tighter with an increase in the size of each sample. In other words, the ESTIMATE of the population mean gets closer and closer to the actual population mean.

The reason the SEM has been deprecated is because way too many people, like you, thought it defined the accuracy, precision, and resolution of the value of the mean. It does not. It has nothing to do with the VALUE and UNCERTAINTY of the mean.

Think about it. To make it easy let’s look at data in integers. This is where Significant Digit rules come into play.

1) Data – integer
2) Mean – integer
3) Samples – integer
4) Sample Means – integer
5) Sample Mean – integer

Note 1: 2 thru 4 may carry one extra digit to ensure rounding accuracy.

Note 2: The standard deviation of the sample means may have any number of significant digits since it defines an interval.

Experimental uncertainty is something different. The GUM has changed from using Standard Error, another descriptor of the SEM, to STANDARD DEVIATION OF THE MEAN. This should make sense with the above.

With experiments, you are not sampling a population. You are instead creating a population using several experiments.

From these you can calculate the population mean and the population standard deviation.

Using the equation:

SEM = σ/ √n

you can now calculate an SEM, which is what the GUM calls the experimental standard deviation of the mean. NIST then explains that this value should be expanded to obtain an experimental uncertainty at a 95% confidence level.

If you understand statistics, this should be something you can understand if you try.

None of this has anything to do with what Pat Frank has done with instrument resolution uncertainty.
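The sampling relationship described in this comment can be checked numerically. The sketch below is illustrative only; the population, σ, and sample size are arbitrary choices, not anyone’s data:

```python
import random
import statistics

# Illustrative sketch: the SEM (sigma / sqrt(n)) matches the standard
# deviation of the distribution of sample means. Population parameters
# and sample size are arbitrary choices for demonstration.
random.seed(42)

population = [random.gauss(20.0, 2.0) for _ in range(100_000)]  # sigma ~= 2.0
n = 25                                                          # sample size

# Draw many samples and record each sample's mean.
sample_means = [
    statistics.mean(random.sample(population, n)) for _ in range(5_000)
]

sigma = statistics.pstdev(population)
predicted_sem = sigma / n ** 0.5               # sigma / sqrt(n)
observed_sd_of_means = statistics.stdev(sample_means)

print(f"predicted SEM:      {predicted_sem:.3f}")
print(f"SD of sample means: {observed_sd_of_means:.3f}")
```

The SEM here describes the spread of sample means around the population mean; it says nothing about the measurement uncertainty of any individual reading, which is the distinction being argued in this thread.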

Reply to  Jim Gorman
July 11, 2023 3:36 pm

None of this has anything to do with what Pat Frank has done with instrument resolution uncertainty.

Indeed. It’s been a massive distraction.

Reply to  Bellman
July 10, 2023 7:53 am

And considering how wrong Pat is,…

Something you continually declare and never demonstrate.

Reply to  Bellman
July 10, 2023 5:18 am

“You are just falling back on the same mistake of thinking the uncertainty in an average is the same as the uncertainty of a sum.”

And you just refuse to understand that the SEM is *NOT* the uncertainty of the average or of the population.

The uncertainty of the average is DRIVEN by the uncertainty of the elements making up that average, not by how accurately you can calculate that average. You simply cannot decrease the uncertainty associated with the average by calculating it more and more accurately.

The average uncertainty, when totaled, will give you the total uncertainty. It is useful in that it can be *used* as an average uncertainty across a population of different things when trying to find the total uncertainty. That is what Pat has done with his analysis of the physical characteristics of the LIG.

If you had *ever* bothered to actually read Taylor, Chapter 3, for meaning instead of cherry-picking it for support of your CAGW dogma, you would understand this. If you had actually ever built a stud wall for a house or built a beam to span a foundation or built a race engine this would all become clear. The total uncertainty in the horsepower output of a race engine is not the average uncertainty of each cylinder but the *sum* of the uncertainties in each cylinder’s output (and the intake manifold, and fuel injectors, etc). It is *exactly* the same thing with the Earth’s temperature. The uncertainty of that value is the *total* uncertainty in the elements making up that average, it is *not* the average uncertainty of the elements.

Reply to  Tim Gorman
July 10, 2023 4:48 pm

You simply cannot decrease the uncertainty associated with the average by calculating it more and more accurately.

Do you ever listen to yourself?

The average uncertainty, when totaled, will give you the total uncertainty.

What do you mean by totaled? Do you mean add up, or combine all the different uncertainties?

It is useful in that it can be *used* as an average uncertainty across a population of different things when trying to find the total uncertainty.

You must stop using AI to write your comments.

That is what Pat has done with his analysis of the physical characteristics of the LIG.

Whatever you think he’s done, I’m pretty sure it’s not what you think.

The total uncertainty in the horsepower output of a race engine is not the average uncertainty of each cylinder but the *sum* of the uncertainties in each cylinder’s output (and the intake manifold, and fuel injectors, etc).

Thus making it a poor model for talking about the uncertainty of an average.

It is *exactly* the same thing with the Earth’s temperature.

You think the earths temperature is the sum of all the thermometer readings?

Reply to  Bellman
July 11, 2023 3:41 am

“What do you mean by totaled? Do you mean add up, or combine all the different uncertainties?”

You’ve never once actually built anything, it is truly obvious in the questions you ask and the assertions you make. NOT ONCE!

Have you ever put anything together as a kit even? Ever had to ask “where did the hole go for this bolt”?

“Whatever you think he’s done, I’m pretty sure it’s not what you think.”

I know *exactly* what he did. He found a mean uncertainty in the design of LIG’s that can be used as a factor for the uncertainty total of temperature measurements. If I take an LIG out in the backyard to measure the temperature then part of the total uncertainty for that measurement includes the instrumental uncertainty associated with the design of that instrument. If I try to combine multiple single measurements made by multiple LIG’s then the individual uncertainties propagate onto the data set, including its average value.

It just becomes more and more obvious with every assertion you make about Pat’s study that you don’t have even a basic concept of metrology. You are stuck in your box of “all uncertainty is random, Gaussian, and cancels”. You don’t even realize you are stuck in a box.

“Thus making it a poor model for talking about the uncertainty of an average.”

You didn’t even understand the simple example I gave you. It’s hopeless trying to explain it to you.

“You think the earths temperature is the sum of all the thermometer readings?”

Again, you just don’t understand. You can’t even distinguish between stated values and the uncertainty interval that goes with the stated values, let alone how that uncertainty propagates. It comes from you always assuming that all uncertainty is random, Gaussian, and cancels. You can’t help yourself. You don’t even know that you do it.

Reply to  Tim Gorman
July 11, 2023 5:08 am

You’ve never once actually built anything, it is truly obvious in the questions you ask and the assertions you make. NOT ONCE!

Rather than going through your usual pathetic ad hominems you could have actually answered the question. I’m simply trying to figure out what you meant by “totaled” in that context. It wasn’t a trick question, I just didn’t want to go down the wrong path in the argument.

Reply to  Bellman
July 11, 2023 6:19 am

Which came right after you wrote this:

You must stop using AI to write your comments.

Did you really expect Tim to take your question seriously?

Reply to  Bellman
July 10, 2023 7:51 am

You are just falling back on the same mistake of thinking the uncertainty in an average is the same as the uncertainty of a sum.

No one thinks that RMS is the same as RSS. You’re just hyping a strawman.

Then in equation 5 he does what you are trying to do,…

You’ve never understood the resolution argument, and still don’t.

Reply to  Pat Frank
July 10, 2023 3:55 pm

He doesn’t know enough to understand that he doesn’t understand. But he thinks he does.

Reply to  Pat Frank
July 10, 2023 3:56 pm

No one thinks that RMS is the same as RSS. You’re just hyping a strawman.

Tim Gorman:

you can’t even get this one right! The uncertainties can certainly be propagated. But the uncertainty of the average is the uncertainty of the sum! It is not the average uncertainty! The average uncertainty just takes the total uncertainty and spreads it evenly across all of the elements. What sense does that make?

https://wattsupwiththat.com/2023/06/29/the-verdict-of-instrumental-methods/#comment-3745807

Reply to  Bellman
July 10, 2023 4:00 pm

I just said that RMS is not the same as RSS in my statement. And you can’t even figure that out. What *is* your primary language?

I *told* you what the average uncertainty is good for. It is good for THE SAME MEASURING DEVICE. You just can’t separate “same” from “different”. Why is that?

Reply to  Tim Gorman
July 10, 2023 5:34 pm

Do you or do you not think that the uncertainty of the mean is the uncertainty of the sum? Your evasion on this point is getting tedious.

In case you can’t figure out why I quoted your comment look at the comment I was replying to. I said

You are just falling back on the same mistake of thinking the uncertainty in an average is the same as the uncertainty of a sum.

and Pat replied

No one thinks that RMS is the same as RSS. You’re just hyping a strawman.

Obviously, he’s using RMS and RSS as shorthand for the uncertainty of the mean and the uncertainty of the sum – and if he isn’t his reply was a complete non sequitur.

Reply to  Jim Gorman
July 10, 2023 5:06 am

He doesn’t understand that when evaluating the uncertainty of the volume of a barrel that the uncertainty in both the radius and height add.

Reply to  Tim Gorman
July 10, 2023 4:09 pm

What are you fantasizing about now? We went over this topic months ago. I agreed entirely with the Possolo example, and with how to use GUM 10, to derive the uncertainty. It is not just the sum of the uncertainty of the height and radius. You need to add in quadrature the relative uncertainties. And the radius uncertainty needs to be doubled.

(u(V) / V)² ≈ (2 × u(R) / R)² + (u(H) / H)²
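As a numeric sketch of that formula: the radius and its uncertainty follow the tank example quoted elsewhere in this thread, while the height and its uncertainty are assumed values for illustration only:

```python
import math

# Sketch of the relative-uncertainty formula for a cylinder V = pi*R^2*H.
# R and u(R) follow the tank example discussed in this thread; H and u(H)
# are assumed values for illustration only.
R, u_R = 8.40, 0.03    # metres
H, u_H = 32.0, 0.07    # metres (assumed)

V = math.pi * R ** 2 * H

# (u(V)/V)^2 ~= (2*u(R)/R)^2 + (u(H)/H)^2 -- quadrature, radius term doubled
rel_u_V = math.hypot(2 * u_R / R, u_H / H)
u_V = rel_u_V * V

print(f"V = {V:.0f} m^3, u(V) = {u_V:.0f} m^3 ({100 * rel_u_V:.2f} %)")
```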

Reply to  Bellman
July 10, 2023 4:19 pm

You never did admit that Possolo assumed no measurement uncertainty, either random error or systematic bias. You wouldn’t admit that this is *exactly* what you and climate science do.

Possolo was trying to find the variation in one thing at one location – Tmax at one measuring station and made the assumption that measuring Tmax on different days was the same as multiple measurements of the same thing.

It works as an example for how to handle experimental measurements but the assumptions necessary in the example make it almost impossible to do in the arena of widespread temperature measurements where multiple locations and multiple measuring devices are involved – yet climate science does it regularly.

And you just don’t get it. It’s too far outside your religion’s dogma.

Reply to  Tim Gorman
July 10, 2023 5:09 pm

You never did admit that Possolo assumed no measurement uncertainty, either random error or systematic bias.

Time for a nap. Of course Possolo assumes measurement uncertainty, that’s the whole point of the exercise. The calculation of the uncertainty of the volume is based on the measurement uncertainty of the radius and height. Of course this is assuming random uncertainty, that’s the purpose of the exercise.

The radius was measured by climbing a set of stairs to the tank’s roof, whose shape and size are essentially identical to its base, measuring its diameter with a tape, and reporting the estimate of the radius as 8.40 m, give or take 0.03 m. This “give or take” is the margin of uncertainty, but without additional information it is not particularly meaningful or useful: one needs to know, for example, how likely the true value is of lying between 8.37 m and 8.43 m. That is, one needs to be able to translate uncertainty into probability. This is often done by regarding the measured value, 8.40 m, as the observed value of a random variable whose mean is the true value of R, and whose standard deviation is 0.03 m. This interpretation motivates calling the “give or take” standard uncertainty.

How can you think this is assuming no measurement uncertainty?

Reply to  Bellman
July 10, 2023 7:14 pm

“””””Of course Possolo assumes measurement uncertainty, “””””

From TN 1900:

“””””Assuming that the calibration uncertainty is negligible by comparison with the other uncertainty components, and that no other significant sources of uncertainty are in play,”””””

The purpose of this example is to determine experimental uncertainty in the data, not measurement uncertainty.

Quality people do this all the time to assess the tolerance specs for products. They must assess all kinds of uncertainty. Calibration, measurement/resolution, experimental, temperature, humidity, etc. TN 1900 develops one type, experimental.

Read the entire damn document, slowly, carefully, while taking notes. There is a plethora of information in it.

Reply to  Jim Gorman
July 10, 2023 8:08 pm

We were not talking about TN1900. We were talking about the book about uncertainty, where he uses the example of the volume of a cylinder. This constant moving of goal posts is getting increasingly tiresome.

Reply to  Bellman
July 11, 2023 3:59 am

“Of course Possolo assumes measurement uncertainty, that’s the whole point of the exercise.”

A measurement consists of two parts, the stated value and the uncertainty interval. Possolo only uses the stated values, he ignores the uncertainty intervals.

You simply can’t understand why a trend line developed solely from stated values can’t be assumed to be the “true” trend line. You can’t understand using stated value variation either. Any uncertainty in the stated values *will* also affect the calculation of the variation in the stated values, thus adding to the total uncertainty of the data set. The *only* way to make Possolo’s example work is to assume zero measurement uncertainty – and that violates real world constraints. It’s an EXAMPLE, meant for people to learn from, about analyzing experimental measurement results. Again, it doesn’t work in the real world.

” Of course this is assuming random uncertainty, that’s the purpose of the exercise.”

And that is exactly what you do when trying to support GAT calculations!

“This is often done by regarding the measured value, 8.40 m, as the observed value of a random variable whose mean is the true value of R, and whose standard deviation is 0.03 m.”

You can’t even understand that this only applies for multiple measurements of the SAME THING! And even then it ignores systematic uncertainty in the measuring tape!

You simply don’t live in the real world. You can’t distinguish between examples with simplifying assumptions meant as a learning tool and real world applications. You want to live in an alternate reality where measurement uncertainty, both random and systematic, don’t exist!

Reply to  Tim Gorman
July 11, 2023 4:48 am

Possolo only uses the stated values, he ignores the uncertainty intervals.

Now you are lying about Possolo. He wrote a whole book on measurement uncertainties. That’s the book we are talking about – the one that shows how to calculate the uncertainty in the volume of a cylinder, when your measurements for the radius and the height are uncertain. How can you say he ignores uncertainty?

You simply can’t understand why a trend line developed solely from stated values can’t be assumed to be the “true” trend line.

Apart from all the times I’ve tried to explain to you how to calculate the uncertainty in a trend line based on stated values?

But, another nice deflection. Why can’t you ever stick to a subject?

The *only* way to make Possolo’s example work is to assume zero measurement uncertainty

Again, the whole point of bringing up Possolo was to talk about the example of the cylinder. You were the one who used it as an example in the first place. This has nothing to do with TN1900 example 2.

You can’t even understand that this only applies for multiple measurements of the SAME THING!

Where does it say you have multiple measurements of the same thing?

And even then it ignores systematic uncertainty in the measuring tape!

Take it up with Possolo. It’s an example to explain how you can propagate uncertainty – not something that needs you to account for every possible issue.

You simply don’t live in the real world.

Yet you were the one who brought up this non-real world example. You were the one who said

He doesn’t understand that when evaluating the uncertainty of the volume of a barrel that the uncertainty in both the radius and height add.

Was that meant to be about living in the real world? Do you think in the real world the uncertainty in the volume is just the adding of the uncertainty in both the radius and height?

Reply to  Bellman
July 11, 2023 6:59 am

You are so lost in the forest you can’t even keep track of what is going on in the thread. EXPERIMENTAL uncertainty was the subject of Ex 2 in TN1900 and *it* was what we were discussing. I only brought up the volume of a barrel in order to give you another example of where Possolo didn’t consider systematic bias in the measuring device.

“Apart from all the times I’ve tried to explain to you how to calculate the uncertainty in a trend line based on stated values?”

You calculate the best-fit residuals — ASSUMING THE STATED VALUES USED FOR THE TREND LINE ARE 100% ACCURATE!

It’s your same old meme over and over again. You assume all uncertainty is random, Gaussian, and cancels so all you have to do is look at the stated values to get a “true” trend line that you can data match to. You don’t even realize you do it. Even after example after example of showing that if the stated values have uncertainty then so does the trend line!

“Take it up with Possolo. It’s an example to explain how you can propagate uncertainty – not something that needs you to account for every possible issue.”

EXACTLY! Which means it is a “learning example” not meant to describe the real world of measurement uncertainty – AND YET YOU TAKE IT AS BEING A REAL WORLD EXAMPLE!

“Was that meant to be about living in the real world? Do you think in the real world the uncertainty in the volume is just the adding of the uncertainty in both the radius and height?”

The uncertainty in the volume of the barrel *is* the propagated uncertainty from the measurements! But it is ALL of the uncertainty – not just the random error introduced from eyeball reading of a measuring tape!

You make a statement like this, never actually understanding that what you are saying is that the uncertainty of the individual measurements must be propagated onto the average in order to determine the uncertainty of the average.

You’ll turn around in the next post and say that the uncertainty of the average is the average uncertainty which is *NOT* the propagated uncertainty of the measurement elements! You just say what works for you at the time, regardless of whether it is consistent or not. And you wonder why everyone thinks you are a troll?

Reply to  Tim Gorman
July 11, 2023 1:35 pm

You are so lost in the forest you can’t even keep track of what is going on in the thread.

The thread, or at least this part of it, started with you claiming about me:

He doesn’t understand that when evaluating the uncertainty of the volume of a barrel that the uncertainty in both the radius and height add.

Everything since then has been you trying to deflect from it.

Reply to  Bellman
July 11, 2023 4:02 am

And months ago you couldn’t accept that GUM 10 does *NOT* apply when systematic uncertainty is present. And you STILL can’t.

You can’t even distinguish when quadrature addition is justified and when it isn’t.

Reply to  Tim Gorman
July 11, 2023 4:28 am

Just making stuff up now. If I ever said that, quote the exact words and provide a link.

GUM 10 is for random independent uncertainties. Systematic uncertainty, by definition is not independent.

Reply to  Bellman
July 11, 2023 6:43 am

When you quote Gum 10 you are quoting something that ignores systematic uncertainty. It is part of your meme that “all uncertainty is random, Gaussian, and cancels”. Why do you never want to consider that field measurements ALWAYS have systematic uncertainty? Pat’s study is about one factor that contributes to systematic uncertainty in an LIG device. Yet you continue to hold that his study is wrong for some reason.

Systematic uncertainty is different for every different measuring device. You *never know* where a device is exactly with calibration drift. Calibration drift is a time dependent factor as well as an equipment factor. NOT KNOWING is what the uncertainty interval is supposed to be used for.

Device design shortcomings, however, are not calibration drift. You *can* analyze them and determine a mean value for the uncertainty introduced by the design shortcomings and then apply that across the universe of those devices as systematic bias. Something you just can’t seem to get your head around!

GUM 10 is for repeated measurements of the same thing using the same device under identical conditions. Yet you want to apply it to *everything* – including temperature measurements from thousands of different locations using thousands of different devices with wildly different variances and wildly different systematic uncertainties.

Every thread you get involved in here on WUWT is focused on real temperature measurements in the real world and yet you want to try and cram everything into the same box with you – where all uncertainty is random, Gaussian, and cancels. You can’t even admit that Possolo’s Ex 2 in TN1900 is a LEARNING example and not something that is useful in the real world of “global average temperature”.

If all you can talk about is life in your alternate universe then you are wasting everyone’s time and bandwidth on here. No one cares what happens in your alternate universe.

Reply to  Tim Gorman
July 11, 2023 4:44 pm

TN 1900 is an example of one component of a combined uncertainty. The GUM goes to great lengths to raise awareness of including ALL uncertainty. TN 1900 does not address measurement uncertainty or any systematic uncertainty. Hubbard and Lin showed that systematic uncertainty must be addressed at each station individually. There are too many microclimate variables to assume a standard value. Anthony has proved that with his station siting studies.

Reply to  Tim Gorman
July 12, 2023 8:36 am

When you quote Gum 10 you are quoting something that ignores systematic uncertainty.

You were claiming I said that it did. I’m asking you to back up that claim or admit it was a lie.

Yet you continue to hold that his study is wrong for some reason.

For a number of reasons.

Systematic uncertainty is different for every different measuring device.

Which is one of them. If every device has a different systematic, then you can’t count those errors as being the same across all measurements. For the purpose of one instrument it’s systematic, but across all instruments it’s more random.

GUM 10 is for repeated measurements of the same thing using the same device under identical conditions.

It is not. There are no requirements for any of the inputs to be based on repeated measurements.

Yet you want to apply it to *everything* – including temperature measurements from thousands of different locations using thousands of different devices with wildly different variances and wildly different systematic uncertainties.

Nobody, as far as I know, is applying GUM 10 to any uncertainty analysis of a global mean anomaly. It’s much more complicated than that.

where all uncertainty is random, Gaussian, and cancels.

For the benefit of any new readers here. This is a lie that Tim repeats over and over again in the hope that someone will be gullible enough to believe it.

You can’t even admit that Possolo’s Ex 2 in TN1900 is a LEARNING example and not something that is useful in the real world of “global average temperature”.

That’s exactly what I keep trying to tell Jim.

If all you can talk about is life in your alternate universe then you are wasting everyone’s time and bandwidth on here.

Nobody’s forcing you to waste it. Normally all that happens in these threads is I’ll ask a simple question or make a simple observation, and then I’ll be bombarded with countless attacks, which like a fool I have to respond to. And then we get stuck in an infinite loop.

Reply to  Bellman
July 12, 2023 8:38 am

Stop posting nonsense that you don’t understand.

old cocky
Reply to  Jim Gorman
July 9, 2023 4:26 pm

1) Daily Average – At a minimum, the Type B estimate of LIG uncertainty is 0.5° F. They add in quadrature. √(0.5² + 0.5²) = 0.7. This is the measurement uncertainty in the daily average. It should be propagated into any further calculations.

I think that’s the total, so the variance needs to be scaled for the average.

It might be worth looking at relative uncertainties to see how they compare.
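The arithmetic in that exchange, as a minimal sketch. Both candidate results are shown; whether the quadrature sum or the scaled value is the right uncertainty for the daily average is exactly what is being debated here:

```python
import math

# Quadrature combination of the two Type B uncertainties (0.5 degF each)
# that enter a (Tmax + Tmin)/2 daily average. Which result applies is the
# point in dispute in the exchange above.
u_tmax = u_tmin = 0.5                  # degF, Type B estimate per reading

u_sum = math.hypot(u_tmax, u_tmin)     # sqrt(0.5^2 + 0.5^2)
u_avg = u_sum / 2                      # scaled for the divide-by-2

print(f"u(sum)     = {u_sum:.2f} degF")   # the 0.7 quoted above
print(f"u(average) = {u_avg:.2f} degF")
```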

Reply to  Tim Gorman
July 9, 2023 4:15 pm

The interval within which the population mean might lie is really only applicable when you have what the document calls “repeatable” measurements.

You keep wanting to have this both ways. You take a document that talks about repeatability in a laboratory setting, but then use it to make claims about statistical field experiments. The document says nothing about the population mean, it’s only talking about how to get a better estimate of the measurement of a single thing by repeated measurements. Yet you somehow draw the conclusion that it’s saying the idea of a confidence interval isn’t even applicable without repeated measurements.

I especially like the treatment of “range” as an index into uncertainty estimates.

Which particular range are you talking about? They use it somewhat loosely in two different contexts. As a confidence interval, and as the range of the entire data set. Two very different ranges.

The range is directly related to the variance in a data set.

You realize this is exactly what I’ve been trying to explain to you. The SEM is directly related to the SD of a data set. It’s just you keep insisting it’s impossible to derive the uncertainty of a mean from just one sample.

Pat’s paper is all about (3).

Really? You think the resolution in the thermometers means there is no difference in the measurements. That all temperatures recorded anywhere and anytime are always the same, because the resolution of the LiG can’t detect the difference between 10°C and 20°C?

What (3) is saying is what I keep saying – that repeated measurements of the same thing will not help you if you get the same reading each time. Because the random errors in the measurement are less than the resolution (what they call precision, incorrectly) of your device.

This is not a problem when you are measuring lots of different things of different sizes, because then the variation is going to be bigger than the resolution.

old cocky
Reply to  Bellman
July 9, 2023 4:29 pm

This is not a problem when you are measuring lots of different things of different sizes, because then the variation is going to be bigger than the resolution.

I’m fairly sure this is just spurious precision arising from the calculation.

Reply to  old cocky
July 10, 2023 4:53 am

The variation of different things is of dubious value at best. The average of SH and NH temps, which have different variances and ranges, tells you exactly as much about the temperature of the Earth as combining the heights of Shetland ponies with the heights of Arabians would tell you about the heights of horses. The average of a bi-modal or multi-modal distribution tells you little about the distribution and the total variance of the combined data just gets huge.

Reply to  old cocky
July 10, 2023 4:14 pm

“I’m fairly sure this is just spurious precision arising from the calculation.”

Lots of people seem to believe this, and tell me repeatedly. What I never see is an explanation or proof or demonstration as to why all the precision is spurious.

To me it seems to come down to the simple fact that people here don’t understand what an average is.

Reply to  Bellman
July 10, 2023 4:20 pm

You mean that people that have had to LIVE with real world measurements and results don’t understand. Only you know what the average of different things really is in the real world.

Reply to  Tim Gorman
July 10, 2023 5:00 pm

You mean that people that have had to LIVE with real world measurements and results don’t understand.

Arguments from spurious authority carry little weight with me. Unless you can demonstrate in some way why it’s impossible for the precision of an average to be less than the individual measurements, I’m not interested. As things stand you are just arguing from tradition.

I’ve given you demonstrations showing that the resolution of an average is not destroyed when you reduce the resolution of the data. The fact you can’t even accept that, or any other piece of evidence, makes it clear this is just an article of faith for you. Man can never fly, you say; therefore that plane cannot really be flying.

Only you know what the average of different things really is in the real world.

Only me, and anyone who’s studied statistics.
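The claim being argued here can at least be stated as a runnable sketch. It assumes the rounding errors are independent and roughly random across varied readings, which is precisely the assumption the other side of this thread rejects:

```python
import random
import statistics

# Sketch of the disputed claim: the mean of readings rounded to a coarse
# 1-unit resolution can track the true mean more finely than that
# resolution -- IF the rounding errors behave as independent random
# offsets, which is the contested assumption.
random.seed(1)

true_values = [random.uniform(10.0, 30.0) for _ in range(10_000)]
readings = [round(v) for v in true_values]   # 1-unit instrument resolution

true_mean = statistics.mean(true_values)
read_mean = statistics.mean(readings)

print(f"true mean:    {true_mean:.3f}")
print(f"rounded mean: {read_mean:.3f}")
print(f"difference:   {abs(true_mean - read_mean):.4f}")
```

None of this addresses systematic bias, which would survive the averaging unchanged.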

Reply to  Bellman
July 11, 2023 3:49 am

“Arguments from spurious authority carry little weight with me.”

Taylor, Bevington, and Possolo are “spurious authority”? Only for you, only for you!

“Unless you can demonstrate in some way why it’s impossible for the precision of an average to be less than the individual measurements, I’m not interested. As things stand you are just arguing from tradition.”

Don’t try using argumentative fallacies, you don’t understand them any better than you understand uncertainty.

Even a six year old can understand that the average can’t have any more precision than the measuring device provides. This has been explained to you over and over ad infinitum. And you simply refuse to learn. If this were true you could get infinitely precise measurements from a yardstick. You’d just have to make enough measurements. Karlomonte’s ask of you to patent such a micrometer-yardstick *should* have gotten through to you but it didn’t. It never will. You absolutely, stubbornly refuse to learn.

Reply to  Tim Gorman
July 11, 2023 5:03 am

Taylor, Bevington, and Possolo are “spurious authority”? Only for you, only for you!

Truly pathetic. I’d say a new low even for you, but I expect you’ve done worse.

Anyone can follow this thread up to see what I was talking about – and it was not any of those authors. I’m specifically talking about the labs that supposedly say “Thou must never quote an average to more decimal places than the individual measurements”.

None of those sources say anything of the sort – and if they did I’m sure you could quote their exact words. They all talk about the sensible rules, based on actually calculating the uncertainty, and which can easily lead to you having more digits in the average.

Even a six year old can understand that the average can’t have any more precision than the measuring device provides.

I don’t take advice from a six-year-old any more than I would from some random physics lab. Arguments from incredulity are not very convincing. Just provide the mathematical proof, or a clear explanation as to why it’s impossible – not just a claim that it seems obvious.

“This has been explained to you over and over ad infinitum.”

So that’s now an argumentum ad nauseam. Any more fallacies you’d like to demonstrate?

If this were true you could get infinitely precise measurements from a yardstick.

No you could not for multiple reasons I keep explaining to you.

Reply to  Bellman
July 11, 2023 6:23 am

No you could not for multiple reasons I keep explaining to you.

When are you going to start?

After you find more “errors” in the ISO terminology?

Reply to  Bellman
July 11, 2023 7:06 am

“None of those sources say anything of the sort – and if they did I’m sure you could quote their exact words.”

I’ve given you their exact quotes on this at least three times. I’m not going to repeat myself. Go look at Taylor, section 2.5 and Bevington, Chapter 1, Pages 2 and 3.

For the umpteenth time, you need to READ BOTH AUTHORS COMPLETELY, FOR MEANING, AND DO THE EXAMPLES!

STOP CHERRY-PICKING AND LEARN THE SUBJECT!

Nor have you given any reasons. You just keep saying the average can be more precise than the actual measurements – meaning a yardstick can be used as a micrometer if only you take enough readings.

old cocky
Reply to  Bellman
July 10, 2023 5:34 pm

To me it seems to come down to the simple fact that people here don’t understand what an average is.

Which “average” would you like? There are plenty to select from.
Assuming you mean the arithmetic mean, it’s the sum of the observations divided by the number of observations. It doesn’t necessarily lie in the possible sample space.

“I’m fairly sure this is just spurious precision arising from the calculation.”

Lots of people seem to believe this, and tell me repeatedly. What I never see is an explanation or proof or demonstration as to why all the precision is spurious.

I did this before, but there’s been a lot of to-ing and fro-ing between lots of people so would have been easy to miss.

Let’s try the faces of our familiar 6-sided die. The observations are {1, 2, 3, 4, 5, 6}. The sum is 21, and the mean is 21/6. Taking out common factors, it’s 7/2.
Converting to decimal, you get 3.5, which has added a significant digit.

On the other hand, if the set was {2, 4, 6, 8, 10, 12}, you would have a sum of 42 (an auspicious number), and a mean of 42/6, or 7. No additional significant digits.

Similarly, {1.0, 2.0, 3.0} is 6.0 / 3, or 2.0, whereas {1.0, 2.0, 4.0} is 7.0 / 3, or 2.333333333333333333 to however many digits you care to express it.
Has the presence of 4.0 rather than 3.0 in the set of observations improved the precision of the mean?
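For what it’s worth, the exact means above can be checked with Python’s fractions module (just a sketch; the helper name is mine):

```python
from fractions import Fraction

def exact_mean(values):
    """Exact arithmetic mean, free of decimal-conversion artifacts."""
    return sum(Fraction(v) for v in values) / len(values)

print(exact_mean([1, 2, 3, 4, 5, 6]))     # 7/2
print(exact_mean([2, 4, 6, 8, 10, 12]))   # 7
print(exact_mean(["1.0", "2.0", "4.0"]))  # 7/3
```

The fractions keep the answer exact; only the conversion to decimal introduces the endless string of 3s.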

Reply to  old cocky
July 10, 2023 5:54 pm

“Converting to decimal, you get 3.5, which has added a significant digit.”

Indeed it has. But that doesn’t mean that extra digit is spurious. 3.5 very much is the correct mean of a fair 6-sided die, and insisting on rounding it to 4 gives a very misleading result.

The fact that the .5 is not spurious can easily be seen by throwing very large numbers of dice. You can be reasonably certain that the answer will always be close to 3.5, and not 4.

“Similarly, {1.0, 2.0, 3.0} is 6.0 / 3, or 2.0, whereas {1.0, 2.0, 4.0} is 7.0 / 3, or 2.333333333333333333 to however many digits you care to express it. Has the presence of 4.0 rather than 3.0 in the set of observations improved the precision of the mean?”

Which is just a problem of pushing fractions into a decimal system. The extra digits aren’t spurious; the correct answer is simply 2 1/3, and its decimal form can be written as accurately as you want or need.

But in most real world situations this isn’t a problem. There’s enough uncertainty in the sampling to make too many digits not useful or meaningful.

What none of this means is that an average cannot be more precise, or have more digits, than the individual measurements.

Reply to  Bellman
July 10, 2023 6:38 pm

First, dice are not measurements. Measurements of temperature are samples of a continuous function. Totally different from a discrete function.

As an example, what is the probability function of multiple fair die rolls? IOW, is it triangular, normal, Poisson, binomial, rectangular, etc.?

Now let’s look at Tavg. What is the probability function of Tavg? How about a monthly distribution of Tavg?

Second, the mean may be 3.5, but what is the probability of rolling 3.5? You need to discuss 3.5 in terms of Expected Value. Then discuss a single measurement in terms of an Expected Value, i.e., the probability of a certain temperature.

Reply to  Jim Gorman
July 10, 2023 7:45 pm

First, dice are not measurements.

It wasn’t my example.

Totally different from a discrete function.

Not totally different. For many circumstances they work the same, it’s just the maths becomes more complicated when dealing with continuous functions.

As an example, what is the probability function of multiple fair die rolls

How many do you want? For a large enough number of throws, the CLT tells you it can be approximated by a normal distribution.

You can work out an exact solution for different numbers of dice, but I’m not sure if the distribution has a name.

What is the probability function of Tavg?

You’ll need to give some context. For an individual station on a specific day, or for all places on the globe over all time? Are you talking about the actual temperature or the measurements? If just the measurements rounded to a specific number of digits, you have a discrete distribution in any case.

Second, the mean may be 3.5, but what is the probability of rolling 3.5?

Zero. Or as close to it as makes no difference.

You need to discuss 3.5 in terms of Expected Value.

You can, or you can discuss it as the average of a number of rolls.

Then discuss a single measurement in terms of an Expected Value, i.e., the probability of a certain temperature.

Why? You’re the only ones who seem to worry that any given value is not always the expected value. “Expected” does not mean the number we expect on any given trial; it’s the expectation of the average – the probability-weighted average of all possibilities.

And to get back to the point – the 0.5 is not spurious. It’s exactly what you expect on average. If it were spurious, any value between 3 and 4 would be equally valid. But roll the dice enough times and you can be sure the average will converge to 3.5, not 3.7 or 3.1.
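A quick simulation makes the convergence point concrete (Python; the million-roll count is arbitrary):

```python
import random

random.seed(42)  # fixed seed so the demo is repeatable
n = 1_000_000

# average of a million fair die throws
mean = sum(random.randint(1, 6) for _ in range(n)) / n
print(mean)  # lands very close to 3.5, nowhere near 3 or 4
```

Run it as many times as you like; the average never settles at 3 or 4, which is what rounding the mean to a whole number would imply.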

old cocky
Reply to  Bellman
July 10, 2023 7:20 pm

It’s a fair cop with the halves, but once you have remainders from dividing by larger primes you introduce a lot of spurious decimal precision.
Try {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2} (14/13)

Yes, it’s a problem with base conversion, but what people see is the decimal notation.

But in most real world situations this isn’t a problem. There’s enough uncertainty in the sampling to make too many digits not useful or meaningful.

Not talking about sampling; purely the perceived precision of the mean.

What none of this means is that an average cannot be more precise, or have more digits, than the individual measurements.

You just agreed about the decimal conversion of thirds.

Reply to  old cocky
July 11, 2023 5:38 am

Try {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2} (14/13)

The trouble here is you are trying to talk about spurious precision in exact averages. There is nothing spurious about 14/13. It’s exactly the average of those numbers. As a mathematician that’s the only acceptable way of writing it 😉 But if you insist on converting it to a decimal, then any number of places will be an approximation (ignoring the ways you can indicate repeating digits). The more decimal places, the less spurious the answer.

However this is only tangential to the problem being posed, which is what happens when you have a messy real world average of multiple different values all measured with a degree of uncertainty and to a specific number of digits.

The claim made here, and elsewhere, is that if you measure everything, say, to the nearest unit, it’s impossible to know the average to less than the nearest unit. And that’s where I disagree. With a large enough sample size the uncertainty in the mean, whether coming from sampling or from the measurements, can be less than a unit. If you calculate the mean as 3.5123, and the uncertainty as ±0.12, you should write this as 3.51 ± 0.12, rather than 4.

The idea that the 0.5 is spurious is wrong in my opinion – it carries a lot of useful information which is lost if you round it to the nearest unit, and just quoting 4 as an answer not only adds uncertainty, it distorts the result. The implied uncertainty of just stating 4 is that the answer could be anything from 3.5 to 4.5, when we know it’s almost certainly not larger than 3.63 but could be less than 3.5.
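A sketch of the claim in code (Python; the distribution, its parameters, and the sample size are invented for illustration):

```python
import random

random.seed(1)

# hypothetical "true" quantity values, spread about 3.5123
true_values = [random.gauss(3.5123, 1.0) for _ in range(100_000)]

# "measure" each one only to the nearest whole unit
measured = [round(v) for v in true_values]

true_mean = sum(true_values) / len(true_values)
measured_mean = sum(measured) / len(measured)

# the mean of the unit-resolution measurements tracks the
# underlying mean to far better than one unit
print(true_mean, measured_mean)
```

With enough values, the average of the whole-unit readings recovers the underlying mean to a small fraction of a unit, which is the point in dispute.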

Reply to  Bellman
July 11, 2023 6:24 am

The claim made here, and elsewhere, is that if you measure everything, say, to the nearest unit, it’s impossible to know the average to less than the nearest unit. And that’s where I disagree.

And this why you need to patent your yardstick-micrometer…

Reply to  karlomonte
July 11, 2023 1:45 pm

I’ll tell you what. I’ll patent it for you, once you explain how you can assure there are no systematic errors above a micrometer, a practical way of taking trillions of measurements, and how we ensure that the errors in each measurement are greater than several yards.

Until then you should concentrate on your straw man gun.

Reply to  Bellman
July 11, 2023 5:22 pm

At this point I see you have two choices:

1) write to the chairman of the ISO terminology committee and tell him how they gored your sacred cow.

2) go talk to the Forestry Dept. and see if they can fit you in, before it is too late.

Reply to  karlomonte
July 11, 2023 5:36 pm

Or 3) let engineers use whatever terminology they like, whilst the rest of the world continues to use the correct term.

Reply to  Bellman
July 11, 2023 6:13 pm

So the ISO committee consists of nothing but engineers?

You know this … how exactly?

Reply to  karlomonte
July 12, 2023 6:29 am

Where did I say that? What I said was I didn’t care if engineers wanted to use their own terminology, as long as they didn’t try to enforce it on others.

Here’s what the ISO says about the term standard error.

standard error

standard deviation (2.37) of an estimator (1.12)

EXAMPLE:

If the sample mean (1.15) is the estimator of the population mean (2.35) and the standard deviation of a single random variable (2.10) is σ, then the standard error of the sample mean is σ / √n where n is the number of observations in the sample. An estimator of the standard error is S / √n where S is the sample standard deviation (1.17).

Note 1 to entry: In practice, the standard error provides a natural estimate of the standard deviation of an estimator.

Note 2 to entry: There is no (sensible) complementary term “non-standard” error. Standard error can be viewed as an abbreviation for the expression “standard deviation of an estimator”. Commonly, in practice, standard error is implicitly referring to the standard deviation of the sample mean. The notation for the standard error of the sample mean is σ_x̄.

https://www.iso.org/obp/ui/#iso:std:iso:3534:-1:ed-2:v2:en

Reply to  Bellman
July 12, 2023 7:08 am

Where did I say that? What I said was I didn’t care if engineers wanted to use their own terminology, as long as they didn’t try to enforce it on others.

Or 3) let engineers use whatever terminology they like, whilst the rest of the world continues to use the correct term.

And you STILL haven’t read the GUM, no surprise.

Reply to  karlomonte
July 12, 2023 10:23 am

Like you said, bellman believes only engineers use the GUM.

Reply to  Tim Gorman
July 12, 2023 10:36 am

I’m sorry if you find the word “engineer” derogatory. It was not meant to be. I merely used it to differentiate the people who are using the terminology in the GUM to those who use the more common definitions. And this only applies to the question of whether you think standard error is an incorrect term.

Reply to  Bellman
July 12, 2023 11:07 am

So just who do you think wrote the GUM?

Or is this another yoke-ah?

Reply to  Bellman
July 12, 2023 10:33 am

“Commonly, in practice, standard error is implicitly referring to the standard deviation of the sample mean.”

You need to work out in your brain what the “sample mean” truly is. I don’t think ISO defines it the same way you do. Read the preceding items and their references in the document and carefully work out the examples.

You will find that ISO is pretty much like the GUM. Their documents just haven’t been updated recently.

Reply to  Jim Gorman
July 12, 2023 11:09 am

Up above he claimed it’s a “true” value!

Reply to  Jim Gorman
July 12, 2023 11:32 am

You need to work out in your brain what the “sample mean” truly is

In my brain it’s the mean of a sample. But let’s see what the ISO says:

sample mean

average

arithmetic mean

sum of random variables (2.10) in a random sample (1.6) divided by the number of terms in the sum

Seems about right. Why, what did you think it meant?

OK, let’s work through the examples

EXAMPLE:

Continuing with the example from 1.9, the realization of the sample mean is 9,7 as the sum of the observed values is 97 and the sample size is 10.

97 / 10 = 9.7. Sounds like a mean to me.

Look, it’s a long day, and this thread is just more distraction. If you have something to say about the ISO definition, just say it rather than expect me to find it.

You will find that ISO is pretty much like the GUM. Their documents just haven’t been updated recently.

To paraphrase an old joke, the great thing about standards is there are so many to choose from.

Reply to  Bellman
July 11, 2023 7:23 am

“The claim made here, and elsewhere, is that if you measure everything, say, to the nearest unit, it’s impossible to know the average to less than the nearest unit. And that’s where I disagree. With a large enough sample size the uncertainty in the mean, whether coming from sampling or from the measurements, can be less than a unit. If you calculate the mean as 3.5123, and the uncertainty as ±0.12, you should write this as 3.51 ± 0.12, rather than 4.” (bolding mine, tpg)

The interval in which the average might lie, the standard deviation of the sample means, is *NOT* the precision of the mean. It has nothing to do with the precision of the mean, only of your sampling and its prediction of the mean.

The precision of the mean is determined by the precision of the measurements in the data set. The precision of the mean is determined by the data element with the least precision.

This is such a simple concept that a child can figure it out.

old cocky
Reply to  Tim Gorman
July 11, 2023 3:53 pm

Plus 1 digit for rounding. bellman caught me fair and square on that.

Reply to  old cocky
July 11, 2023 5:10 pm

That extra digit is only for intermediate calculations. It is supposed to be truncated (by rounding or whatever) in the final result so that the final result has no more precision than the constituent measurements.

Reply to  Tim Gorman
July 11, 2023 5:44 pm

Supposed to by whom? Again, none of the sacred texts tell you the final result must have no more digits than the constituent measurements.

Reply to  Bellman
July 12, 2023 6:22 am

“Again none of the sacred texts tell you to have a final result that has no more digits than the constituent measurements.”

No more digits? Huh?

“Again none of the sacred texts tell you to have a final result that has no more digits than the constituent measurements.”

Do you suffer from short-term memory loss? You’ve been given this at least once in this thread and multiple times over multiple threads!

Taylor: Section 2.2
“Rule 2.5: Rule for Stating Uncertainties – Experimental uncertainty should almost always be rounded to one significant figure.”
“Rule 2.9: Rule for stating answers – The last significant figure in any stated answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty.”

His example: stating a measured speed as 6051.78 ± 30 m/s is obviously ridiculous. The uncertainty of 30 m/s means the tens digit 5 really might be as small as 2 or as large as 8, meaning the trailing digits 1, 7, and 8 have no significance at all! Calculating an average out past where the uncertainty applies is just ludicrous. It’s what a statistician does when he has no relationship to the real world.
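Taylor’s two rules can even be mechanized (the helper below is mine, not Taylor’s, and it ignores his caveat about keeping two figures when the leading digit of the uncertainty is a 1):

```python
import math

def report(value, uncertainty):
    """Round per Taylor's rules: uncertainty to one significant
    figure, value to the same decimal position."""
    # decimal position of the uncertainty's leading digit
    exp = math.floor(math.log10(abs(uncertainty)))
    return round(value, -exp), round(uncertainty, -exp)

print(report(6051.78, 30.0))  # the trailing 1.78 is dropped as insignificant
```

Applied to the speed example, the 6051.78 collapses to 6050 – the digits buried under the ±30 simply don’t survive.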

Bevington, Chapter 1.1, Page 4:
“In other words, we let the uncertainty define the precision to which we quote our result.”

Exactly the same as Taylor.

Since the uncertainty is driven, at least in part, by the resolution of the measuring device, which, in turn, determines the precision with which a result can be accurately stated, the average value falls under the same limitation – it should not be quoted to a precision level greater than the uncertainty limit. Since the uncertainty magnitude is based on the least precise data provided by a measurement, it also determines the precision with which the result can be stated.

Why do you just keep on making these idiotic assertions? None of my five statistics books even covers this. In fact, of the two I have at my desk right now, neither has the words “resolution”, “precision”, or “measurement” in the index! That should be an indication of how much emphasis introductory statistics textbooks put on the science of metrology – NONE! Is it any wonder that so many in science and math today don’t have a clue about the real world of measurements and uncertainty? That includes *YOU*!

Reply to  Tim Gorman
July 12, 2023 7:10 am

He has a mental block to anything that is out-of-phase with his GAT hoax predilections.

So now the GUM is “wrong”, without him putting eyeball to page.

Reply to  karlomonte
July 12, 2023 10:24 am

he doesn’t *read* anything, he just cherry-picks.

Reply to  Tim Gorman
July 12, 2023 7:26 am

Experimental uncertainty should almost always be rounded to one significant figure.

Exactly. That’s the uncertainty, not the result. If the uncertainty of a mean is less than the uncertainty of the measurements, you can quote the result to more digits than the measurements. He tells you that explicitly. You should know that, as you’ve “done all the exercises”.

In other words, we let the uncertainty define the precision to which we quote our result.

Again, exactly my point. The uncertainty defines the “precision”. Not some silly rule about the number of digits in the original measurements – the uncertainty.

Reply to  Bellman
July 12, 2023 10:30 am

“Exactly. That’s the uncertainty not the result.”

Why didn’t you read the rest?

“The last significant figure in any stated answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty.”

You can’t even read and understand two paragraphs. You have to cherry-pick one thing!

“If the uncertainty of a mean is less than the uncertainty of the measurements…”

The “uncertainty of the mean” the way you use it is the interval in which the population average might lie. As we keep telling you that has nothing to do, ABSOLUTELY NOTHING, with the accuracy of the mean. You can calculate the population mean down to the millionth digit and it can still be wrong by a million units.

The SEM is *NOT* the uncertainty of the average. You simply don’t understand precision vs accuracy. You can put five shots on target into the same hole (high precision) but they may be far from the center of the target. You can calculate the population mean down to “one hole” and it can still be right at the edge of the target (highly inaccurate).

Do you understand *anything* about the real world? It’s been explained to you often enough.

Reply to  Tim Gorman
July 12, 2023 11:06 am

Why didn’t you read the rest?

Just because I didn’t quote something doesn’t mean I haven’t read it. The second half of the quote continues to agree with me, and is just saying what I said in my previous comment – the uncertainty determines the number of digits to report.

The “uncertainty of the mean” the way you use it is the interval in which the population average might lie.

Not really, but let’s not quibble.

As we keep telling you that has nothing to do, ABSOLUTELY NOTHING, with the accuracy of the mean.

Have you ever read Taylor? He makes it clear that he regards the SEM as the uncertainty of the mean. As always this doesn’t include the possibility of systematic errors, but that doesn’t mean it tells you ABSOLUTELY NOTHING about the accuracy, let alone the uncertainty.

You can calculate the population mean down to the millionth digit and it can still be wrong by a million units.

I’m sure you could. But you would have to have a really bad experiment to do so.

The SEM is *NOT* the uncertainty of the average. You simply don’t understand precision vs accuracy.

Yes, we keep on going round in circles over this. Precision is not accuracy. Accuracy is not trueness (as defined in the VIM). You can be very precise and still wrong. You can be very true and still not accurate. None of this is an argument for ignoring the precision of your measurement or average.

Do you understand *anything* about the real world? It’s been explained to you often enough.

And it doesn’t matter how many times I say I agree you will still keep claiming I don’t.

The problem is you keep using this argument as a distraction. This argument has nothing to do with how many significant figures you report or whether it’s better to have a large or small sample size, or whether the SEM is an indication of the uncertainty a sample mean.

Reply to  Bellman
July 12, 2023 4:24 pm

“He makes it clear that he regards the SEM as the uncertainty of the mean.”

Give me a section and page number. My guess is that you will quote something out of Chapter 4 or later where he is talking about random and Gaussian uncertainty — WHICH MEANS MEASURING THE SAME THING MULTIPLE TIMES.

“As always this doesn’t include the possibility of systematic errors, but that doesn’t mean it tells you ABSOLUTELY NOTHING about the accuracy, let alone the uncertainty.”

It tells you *NOTHING* about the real world. Again, if you would stop cherry-picking and actually read and understand Taylor and Bevington you *might* get a glimpse of this. You haven’t even bothered to read Taylor’s Chapter 1!

It is so tiresome typing in here from Taylor’s book that I can’t believe it. YOU COULD JUST READ HIM FROM COVER TO COVER!

Chapter 1 Intro:
“Experience has shown that no measurement, however carefully made, can be made completely free of uncertainties.”

“This chapter then describes how (in some simple cases, at least) the magnitude of the experimental uncertainties can be estimated realistically, often by means little more than plain common sense.”

Section 1.4: “In the applied sciences, for example, the engineers designing a power plant must know the characteristics of the materials and fuels they plan to use. The manufacturer of a pocket calculator must know the properties of its various electronic components. In each case someone must measure the required parameters, and having measured them, must establish their reliability, which requires error analysis. Engineers concerned with the safety of airplanes, trains, or cars must understand the uncertainties in drivers’ reaction times, in braking distances, and in a host of other variables; failure to carry out error analysis can lead to accidents such as that shown on the cover of this book.”

This is what we are trying to explain to you and you refuse to accept. Quoting the average with more precision than the measurements themselves support can lead to every single one of the instances above. The average simply can’t be made more reliable by calculating it out to more digits than the measurements themselves justify.

You only confirm that you have never, EVER, been in a situation where health and welfare depend on your measurements. And you simply won’t believe that significant digit rules apply ALWAYS, not just when *you* choose to follow them.

“I’m sure you could. But you would have to have a really bad experiment to do so.”

You just whiffed again. You simply can’t bring yourself to admit the truth!

“None of this is an argument for ignoring the precision of your measurement or average.”

You just don’t get the concept of uncertainty and the role it plays in the real world. You never will until a significant emotional event brings you face-to-face with reality. And since you will never be involved in a health and welfare situation you will never have that significant emotional event.

“The problem is you keep using this argument as a distraction”

It’s *NOT* a distraction – it has real world consequences. And you refuse to accept that. It’s because it goes against your CAGW religious dogma of “all uncertainty is random, Gaussian, and cancels”.

Reply to  Tim Gorman
July 12, 2023 6:35 pm

Give me a section and page number.

What? I thought you had read the entire book, done all the exercises and memorized it for meaning.

where he is talking about random and Gaussian uncertainty — WHICH MEANS MEASURING THE SAME THING MULTIPLE TIMES.

And there go the goalposts. You claim that when reporting an average the number of digits must be based on the number in the measurements. Now you are going to claim that this rule doesn’t apply when you are actually just measuring the same thing multiple times. How would this make any sense?

Of course he’s talking about measuring the same thing, because that’s the point of the book. It’s a book about measuring things, not statistics. But here goes. Page 102, section 4.4 (missing a few symbols to save time).

If … are the results of measurements of the same quantity then, as we have seen, our best estimate for the quantity x is their mean x̄. We have also seen that the standard deviation σ characterizes the average uncertainty of the separate measurements. Our answer x_best = x̄, however, represents a judicious combination of all N measurements, and we have every reason to think it will be more reliable than any one of the measurements taken alone. In Chapter 5, I will prove that the uncertainty in the final answer is given by the standard deviation σ divided by √N. This quantity is called the standard deviation of the mean, or SDOM. (Other common names are standard error and standard error of the mean.)

Note, the mean is more reliable than any one measurement, and σ divided by √N is the uncertainty.
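The σ/√N claim is easy to check numerically (Python; the measurement spread and counts are invented for illustration):

```python
import random
import statistics

random.seed(7)
N, trials = 100, 2000
sigma, true_value = 2.0, 10.0

# repeat the whole "average of N measurements" experiment many times
means = [statistics.fmean(random.gauss(true_value, sigma) for _ in range(N))
         for _ in range(trials)]

# the scatter of those means matches sigma / sqrt(N)
print(statistics.stdev(means), sigma / N ** 0.5)
```

The observed scatter of the repeated means sits right on σ/√N, i.e. a tenth of the single-measurement scatter for N = 100.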

Quoting the average with more precision than the measurements themselves support can lead to every single one of the instances above.

Yet Taylor never bothers to warn about this. And actually says the mean may be more precise than the measurements.

And since you will never be involved in a health and welfare situation you will never have that significant emotional event.

Glad you can predict my future. But a few dead relatives and a few health scares myself, suggests your prediction may not be that accurate.

Reply to  Tim Gorman
July 12, 2023 11:12 am

He still doesn’t understand uncertainty versus error.

old cocky
Reply to  Tim Gorman
July 11, 2023 5:57 pm

There seem to be various opinions on this, but one rule of thumb appears to be that the number of significant figures is of the same order as the sample size. That makes sense, because the mean is the sum divided by the count.

That is totally separate from the resolution, accuracy or precision of the data points.

For determining the mean value of a series of measurements of a single item, then truncation does seem appropriate. It’s probably context sensitive.

Reply to  old cocky
July 12, 2023 4:43 am

In physical science and engineering it is extremely important not to give the impression that you have measured things to a higher resolution than you actually did. Adding significant figures to an average value over and above what you actually measured gives others the impression you have used higher resolution measurements than you actually did.

It’s why the resolution of the average should be no greater than the resolution of the measurement with the least resolution in your data set. The resolution is always limited by that one value, unless, of course, you decide to delete it from the set. But then you have to justify doing that.

Reply to  Tim Gorman
July 12, 2023 6:50 am

In physical science and engineering it is extremely important not to give the impression that you have measured things to a higher resolution than you actually did.

But as you keep saying, an average is not a measurement. It’s a statistical calculation based on, possibly, hundreds or thousands of individual measurements.

Adding significant figures to an average value over and above what you actually measured gives others the impression you have used higher resolution measurements than you actually did.

Only people who don’t understand what an average is. (Which if comments here are to be believed might include most physics and chemistry students.)

It’s why the resolution of the average should be no greater than the resolution of the measurement with the least resolution in your data set.

The main point of averaging is to be able to detect differences in populations, with the well understood point being the larger your sample size the easier it is to detect small differences. The size of the difference you can detect is the definition of resolution.

Take the die example. You have a die, and want to test if it’s fair or biased. A simple test is to throw the die a number of times and see what the average is. If it’s significantly different from the expected 3.5 you can say with some confidence that it is not fair. The more throws you make in your test the easier it is to detect small significant differences.
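A sketch of that test (Python; the loaded-die weights are invented for illustration):

```python
import random

random.seed(3)

def average_roll(weights, n):
    """Mean of n throws of a six-sided die with the given face weights."""
    faces = [1, 2, 3, 4, 5, 6]
    return sum(random.choices(faces, weights=weights, k=n)) / n

fair = [1, 1, 1, 1, 1, 1]
loaded = [1, 1, 1, 1, 1, 2]   # a six comes up twice as often

# with 10,000 throws the SEM is about 1.71/sqrt(10000) ≈ 0.017,
# so the loaded die's shift of roughly 0.36 is easily detected
print(average_roll(fair, 10_000), average_roll(loaded, 10_000))
```

The fair die averages very close to 3.5; the loaded one sits many standard errors above it, which is exactly the detection the averaging buys you.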

Reply to  Bellman
July 12, 2023 7:11 am

Only people who don’t understand what an average is. (Which if comments here are to be believed might include most physics and chemistry students.)

Clown. You are obviously not part of this set.

Reply to  Bellman
July 12, 2023 9:30 am

The wheels on the bus go round and round!

Reply to  Bellman
July 12, 2023 10:03 am

“But as you keep saying, an average is not a measurement. It’s a statistical calculation based on, possibly, hundreds or thousands of individual measurements.”

*YOU* are the one who wants to present it as a measurement! In the case where you have multiple measurements of the same thing under the same conditions and systematic uncertainty is reduced to insignificance, the average is a representation of the “true” value. You simply don’t want to mislead others as to how precisely you made your measurements.

In situations where this is not the case it actually becomes even more critical not to overstate the precision. Suppose you are travelling in a capsule to Mars. Your engine burn times are based on the angular velocity you have with the planet. Do you want to mislead everyone that you measured that angular velocity to a precision you can’t justify with the measurements you made of it? Doing so could lead to plowing into the planet or missing your insertion window by a wide margin because of the distance travelled in the wrong direction and at the wrong velocity.

“Only people who don’t understand what an average is. (Which if comments here are to be believed might include most physics and chemistry students.)”

NO, *including* people that understand what an average is. People who assume that you know not to misrepresent your findings. People who depend on the precision with which you made your measurements to make safe products.

“The main point of averaging is to be able to detect differences in populations, with the well understood point being the larger your sample size the easier it is to detect small differences. The size of the difference you can detect is the definition of resolution.”

Huh? “differences in populations”? You mean like the height differences between Shetland ponies and Arabians? Or do you mean differences in results from similar populations? Like a control group vs a test group where both groups are similar populations?

Once again, you betray your alternate universe. The subject at hand is PHYSICAL MEASUREMENTS such as temperatures. Combining different temperatures from around the world *IS* like finding the average height of the animals in a corral that are a mix of Shetland ponies and Arabians. They are not common populations. Neither are temperatures taken in Manitoba and Rio de Janeiro common populations.

Reply to  Tim Gorman
July 12, 2023 11:14 am

He won’t understand what angular velocity is.

Reply to  Tim Gorman
July 12, 2023 2:34 pm

Your engine burn times are based on the angular velocity you have with the planet. Do you want to mislead everyone that you measured that angular velocity to a precision you can’t justify with the measurements you made of it?

If I was traveling to Mars and my life depended on knowing the angular velocity, and for some reason it could only be measured by taking the average of a large number of measurements with an imprecise instrument – then I would definitely not want them to round the best estimate of the mean to the nearest integer, or whatever. Why would I possibly want to change a value that was a best estimate for one you knew was probably wrong? It’s not much consolation as I crash into Mars to know that at least they didn’t overstate the precision.

Reply to  Bellman
July 12, 2023 4:27 pm

So you would rather have them depend on a value you give them that is more precise than your measurements justify. And you can’t even comprehend the ramifications of that!

If they *know* the uncertainty in your average then they can allow for it – something they can’t properly do if you misrepresent the uncertainty by giving a more precise average than the measurements justify.

old cocky
Reply to  Bellman
July 11, 2023 2:42 pm

I feel like Michael Palin 🙁

There is nothing spurious about 14/13. It’s exactly the average of those numbers. As a mathematician that’s the only acceptable way of writing it 😉 But if you insist on converting it to a decimal,

and what is that if not spurious precision?
It all looks very sciency, but 14/13 (or 15/13 or 25/13 or …) is purely a ratio of integers. The decimal representation leads one to think that the observations are also continuous values which have been measured to the same resolution.

Rather than the discrete values, repeat the exercise with measurements to 2 decimal places {1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 2.00}. Each of those measurements has an implicit resolution of +/- 0.005, so it’s now 14.00 +/- 0.065 (feel free to add in quadrature if you prefer), so are you still going to go to 20 decimal places?

Reply to  old cocky
July 11, 2023 3:34 pm

You are going to have to define “spurious” in this context. To me it’s implying any digits after a certain value can be discarded because they are essentially random.

“Each of those measurements has an implicit resolution of +/- 0.005, so it’s now 14.00 +/- .065 (feel free to add in quadrature if you prefer), so are you still going to go to 20 decimal places?”

No. As I keep saying you need to estimate the uncertainty and use that. In your case if the uncertainty of the sum is ±0.065, then the uncertainty of the mean is ±0.005. You can report the average as 1.077 ± 0.005.

This, though, supposes the uncertainties really are ±0.005 based only on the number of digits, they could be much larger. Secondly, you are only talking about the measurement uncertainty here. Assuming this is a sample, the standard error of the mean is a better guide. In this case that would be 0.074, and using a 2σ uncertainty interval, you could say 1.08 ± 0.15. This would be true whether you measure to the nearest integer, or 1/100th.
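For anyone wanting to check the arithmetic in this exchange, a short Python sketch (assuming the data set is twelve 1.00 readings and one 2.00, which is what reproduces the quoted 14.00, ±0.065, 1.077, and 0.074 figures):

```python
import statistics

# Twelve readings of 1.00 and one of 2.00, each recorded to ±0.005
# (assumption: this set reproduces the figures quoted in the thread).
data = [1.00] * 12 + [2.00]
n = len(data)

mean = statistics.fmean(data)    # 14/13 ≈ 1.0769

# Worst-case (linear) resolution uncertainty of the sum, then of the mean.
u_read = 0.005
u_sum = n * u_read               # 13 × 0.005 = 0.065
u_mean_res = u_sum / n           # back to 0.005

# Standard error of the mean from the spread of the readings
# (population standard deviation, matching the quoted 0.074).
sem = statistics.pstdev(data) / n ** 0.5

print(f"mean = {mean:.3f} ± {u_mean_res:.3f} (resolution)")
print(f"mean = {mean:.2f} ± {2 * sem:.2f} (2-sigma SEM)")
```

The two print lines reproduce the “1.077 ± 0.005” and “1.08 ± 0.15” results from the comment above.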

old cocky
Reply to  Bellman
July 11, 2023 3:45 pm

No. As I keep saying you need to estimate the uncertainty and use that. In your case if the uncertainty of the sum is ±0.065, then the uncertainty of the mean is ±0.005. You can report the average as 1.077 ± 0.005.

Thank you.

This, though, supposes the uncertainties really are ±0.005 based only on the number of digits, they could be much larger. Secondly, you are only talking about the measurement uncertainty here. Assuming this is a sample, the standard error of the mean is a better guide. In this case that would be 0.074, and using a 2σ uncertainty interval, you could say 1.08 ± 0.15. This would be true whether you measure to the nearest integer, or 1/100th.

That’s over-complicating it. Make it as simple as possible, but no simpler.

Reply to  old cocky
July 11, 2023 5:29 pm

Make it as simple as possible, but no simpler.

But that’s why I don’t like the SF rules. They are far too simple and result in either too much or too little uncertainty.

Reply to  Bellman
July 11, 2023 6:15 pm

Learn what the word means before using it in sentences. Sounds just like the IPCC.

Reply to  Bellman
July 12, 2023 6:27 am

“They are far too simple and result in either too much or too little uncertainty.”

Wrong! They help PREVENT either too much or too little uncertainty! They help get just the right amount of uncertainty!

Reply to  Tim Gorman
July 12, 2023 7:39 am

I don’t know why people waste their time writing all these books on uncertainty. If you can get just the right amount of uncertainty by just counting the digits in your measurements, why care about all these propagation rules?

Reply to  Bellman
July 12, 2023 4:41 pm

Because people’s lives and fortunes depend on doing measurements right and presenting your results correctly.

Reply to  Bellman
July 12, 2023 6:40 am

The truth comes out. You don’t LIKE Significant Digit rules. Too bad.

You are displaying your lack of knowledge and appreciation of what physical measurements consist of.

SF rules were developed for a reason.

One, so that people attempting to replicate an experiment would get similar results. The measurements could be refined by using better methods and devices.

Two, people replicating experiments would not think they needed to purchase better instruments with better resolution in order to achieve similar results.

It is very much why trained engineers and scientists in other fields who live and die with the measurements they make simply can’t believe anomaly values from the early 1900s that were measured to the nearest units digit yet are quoted to 1/100ths resolution with uncertainty in the 1/1000ths. These are mathematicians’ values, not measurement values.

Reply to  Jim Gorman
July 12, 2023 7:13 am

Yep!

And this clown thinks the GUM is “wrong” when they gored his sacred ox.

Reply to  karlomonte
July 12, 2023 7:17 am

I wish you would make your minds up. Do I worship at the altar of the GUM, as Pat Frank was accusing me of, or do I despise it and all its works because I disagree with one comment?

Reply to  Bellman
July 12, 2023 7:27 am

Poor whinerman, so oppressed.

Get a brain, PDQ.

Reply to  Bellman
July 12, 2023 9:19 am

Irrelevant to the issue.

Reply to  Jim Gorman
July 12, 2023 9:52 am

Did I say it was relevant to any particular issue? I’m just pointing out the lack of consistency in all these ad homs.

Saying I worship at the altar of the GUM is irrelevant to any issue, saying I think the GUM is wrong because they killed my sacred ox, is irrelevant to the issue. Unfortunately it’s the standard of argument I’ve come to expect here.

Reply to  Bellman
July 12, 2023 11:15 am

bgwxyz certainly worships the GUM.

Reply to  Jim Gorman
July 12, 2023 7:14 am

The truth comes out.

You mean that truth I’ve been telling you about since day 1, however many decades ago that was. I don’t like significant figure rules.

To be clear, for the most part I’ve nothing against them. When used properly they are quite useful. What I don’t like are the very simplistic rules that seem to be taught at an early stage, and are then treated as gospel by students who don’t learn better.

But what I particularly don’t like is this completely arbitrary rule about averaging. The rule is introduced uniquely to deal with averages, even though it goes against all the other rules, which is a big clue that there is no mathematical basis for these rules.

One, so that people attempting to replicate an experiment would get similar results.

That seems like a bad reason. If you get different results by using a more refined instrument, isn’t that an important thing to know?

Two, people replicating experiments would not think they needed to purchase better instruments with better resolution in orfer to achieve similar results.

See my point above. The point of replicating experiments isn’t to confirm them. It’s to see if you can falsify them. Being able to get a significantly different result just by using better methods and devices is good as it falsifies the original experiment. The result was down to poor measuring devices.

Reply to  Bellman
July 12, 2023 7:28 am

Significant digits put your GAT hoax numbers to the lie, so you have to “dislike” them.

Reply to  karlomonte
July 12, 2023 4:28 pm

You NAILED it!

Reply to  Bellman
July 12, 2023 9:14 am

“But what I particularly don’t like is this completely arbitrary rule about averaging. The rule is introduced uniquely to deal with averages, even though it goes against all the other rules, which is a big clue that there is no mathematical basis for these rules.”

The “rules” are not about averaging. They are about maintaining the amount of information conveyed in a measurement, neither adding to nor subtracting from the original quantity of information.

How would you develop a rule that prevents someone measuring a measurand to the nearest inch but by averaging multiple measurements, reporting the average to 1/100th of an inch and the uncertainty to 1/1000th of an inch?

What rule would you recommend when adding a measurement with units resolution to one with a 1/10th resolution? Would you allow adding resolution to the units measurement?

Reply to  Jim Gorman
July 12, 2023 10:11 am

The “rule” is not about averaging.

The one I’m talking about is. The one that needs to be stated as the usual rules would result in too many digits in an average.

They are about maintaining the amount of information conveyed in a measurement, neither adding or subtracting to the original quantity of information.

And as I keep saying they don’t do that.

How would you develop a rule that prevents someone measuring a measurand to the nearest inch but by averaging multiple measurements, reporting the average to 1/100th of an inch and the uncertainty to 1/1000th of an inch?

“Why” is the question, not “how”. The how is very easy: you just say “thou must round any average to the same digits as thine measurements, regardless of the uncertainty of the average”.

What rule would you recommend when adding a measurement with units resolution to one with a 1/10th resolution?

The ones described in all the books about propagating uncertainties. If you don’t know the uncertainty of your two measurements, but have to estimate it from the number of digits, you could follow Taylor’s suggestion and say one uncertainty is ±1, and the other is ±0.1. Then the uncertainty of the two added together is √(1² + 0.1²) ≈ 1. This is the general rule of adding uncertainties in quadrature: the smaller one tends to vanish, so you only really need to worry about the bigger ones.

Now, what is the uncertainty if you add 100 things, each measured with a “resolution” of 0.1?

Your rules would say that as all have a 1/10th digit the answer can also have a 1/10th digit, implying the uncertainty of the sum is the same as the individual values. Whereas the better rules would say that the uncertainty increases with the square root of the number of things being added, so you only really know the units figure.

As I see it, the rules or guidelines are fine for what they are, just simple rules to be used in simple circumstances, but they don’t scale. In the class room you are only adding up or taking the average of a few things. The rules don’t help with knowing what the uncertainty of the average of 10000 things are. They don’t even consider issues like sampling, or non independent errors.
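The quadrature arithmetic in this comment can be verified directly (a minimal sketch; `quadrature_sum` is a hypothetical helper name, not from any referenced text):

```python
import math

def quadrature_sum(uncertainties):
    """Root-sum-square combination of the uncertainties of
    independent measurements that are being added together."""
    return math.sqrt(sum(u * u for u in uncertainties))

# Two terms: the smaller uncertainty all but vanishes.
print(round(quadrature_sum([1.0, 0.1]), 3))         # 1.005

# 100 values each ±0.1: the sum's uncertainty grows as sqrt(n),
# so it is 0.1 * sqrt(100) = 1.0, not 0.1.
print(round(quadrature_sum([0.1] * 100), 3))        # 1.0

# The mean of those 100 values, by contrast, has uncertainty
# sqrt(n)*u / n = u / sqrt(n) = 0.01.
print(round(quadrature_sum([0.1] * 100) / 100, 3))  # 0.01
```

The same root-sum-square shows why the smaller of two very unequal uncertainties can usually be ignored: √(1² + 0.1²) differs from 1 by only half a percent.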

Reply to  Bellman
July 12, 2023 11:17 am

Nice word salad, blob should be proud.

Reply to  Bellman
July 12, 2023 10:34 am

Once again you show that you live in an alternate universe from the rest of us.

“The point of replicating experiments isn’t to confirm them.”

That’s why we now know the speed of light to a much higher resolution than we did a century ago? Someone falsified the speed of light?

My virology PhD son would disagree with you. They do experiments all the time to confirm the results that others’ experiments produce. Not to falsify anything but to better understand the processes involved in the result! If this drug does this in this strain of mouse, does it do the same in a different strain? That isn’t falsifying *anything* having to do with the first experiment.

Reply to  old cocky
July 11, 2023 5:07 pm

If it’s a repeating decimal then you have infinite precision according to bellman’s theory.

Of course he’s going to say you have to pick some point at which to truncate it. But he doesn’t want to have to use the significant digit rules to determine where to truncate. Those “rules” are too limiting for a man of his talents with statistics.

Reply to  Tim Gorman
July 11, 2023 5:53 pm

If it’s a repeating decimal then you have infinite precision according to bellman’s theory.

If you are talking about a constant, yes, all constants rational or irrational have infinite precision. That’s got nothing to do with the uncertainty of a sample mean. The precision of that is determined by the SEM.

But he doesn’t want to have to use the significant digit rules to determine where to truncate.

If by SF rules, you mean the ones presented in the GUM or Bevington or Taylor, I do want to use them. If you mean the simplistic ones that are used as an approximation for actual uncertainty rules – as I may have said, I’m not a fan. But they are probably adequate for most purposes they are designed for. They just become senseless when people start insisting they mean that global annual average temperatures have to be rounded to the nearest degree.

Reply to  Bellman
July 12, 2023 6:33 am

“If you are talking about a constant, yes, all constants rational or irrational have infinite precision. That’s got nothing to do with the uncertainty of a sample mean. The precision of that is determined by the SEM.”

An average of a population can be a repeating decimal. Is it also infinitely precise? The SEM is meaningless if you know the population so it’s of no help at all in such a case.

“If by SF rules, you mean the ones presented in the GUM or Bevington or Taylor, I do want to use them. If you mean the simplistic ones that used as an approximation for actual uncertainty rules”

Both Taylor and Bevington reproduce the “simple” rules in their books! Word for word. So you can either use them or you can say they are wrong and you do *NOT* want to use them. But you can’t have it both ways!

You’ve just proven once again that you are a cherry-picker. You’ve never actually read either Taylor or Bevington for meaning on anything. You just cherry-pick things, take them out of context, and then complain when caught doing so.

Reply to  Tim Gorman
July 12, 2023 7:14 am

He can’t even be bothered to read the terminology sections in the GUM!

Reply to  Tim Gorman
July 12, 2023 7:35 am

An average of a population can be a repeating decimal. Is it also infinitely precise?

Obviously. That’s the point of the population mean. It’s the thing it is. It’s infinitely precise because if you had a god-like ability to actually know it, you would always get the same answer.

The SEM is meaningless if you know the population so it’s of no help at all in such a case.

Well yes. If you know the exact value of something there’s little point in estimating it.

Both Taylor and Bevington reproduce the “simple” rules in their books! Word for word.

Then quote the parts where they say word for word that an average cannot be reported to more digits than the measurements. All you ever do is point out that they say what I agree with, that the uncertainty should define how many digits you report. Not the digits of your measurements, the digits of the uncertainty.

Reply to  Bellman
July 12, 2023 4:40 pm

I am *NOT* your slave! I don’t exist to type in excerpts from the books for you to ignore on here and never read. I’ve done it often enough to know that you will *NOT* read them at all but will just continue with your asinine assertions.

I’ve given you the references from both Taylor and Bevington.

Taylor: Section 2.2
Bevington: Pages 1-6 (the very START of his book)

It is up to you to go read them and understand them. We all know you won’t do it. We all know it would put the lie to your assertion so you will simply ignore what they say in their books.

It’s up to *YOU* to prove they don’t discuss the reason for significant digits in metrology.

Reply to  Tim Gorman
July 12, 2023 6:02 pm

It’s up to *YOU* to prove they don’t discuss the reason for significant digits in metrology.

This is idiotic.

You claim that there is a rule that must be followed in all cases when taking an average, that the number of significant digits of the mean must be the same as that of the individual measurements.

I point out that if this is such an important rule it’s odd that none of the books on error analysis actually state this rule, nor does the very document dedicated to explaining how to express uncertainty.

You fail to show that any of them do, claiming it’s too much effort, then say it’s up to me to prove they don’t. How? By publishing the entire book?

Bevington: Pages 1-6 (the very START of his book)

He mentions significant figures on pages 4 and 5. He says what I keep saying, you have to let the uncertainty define the number of digits.

On page 54, Example 4.1, about the standard error of the mean, he has an experiment with a ball and a timer. The uncertainty of the timer is said to be ±0.020 s. From Example 1.2 and Figure 1.2, we can see that the measurements are reported to 2 decimal places. He makes 50 timings and takes the average at 0.635s with a standard deviation of 0.020s. The standard error of the mean is 0.020 / √50 = 0.0028. He quotes the result as 0.635 ± 0.003 s.

The uncertainty of the mean is less than the uncertainty of the individual measurements, and can be quoted to an extra decimal place, along with the uncertainty.

Taylor: Section 2.2

From that section:

Rule for Stating Answers

The last significant figure in any stated answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty.

Again, zero mentions of this special rule for averages.

And as I keep pointing out and you ignore, or do your usual special pleading, at least two exercises explicitly point out that they demonstrate that the result of an average can have more digits than the individual measurements.

You’ve already replied to this comment, but you just accused me of living in an alternative universe.

Exercise 4.15.

Given the three measurements in Problem 4.1, what should you state for your best estimate for the time concerned and its uncertainty? (Your answer will illustrate how the mean can have more significant figures than the original measurements.)

My emphasis.

4.17

(a) Based on the 30 measurements in Problem 4.13, what would be your best estimate for the time involved and its uncertainty, assuming all uncertainties are random? (b) Comment on the number of significant digits in your best estimate, as compared with the number of significant digits in the data.

Answer to 4.17

(a) (Final answer for time) = mean ± SDOM = 8.149 ± 0.007 s. (b) The data have three significant figures, whereas the final answer has four; this result is what we should expect with a large number of measurements because the SDOM is then much smaller than the SD.
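The Bevington arithmetic quoted above is easy to reproduce (a sketch using only the numbers given in the comment):

```python
import math

# Numbers quoted from Bevington's Example 4.1 above: 50 timings,
# mean 0.635 s, standard deviation 0.020 s.
n = 50
mean = 0.635
sd = 0.020

# The standard error of the mean shrinks as 1/sqrt(n), which is why
# the result carries one more decimal place than the raw timings.
sem = sd / math.sqrt(n)
print(f"{mean:.3f} ± {sem:.3f} s")    # 0.635 ± 0.003 s
```

The same calculation applied to the Taylor exercise quoted above (SDOM much smaller than SD for 30 measurements) is what yields the four-significant-figure 8.149 ± 0.007 s result.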

Reply to  Bellman
July 11, 2023 3:45 pm

This is stupid. We have provided you with web page after web page from university labs detailing the fact that an average cannot exceed the resolution of the measured quantities. This is formalized in Significant Digit rules that everyone who takes physical lab classes must follow.

If you can find a lab instruction, book, or text that lets you do as you claim, POST IT!

Listen carefully. I can tell from your silly posts that you have never been responsible for quality measurements. If you had taken this type of averaging to a boss, the first thing they would have asked is when we bought new high-precision measuring equipment. When you told him/her that we didn’t, that we just started averaging to more decimal points, you would have been fired on the spot.

Reply to  Jim Gorman
July 11, 2023 5:24 pm

He ran away from the terminology quiz, I wonder why.

Reply to  karlomonte
July 11, 2023 5:34 pm

I wonder why

Maybe it’s because I’m too busy today to respond to every 10000 word essay.

Reply to  Bellman
July 11, 2023 6:16 pm

Normally you whine about how short my posts are, now you whine about how long they are.

Hypocrite.

Reply to  karlomonte
July 11, 2023 7:38 pm

I point out that your posts are short and devoid of content. I’m pointing out that the long posts, not yours, take a very long time to reply to.

And it’s going to take even longer now, as I’ve just accidentally deleted a too long reply to Jim’s magnum opus.

Reply to  Bellman
July 11, 2023 8:39 pm

And so whinerman … whines.

Reply to  karlomonte
July 12, 2023 6:10 am

And so the troll, trolls, even as he demands I answer every question posed.

Also, the troll says

Which right after you wrote this:

You must stop using AI to write your comments.

Did you really expect Tim to take your question seriously?

Reply to  Bellman
July 12, 2023 7:16 am

You call me a troll — hilarious!

Go talk with the Forest Dept. this technical stuff ain’t working out for you.

N.B.: in this context, “troll” refers to anyone who dares to not give whinerman the respect he thinks he deserves.

Reply to  karlomonte
July 12, 2023 8:18 am

N.B.: in this context, “troll” refers to anyone who dares to not give whinerman the respect he thinks he deserves.

No, troll in this context means someone who posts 50 odd messages a day, consisting of nothing but one line sneering insults. Someone who even as he objects to being called a troll will call me “whinerman”.

Reply to  Bellman
July 12, 2023 8:40 am

Poor whinerman, so abused.

Reply to  Jim Gorman
July 11, 2023 6:58 pm

We have provided you with web page after web page from university labs detailing the fact that an average cannot exceed the resolution of the measured quantities.

Stating something as a fact is not detailing it. None of the pages you’ve shown actually deliver any kind of evidence for why you can’t do it. It’s just “it doesn’t feel right” or “because we say so”.

And don’t you think it’s odd that you keep relying on web pages for university labs as authorities, rather than any of the ones actually describing uncertainty and measurement, including the GUM, VIM or NIST. If these rules are so important, why do none of the documents tasked with explaining how to report uncertainty mention them?

If you can find a lab instruction, book, or text that lets you do as you claim, POST IT!

Taylor (2.9)

Rule for Stating Answers

The last significant figure in any stated answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty.

(Note Taylor wants the uncertainty to usually be stated to 1 significant figure unless the first digit is a 1, in which case he says to use 2.)

Section 3.4 and the infamous 200 sheets of paper example. Thickness of 200 sheets measured with an uncertainty of ±0.1″. Thickness of 1 sheet calculated by dividing by 200 is reported as 0.0065 ± 0.0005″

You can scream, correctly, that that is not strictly an average. But you still have to explain why it’s possible for one sheet of paper to be reported to 2 digits more than the original measurement, and then why that isn’t acceptable when reporting an actual average.

Exercise 4.15.

Given the three measurements in Problem 4.1, what should you state for your best estimate for the time concerned and its uncertainty? (Your answer will illustrate how the mean can have more significant figures than the original measurements.)

My emphasis.

4.17

(a) Based on the 30 measurements in Problem 4.13, what would be your best estimate for the time involved and its uncertainty, assuming all uncertainties are random? (b) Comment on the number of significant digits in your best estimate, as compared with the number of significant digits in the data.

Answer to 4.17

(a) (Final answer for time) = mean ± SDOM = 8.149 ± 0.007 s. (b) The data have three significant figures, whereas the final answer has four; this result is what we should expect with a large number of measurements because the SDOM is then much smaller than the SD.
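The sheets-of-paper arithmetic from Taylor's Section 3.4, cited above, can be checked in a couple of lines. Note the 1.3-inch stack thickness is an inference (it is the value that reproduces the quoted 0.0065 result); only the ±0.1 uncertainty and the factor of 200 appear in the comment:

```python
# Taylor's 200-sheets example: stack thickness assumed 1.3 ± 0.1 in
# (the 1.3 is inferred from the quoted per-sheet result of 0.0065).
stack, u_stack = 1.3, 0.1
n_sheets = 200

# Dividing a measurement by an exact count divides its uncertainty
# by the same factor.
sheet = stack / n_sheets        # 0.0065
u_sheet = u_stack / n_sheets    # 0.0005
print(f"{sheet:.4f} ± {u_sheet:.4f} in")   # 0.0065 ± 0.0005 in
```

Division by an exact constant is the mechanism at issue: the per-sheet value legitimately carries more decimal places than the ±0.1-inch stack measurement, because the uncertainty shrank by the same factor of 200.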

Reply to  Bellman
July 12, 2023 6:41 am

“Stating something as a fact is not detailing it. None of the pages you’ve shown actually deliver any kind of evidence for why you can’t do it. It’s just “it doesn’t feel right” or “because we say so”.”

You simply don’t get it. It’s so that people reading your result won’t believe you are making measurements at a higher resolution than you actually have!

How hard of a concept is that to grasp? It’s not a feeling or “because we say so”. It’s because it has real world consequences if people think, for instance, that the manufacturing tolerances in a product are tighter than they actually are! It could actually cause death and destruction in the real world, like a bridge collapse because someone thought you measured shear strength to a precision that was past your ability to actually discern!

But we all know you live in an alternate universe. I guess death and destruction don’t mean anything there.

Reply to  Tim Gorman
July 12, 2023 7:23 am

His latest line about the GUM being the product of “engineers”-only is beyond incredible.

Reply to  karlomonte
July 12, 2023 8:10 am

Could someone get karlo a “sense of humor” implant with due haste.

Literally all I said was that “engineers” could use whatever term they wanted. This minor joke is apparently regarded now as some blasphemy against the sacred GUM, because apparently it implies that the GUM was written by engineers and not something that fell from the heavens on a golden cloud.

Reply to  Bellman
July 12, 2023 11:22 am

Oy vey! He tell yok-ah, Ho ho ho!

Saturday Night Live needs scab joke writers.

And again, just who do you think wrote the GUM?

old cocky
Reply to  Bellman
July 10, 2023 7:28 pm

Getting back to your original example, why is the mean of {1, 2, 3, 4, 5, 6} more precise than the mean of {2, 4, 6, 8, 10, 12}?

Reply to  old cocky
July 10, 2023 7:49 pm

It isn’t. Both are equally precise, at least in the measurement definition of the word. If you just mean in the sense of how many digits you are giving, again it’s just a matter of how you write it or which base you use. In this case both are exact numbers so have an infinite number of significant figures. 3.5 or 7.0, or 3.500 and 7.000. No difference.

old cocky
Reply to  Bellman
July 10, 2023 8:27 pm

So 3.5 and 7 are to the same precision?

There’s no measurement involved in discrete values, just a count (cue muppet with cape and pointy teeth). A mean isn’t a measurement, either. It’s a calculated statistical measure of centrality.

Reply to  old cocky
July 11, 2023 5:20 am

So 3.5 and 7 are to the same precision?

In this case yes. To be clear I’m talking about the standard VIM definition of precision: the variability in repeated measurements, or some such. But I expect this is getting confused with the traditional meaning of the number of decimal places in a number, which is more to do with resolution.

Reply to  Bellman
July 11, 2023 6:26 am

BZZZZZT.

“standard VIM definition of precision” — which you also think contains mistakes and errors!

Reply to  karlomonte
July 11, 2023 1:06 pm

Which mistakes and errors do you imagine I’m claiming the VIM makes?

Reply to  Bellman
July 11, 2023 5:26 pm

Get some clues PDQ, you need help.

It was YOU who claimed the terminology in the GUM is wrong.

Backpedaling now?

Reply to  karlomonte
July 11, 2023 5:32 pm

The VIM is not the one claiming the term Standard Error is incorrect.

Reply to  Bellman
July 11, 2023 6:17 pm

Duh!

Go read what I spoon fed you, dolt.

Reply to  Bellman
July 12, 2023 4:28 am

The VIM 2021 doesn’t even mention “Standard Error”! No wonder it doesn’t call it incorrect!

Reply to  Jim Gorman
July 12, 2023 6:05 am

Yes, that’s the point. I was accused of disagreeing with the VIM.

Reply to  Bellman
July 12, 2023 7:24 am

No you were not; if you had eyes to read you might have realized thus, prior to taking another leap into the oily blackness.

Reply to  old cocky
July 11, 2023 7:24 am

You’ll never convince bellman of this. To him the average *is* a measurement.

Reply to  Tim Gorman
July 11, 2023 12:56 pm

I keep saying I don’t care one way or another if you define it as a measurement or not. All I’ve said is that if you insist it isn’t a measurement, then trying to apply the concept of measurement uncertainty to it is a waste of time.

Reply to  Bellman
July 11, 2023 1:31 pm

Why do you insist on never reading what people say? The mean uncertainty in an instrument type because of design (an average) *is* useful. But it is a *factor* in the overall uncertainty estimate, not a measurement of a measurand.

Reply to  Bellman
July 12, 2023 6:05 am

You simply refuse to understand the difference between a measurement and a statistical treatment of a set of measurements.

I can go out and measure the amount of wear left on 50 rear tractor tires. I have a set of 50 measurements. I can treat those statistically and find a mean and standard deviation. I can even calculate an SEM for that set of measurements.

Yet the mean is NOT a measurement. The mean is simply nothing more than a description of the central tendency of the set of measurements. Can this mean have an uncertainty? Certainly. Each of my measurements will have an uncertainty consisting of several different things.

The mean of a set of temperature measurements from different devices is nothing more than a set of tire wear measurements. It is not a measurement in and of itself. The mean is simply the value of the central tendency.

Yet the mean is calculated from values that do have uncertainty. That uncertainty must be propagated into the calculated mean. You can’t just say there is no uncertainty because it is not a measurement in and of itself.

Reply to  Jim Gorman
July 12, 2023 6:25 am

Succinct and to the point. Of course it will go right over bellman’s head.

Reply to  Jim Gorman
July 12, 2023 7:49 am

You simply refuse to understand the difference between a measurement and a statistical treatment of a set of measurements.

Quite the reverse. You are the ones claiming that a mean has to be treated like a measurement. You insist that the precision of a mean can only be the same as any individual measurement’s. I’m saying the mean can have a higher precision because it isn’t a measurement as such.

That uncertainty must be propagated into the calculated mean.

Nobody is saying that isn’t the case. The argument is about what that propagation is: some say it’s the same as the sum and therefore grows with sample size; others say it’s the same as the average uncertainty and therefore stays the same with increasing sample size; and the rest (including me) say that in general it reduces with sample size.

But what we also say is that the uncertainty caused by taking a random sample from a population is usually much bigger than the measurement uncertainty, which makes the latter largely irrelevant.
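[The three competing rules in this exchange can be made concrete. A sketch assuming n measurements that share a common standard uncertainty u; the function names and the numbers are mine, chosen only to show how differently the three rules scale.]

```python
import math

# Three competing rules for the uncertainty of a mean of n measurements,
# each measurement having the same standard uncertainty u.
def u_as_sum(u, n):
    """Uncertainty treated as the direct sum: grows linearly with n."""
    return n * u

def u_as_average(u, n):
    """Uncertainty treated as the average per-measurement uncertainty:
    independent of n."""
    return u

def u_of_mean(u, n):
    """Quadrature rule for independent random errors: shrinks as 1/sqrt(n)."""
    return u / math.sqrt(n)

u, n = 0.5, 100
print(u_as_sum(u, n), u_as_average(u, n), u_of_mean(u, n))  # 50.0 0.5 0.05
```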

Reply to  old cocky
July 11, 2023 12:17 pm

Count Count!

old cocky
Reply to  karlomonte
July 11, 2023 4:44 pm

It’s Count von Count, you heathen.

Reply to  Bellman
July 10, 2023 7:38 pm

You just asserted that every physical measurement lab using Significant Figure Rules doesn’t know what it is doing! Not physics labs. Not chemistry labs. Not electronics labs. Not commercial labs. And not even NIST.

Congratulations.

Reply to  Jim Gorman
July 10, 2023 8:03 pm

I’ve been telling you for years I don’t agree with the “rules” as a set of arbitrary commandments. They may well have their place in those sorts of environments, but if they are using the rules as a means of figuring out the uncertainty in situations involving hundreds or thousands of measurements – then yes I think they are wrong. They need to listen to statisticians or metrologists, as appropriate, especially if talking about averaging a large sample.

And, not even NIST.

Could you point me to the NIST list of rules for significant digits? All I could find was

https://www.nist.gov/system/files/documents/2019/05/14/glp-9-rounding-20190506.pdf

which makes no mention of the rules you keep going on about, just the usual round the uncertainty to 2 significant digits and report the value to the same order.
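[For what it’s worth, the convention described there (round the uncertainty to two significant digits, then report the value to the same decimal place) is easy to mechanize. A sketch; the function name and the sample numbers are mine, not from the NIST document.]

```python
import math

def report(value, uncertainty):
    """Round the uncertainty to two significant digits, then round the
    value to the same decimal place (the GLP-9-style convention)."""
    # Decimal exponent of the uncertainty's leading digit.
    exp = math.floor(math.log10(abs(uncertainty)))
    decimals = 1 - exp  # keeps two significant digits of the uncertainty
    return round(value, decimals), round(uncertainty, decimals)

print(report(12.3456, 0.0237))  # (12.346, 0.024)
```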

Reply to  Bellman
July 10, 2023 4:31 am

“You take a document that talks about repeatability in a laboratory setting, but then use it to make claims about statistical field experiments.”

Measurements are measurements be they in a lab or in the field. The difference is that in the lab you can many times, not always but many times, make multiple measurements of the same thing. In the field, you get one chance at measuring temperature – no multiple measurements of the same thing.

“it’s only talking about how to get a better estimate of the measurement of a single thing by repeated measurements.”

Correct. Something you seem to ignore when talking about the global average temperature calculation.

“Yet somehow draw the conclusion it’s saying the idea of a confidence interval isn’t even applicable without repeated measurements.”

It isn’t applicable when you are measuring different things! The average of different things is nothing but a mathematical calculation leading to no actual knowledge of the set of different things.

The average height calculated from a set consisting of the heights of Shetland ponies and Arabian horses is useless. It doesn’t exist. You have a bi-modal distribution. It’s no different with temperatures in the SH combined with temperatures in the NH – a multi-modal distribution whose average value is physically meaningless. Each has a different variance as well as a different value, making the use of anomalies a fool’s errand.
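[The pony/horse point is easy to reproduce numerically. A sketch with hypothetical heights in hands; the specific values are mine.]

```python
import statistics

# Hypothetical heights in hands: Shetland ponies cluster near 10,
# Arabian horses near 15 -- a bimodal pooled data set.
ponies = [10.0, 10.2, 10.4, 10.1, 10.3]
horses = [15.0, 15.2, 15.4, 15.1, 15.3]
pooled = ponies + horses

pooled_mean = statistics.mean(pooled)
print(pooled_mean)  # 12.7 -- a height no animal in the set actually has
```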

“Which particular range are you talking about?”

It’s pretty obvious what they are talking about. It’s your reading ability that is lacking. It is the range of values that determines the uncertainty. Did you not look at the table at all? Probably not, you never read for meaning, only for cherry-picking possibilities.

“You realize this is exactly what I’ve been trying to explain to you. The SEM is directly related to the SD of a data set.”

Malarkey! The SEM is the SD of the SAMPLE MEANS. That is *NOT* the variance or SD of the population. You keep on making the mistake of equating the two no matter how often it is explained to you!

” It’s just you keep insisting it’s impossible to derive the uncertainty of a mean from just one sample.”

That’s because the SEM is the SD of sample MEANS. If you only have one sample then the CLT does not guarantee that it is Gaussian at all. *YOU* just assume that one sample can perfectly describe the population. The only way to calculate the SEM is to have multiple samples – then the CLT comes into play.
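[Whether one sample can stand in for many is checkable by simulation. A sketch (the Gaussian population, seed, and sample sizes are all hypothetical choices of mine) that computes the SD of many sample means by brute force and sets it beside the single-sample estimate s/√n.]

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical Gaussian population: mean 15, SD 5.
population = [random.gauss(15.0, 5.0) for _ in range(100_000)]
n = 50

# SEM by brute force: the SD of many independent sample means.
means = [statistics.mean(random.sample(population, n)) for _ in range(2000)]
sem_empirical = statistics.stdev(means)

# SEM estimated from a single sample: s / sqrt(n).
one_sample = random.sample(population, n)
sem_single = statistics.stdev(one_sample) / math.sqrt(n)

# For random sampling both should sit near 5 / sqrt(50), about 0.71.
print(sem_empirical, sem_single)
```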

“Really? You think the resolution in the thermometers means there is no difference in the measurements”

You honestly do not understand Pat’s paper at all. He has tried to explain it to you multiple times and you just stubbornly refuse to listen. Resolution *is* a factor in the repeatability of measurements. The problem is that you are never measuring the same thing when it comes to temperature measuring stations in the field.

“That all temperatures recorded anywhere and anytime are always the same, becasue the resolution of the LiG can’t detect the difference between 10°C and 20°C?”

All you are doing is highlighting your lack of understanding concerning Pat’s paper. The fact that a measuring device has a variation detection limit does *NOT* mean that all similar devices read the same. I have no idea where you got this idea.

“What (3) is saying is what I keep saying – that repeated measurements of the same thing will not help you if you get the same reading each time.”

MALARKEY! Field temperature measurements are *NOT* measurements of the same thing. And each single measurement of different things will be biased (systematic bias) by its detection limit.

Why do you *INSIST* on looking at things as always being multiple measurements of the same thing? It’s obvious that your belief that all uncertainty is random, Gaussian, and cancels drives such a worldview and you seem to be incapable of abandoning it.

Detection limits *are* a factor in multiple measurements of the same thing, even in the lab. Detection limits are derived from the physical limits of the measuring device. They have to be included in uncertainty calculations. You DO NOT KNOW what you do not know. You seem to be unable to grasp the concept of DO NOT KNOW. It biases your entire view of uncertainty.
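[The detection-limit point can be illustrated with a toy quantizer: once a reading is rounded to the instrument’s resolution, repeating the identical measurement and averaging cannot restore the lost digits. The 0.5-degree resolution and the true values below are hypothetical.]

```python
def read(true_value, resolution=0.5):
    """Simulate an instrument that can only report multiples of its resolution."""
    return round(true_value / resolution) * resolution

# Two different true temperatures produce the identical reading:
print(read(20.10), read(20.24))  # 20.0 20.0

# Repeated readings of the same unchanging thing are all identical,
# so their average is still just the quantized value.
repeats = [read(20.10) for _ in range(100)]
print(sum(repeats) / len(repeats))  # 20.0
```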

Reply to  Tim Gorman
July 10, 2023 4:32 pm

The average height calculated from a set consisting of the heights of Shetland ponies and Arabian horses is useless.

Then stop doing it. Honestly, you can come up with useless calculations all day. It says nothing about the useful calculations.

It’s pretty obvious what they are talking about.

I asked what you were talking about, not them. As I said, they use range in two different senses.

It is the range of values that determines the uncertainty.

So the standard deviation then.

Did you not look at the table at all?

My mistake. You mean the range of the data set. Not sure why you find that impressive though. It’s just making a less exact estimate of the SEM without bothering to calculate the standard deviation.

They say calculating the SD can be tedious, which may be true if you are doing it by hand. But it’s trivial with a computer. I’m just surprised you want to use this quick and dirty approach, when before you were insisting the only way to get the SEM was to take many separate samples.
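[The shortcut under discussion (estimating the spread from the range rather than computing the SD) looks like this in miniature. The divisor 4 is a common rule of thumb; the exact conversion factor in a handout depends on sample size, and the data here are hypothetical.]

```python
import statistics

# Hypothetical repeated readings.
data = [9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 10.3, 9.7]

sd_exact = statistics.stdev(data)           # the "tedious" exact calculation
sd_range = (max(data) - min(data)) / 4      # quick range-based estimate

print(round(sd_exact, 3), round(sd_range, 3))  # 0.245 0.175
```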

The SEM is the SD of the SAMPLE MEANS. That is *NOT* the variance or SD of the population

You missed the bit where they take the SD and divide by root N.

Reply to  Bellman
July 11, 2023 3:19 am

“My mistake. You mean the range of the data set. Not sure why you find that impressive though. It’s just making a less exact estimate of the SEM without bothering to calculate the standard deviation.”

You *still* don’t get it. The SEM is *NOT* the uncertainty of a measurement. It is how precisely you have identified the mean of a set of data. You can make the SEM vanishingly small and *still* not know the accuracy of the average! The accuracy of the measurements is *NOT* the SEM. Write that down 1000 times. Maybe you’ll remember it, but I doubt it.

If you have the standard deviation of the population then it means you already know the average of the population so the SEM becomes meaningless.

It’s obvious you didn’t bother to read the paper. It just shows with every post you make. When you can only make one measurement you can estimate the uncertainty associated with it using the table. That has nothing to do with the SEM.

STOP CHERRY-PICKING and actually *study* something for once!

“They say calculating the SD can be tedious, which may be true if you are doing it by hand.”

This is a STUDENT’S handout. You *want* students to learn how to do it manually so they understand what the computer is doing. You are a prime example where that has not happened for you. You’ve not worked out one single exercise in Taylor or Bevington. All you do is cherry-pick stuff you think you can fool people with on your baseless assertions. You can’t really understand the difference between SEM and population accuracy till you’ve done the work! And you’ve never done the work!

Reply to  Tim Gorman
July 11, 2023 6:38 am

He can’t even make it past the terminology, yet now he’s lecturing Pat Frank about being “wrong”.

Reply to  Bellman
July 6, 2023 6:17 am

Another climastrology psychic that “knows” how I “view everything”.

This is lame, even for you, whinerman.

Reply to  Bellman
July 6, 2023 2:18 pm

“But in my world there is nuance.”

A plaintiff lawyer would love for you to use the word “nuance”!

“Well just how did you NUANCE the measurement?”

“Why did you feel the need to NUANCE a measurement?”

“Did you ensure your NUANCED measurement would not influence the end result such that your design would or could be considered negligent?”

“Did you ensure your NUANCED measurement would not influence the end result such that your design would cause the device not to meet specified requirements?”

“Why did you not follow internationally accepted design methods?”

You may think these are made-up hypotheticals, but I assure you they happen often. You have obviously never taken an engineering ethics course if you just make nuanced decisions based on your gut.

Reply to  Jim Gorman
July 6, 2023 2:31 pm

Fortunately I’m not on trial at the moment, and it isn’t a crime to disagree with things said in the GUM.

You have obviously never taken any engineering ethics courses, if you just make nuanced decisions based on your gut.”

Seriously? You think it’s unethical to use the term “standard error”?

Reply to  Bellman
July 6, 2023 2:46 pm

You just made my point! You obviously support making poor people poorer through higher costs of energy.

You have no skin in the game. If you are wrong, so what, right?

“You think it’s unethical to use the term ‘standard error’?”

Like Willis, quote my words. If you read my post, I never used the words, “standard error”.

As usual you never understand simple ideas. Nuance is not a term in use in engineering when discussing measurements. Conservatism all the way in values and their uncertainty!

Reply to  Jim Gorman
July 6, 2023 4:39 pm

You obviously support making poor people poorer through higher costs of energy.

Pathetic. If you can’t argue your case without this level of ad hominem, then I doubt you have much confidence in it.

Like Willis, quote my words. If you read my post, I never used the words, “standard error”.

My point was that I’m being attacked for daring to say I disagreed with something in the GUM, at the same time being attacked for worshiping the GUM, when the only thing I’ve disagreed with is them saying it’s incorrect to use the term “standard error”. You now suggest it’s unethical to have a nuanced opinion on the GUM.

Nuance is not a term in use in engineering when discussing measurements.

Maybe it should be. It doesn’t make sense to be too certain about uncertainty.

Conservatism all the way in values and their uncertainty!

You should resist the temptation to increase the error estimate “just to be sure”.

Reply to  Bellman
July 6, 2023 5:57 pm

Quote the words I said, not what you think they mean. I never said “increase the error estimate”.

I said, “Conservatism all the way in values and their uncertainty!”. I don’t know how you misinterpreted that. I can tell you do not have physical lab training.

For your information!

Conservatism in values means properly using Significant Figure rules to ensure all people know the resolution of what was measured. It means not “adjusting” data to suit your preconceived ideas, and instead calling the data not fit for purpose.

It means not using averages to increase reported resolution. That is the most egregious thing I have ever heard. It requires the creation of information that simply does not exist. It is what Pat Frank tried to show. If you have ten numbers whose measurements have one decimal digit, that is all a sum, division, multiplication, or subtraction can support. Why? Because any additional decimal digits exist within the uncertainty interval. There is no mathematical operation that can act like a crystal ball to determine what the value should be.

Reply to  Jim Gorman
July 6, 2023 6:20 pm

Sometime in the past he claimed to have “spent years in the software industry”, which could mean many different things.

He’s either as clueless and obtuse as the posts indicate, or he’s an agent provocateur for the IPCC clowns.

Reply to  karlomonte
July 6, 2023 6:32 pm

I have done my share of programming. Fortran, Cobol, Basic, Unix tools, and C.

Programmers are number manipulators. Everything is numbers, even characters. No concept of measurements, they are just another number to be manipulated.

Reply to  Bellman
July 5, 2023 6:01 pm

You still believe “the error can’t be that big!”

You look at the words in the GUM but don’t understand what they mean. You can’t even get past the terminology!

Reply to  karlomonte
July 5, 2023 6:48 pm

Do you just post these randomly? This has nothing to do with my comment. But if you insist:

You still believe “the error can’t be that big!”

If you mean the errors in this paper, I can easily believe it, especially having spent some time conversing with the man.

If you mean the uncertainty intervals in his paper, then yes I find it difficult to believe. They just have no relation to the actual data.

But I wouldn’t want to let you think that is the only reason. I think the assumptions are wrong as well.

You look at the words in the GUM but don’t understand what they mean. You can’t even get past the terminology!

What bit of

“Experimental standard deviation of the mean” is sometimes incorrectly called standard error of the mean.

Do you think I misunderstood?

Reply to  Bellman
July 5, 2023 6:55 pm

Do you think I misunderstood?

I think you’re an agent for the IPCC communists.

Reply to  Bellman
July 6, 2023 5:42 am

“They just have no relation to the actual data.”

Really? You have not shown one iota of proof that LIG thermometers do *NOT* suffer from the uncertainties inflicted on the measurements by the apparatus design.

Yet you can sit there and say that they have no relationship to the actual data? That’s not fact based, it is subjective opinion based on a complete lack of knowledge of the subject and biased by cult dogma from the CAGW religion.

Reply to  Tim Gorman
July 6, 2023 6:20 am

All he has is the magic of averages which allows him to see things that aren’t there.

Reply to  karlomonte
July 6, 2023 2:49 pm

You’re almost getting it. Statistics can be a magical tool, much like a microscope, which allows you to detect things that are there, just invisible to the naked eye.

Reply to  Bellman
July 6, 2023 6:22 pm

Your yardstick-micrometer remains a fantasy that exists only inside your skull.

Reply to  karlomonte
July 6, 2023 6:32 pm

You realize this is a strawman that exists only wherever you keep your brain?

Reply to  Bellman
July 6, 2023 8:22 pm

Hardly, you are the pseudoscientist who believes it is possible to increase resolution via the magic of averaging.

Are you backtracking now?

Embrace your identity!

Reply to  Bellman
July 7, 2023 3:14 am

Except a microscope sees things that exist in reality!

Reply to  Tim Gorman
July 6, 2023 2:33 pm

Calibration is done in a controlled environment. Constant air temperature, humidity, air pressure, air movement, immersion times, etc. As soon as they are placed in a screen in the environment all those careful conditions go out the window.

That simply must cause some difference in systematic uncertainty.

Reply to  Tim Gorman
July 6, 2023 3:02 pm

You have not shown one iota of proof that LIG thermometers do *NOT* suffer from the uncertainties inflicted on the measurements by the apparatus design.

I’m not saying anything about that. I’m assuming he’s got that bit correct – it doesn’t seem an unreasonable level of uncertainty to me. The data I’m talking about is the global average anomalies, either for a month or year. That’s where the claimed uncertainties are just reflected in the data. I don’t care how ignorant you are, there is no way it makes sense to claim the interval is 3 – 4°C for an annual global average.

cult dogma from the CAGW religion.

You do realise how much of a tell it is when you say things like that?

Reply to  Bellman
July 6, 2023 4:02 pm

“The data I’m talking about is the global average anomalies, either for a month or year. That’s where the claimed uncertainties are just reflected in the data. I don’t care how ignorant you are, there is no way it makes sense to claim the interval is 3 – 4°C for an annual global average.”

In other words you believe that all uncertainty cancels and you don’t need to worry about it affecting a calculated average!

When you subtract two anomalies their uncertainties cancel, right?
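[For reference, standard propagation for a difference of two independent measurements adds the uncertainties in quadrature; it does not cancel them. A minimal sketch, with hypothetical ±0.5 °C standard uncertainties assumed for both the reading and the baseline.]

```python
import math

def u_difference(u_a, u_b):
    """Standard uncertainty of y = a - b for independent a and b:
    u(y) = sqrt(u_a**2 + u_b**2)."""
    return math.sqrt(u_a ** 2 + u_b ** 2)

# Hypothetical anomaly: monthly value minus baseline, each with u = 0.5 C.
print(round(u_difference(0.5, 0.5), 3))  # 0.707, larger than either input
```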

Reply to  Tim Gorman
July 6, 2023 6:23 pm

Yep!

Reply to  Bellman
July 6, 2023 6:23 pm

I don’t care how ignorant you are, there is no way it makes sense to claim the interval is 3 – 4°C for an annual global average.

You’re a total idiot if you really believe this (and it isn’t a front for something).

old cocky
Reply to  Bellman
July 5, 2023 8:32 pm

Here are the key differences between the two:

Standard deviation: Quantifies the variability of values in a dataset. It assesses how far a data point likely falls from the mean.

Standard error: Quantifies the variability between samples drawn from the same population. It assesses how far a sample statistic likely falls from a population parameter.

Yep. Within sample (or entire population) vs between samples.

Ambiguity strikes again…

According to https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/06%3A_Sampling_Distributions/6.01%3A_The_Mean_and_Standard_Deviation_of_the_Sample_Mean the SEM is the standard deviation of the sample means, hence the intra-sample standard deviation of the value of interest (in this case the mean)

Reply to  Bellman
July 4, 2023 5:17 pm

What do you think “standard error” *IS*? Even the GUM says the standard deviation of the mean is sometimes incorrectly called the standard error of the mean. And you refuse to admit to this.

And no one has said *anything* about exaggerating the uncertainty interval “just to be sure”. You are delusional! Instrumental uncertainty is *NOT* exaggerating the uncertainty interval!

Reply to  Tim Gorman
July 4, 2023 7:01 pm

What do you think “standard error” *IS?

Well in Bevington, it’s being used to mean what the GUM would call standard uncertainty.

Even the GUM says the standard deviation of the mean is sometimes incorrectly called the standard error of the mean. And you refuse to admit to this.

I admit that’s what the GUM says. I just think they are wrong. They never justify or cite reasons why it’s incorrect to call it the standard error of the mean. They just don’t seem to like it.

But, as I said in another comment, Bevington is not talking about the standard error of the mean here.

And no one has said *anything* about exaggerating the uncertainty interval “just to be sure”.

Bevington was the one who said it: “He should resist the temptation to increase this error estimate, ‘just to be sure.’”

I just think it’s an important point. I can’t think why you are so upset about it.

Reply to  Bellman
July 4, 2023 5:12 pm

You left out the fact that he was talking about repeated measurements OF THE SAME THING. Which global temperatures are *NOT*.

And Bevington *is* saying that uncertainty is not error. How you get that he is doing so is just beyond belief!

He states in his book: “Error is defined by Webster as “the difference between an observed or calculated value and the true value”. Usually we do not know the “true” value; otherwise there would be no reason for performing the experiment.”

If you don’t know the true value then there is no way to determine *error*. What you *can* specify is the interval within which the true value may lie. That is *NOT* error it is UNCERTAINTY. They are NOT the same thing. One is based on knowing, the other is based on *not* knowing. Universes apart.

You’ve spent over two years trying to refute this simple and basic fact of metrology. “Not knowing” is different from “knowing”. And you simply refuse to understand that.

bdgwx
Reply to  Pat Frank
July 4, 2023 9:17 am

PF: All the surface air temperature compilations use the same data

There is overlap for sure, but they do not use the same data. And in the case of reanalysis they use wildly different observations and methodologies.

PF: The only valid UAH-surface temperature comparisons would employ raw data.

UAH does not measure the surface temperature; a fact evidenced by its ~263 K baseline.

Reply to  bdgwx
July 4, 2023 9:54 am

So, are you implying that the UAH trend is incorrect? I’ve not seen any papers that refute the accuracy of the trend, only that the absolute temperatures don’t agree with surface temps. Do you have a paper that refutes the trend? You should know that NOAA STAR now has essentially the same results as UAH.

Reply to  bdgwx
July 4, 2023 11:41 am

You are quite simply wrong about this 263K number (probably another meaningless average).

Reply to  karlomonte
July 4, 2023 12:13 pm

For the dweeb who mindlessly downvoted my comment, here is the UAH baseline temperature distribution for August:

Aug UAH baseline.jpg
Reply to  bdgwx
July 4, 2023 5:22 pm

There are only so many temperature measurement stations out there. EVERYONE uses the same data – the data those stations collect.

People may use widely different methodologies but they only have available what the temperature measurement stations collect, at least for the global average temperature. Harvard, MIT, etc don’t each have a network of measurement stations in Brazil, or Vietnam, or even in the US.

Reply to  Monckton of Brenchley
June 30, 2023 8:01 pm

Great response about a stunning finding by Dr. Frank. Climate science has been enchanted by statistics without the ability to deal with the underlying issues.

Reply to  Monckton of Brenchley
July 4, 2023 8:12 am

Professor Karl Wunsch “could find no fault with the paper.”

You sure?

Carl Wunsch commented September 2019
I am listed as a reviewer, but that should not be interpreted as an endorsement of the paper. In the version that I finally agreed to, there were some interesting and useful descriptions of the behavior of climate models run in predictive mode. That is not a justification for concluding the climate signals cannot be detected! In particular, I do not recall the sentence “The unavoidable conclusion is that a temperature signal from anthropogenic CO2 emissions (if any) cannot have been, nor presently can be, evidenced in climate observables.” which I regard as a complete non sequitur and with which I disagree totally.
The published version had numerous additions that did not appear in the last version I saw.
I thought the version I did see raised important questions, rarely discussed, of the presence of both systematic and random walk errors in models run in predictive mode and that some discussion of these issues might be worthwhile.
CW”

Italics mine.

https://pubpeer.com/publications/391B1C150212A84C6051D7A2A7F119

Truth be told, this paper was the peer-reviewed equivalent of Michael Jackson’s baby dangling. Harmless eccentrics might be interesting, even informative. But as David Letterman said of Michael, post child endangerment, “Gosh guys. I think Michael might be getting a reputation as an oddball.” Hence the avoidance of both Michael and Dr. Frank above ground since then…

Reply to  bigoilbob
July 4, 2023 12:46 pm

You always create new ways to demonstrate your ignorance, bob. With a dash of dishonesty on the side.

Guess what this means: “there were some interesting and useful descriptions of the behavior of climate models run in predictive mode.”

That’s Prof. Wunsch agreeing with the analysis: propagating error through air temperature projections.

Doing so provided, “some interesting and useful descriptions of the behavior of climate models run in predictive mode.” Namely that climate models have no predictive value.

The published version had numerous additions that did not appear in the last version I saw.

The review continued through a third round, one more after Carl Wunsch had signed off on the manuscript and recommended publication.

Further changes were requested by reviewer #2 in the third round. Of course Prof. Wunsch did not see them, as he had already signed off.

which I regard as a complete non sequitur and with which I disagree totally.

In PubPeer comment #7, I pointed out that last sentence was prominent in every single version Prof. Wunsch saw.

But in your commitment to honesty, you missed comment #7, didn’t you bob. Given your level of discernment, one supposes you didn’t look past #5.

v.1 final sentences: “Even advanced climate models exhibit poor energy resolution and very large projection uncertainties. The unavoidable conclusion is that an anthropogenic temperature signal cannot have been, nor presently can be, evidenced in climate observables.”

v.2 final sentences: “Any impact from GHGs will always be lost within the uncertainty interval. Even advanced climate models exhibit poor energy resolution and very large projection uncertainties. The unavoidable conclusion is that an anthropogenic temperature signal cannot have been, nor presently can be, evidenced in climate observables.”

v.3 final sentences: “Any impact from GHGs will always be lost within the uncertainty interval. Even advanced climate models exhibit poor energy resolution and very large projection uncertainties. The unavoidable conclusion is that a temperature signal from anthropogenic CO2 emissions (if any) cannot have been, nor presently can be, evidenced in climate observables.”

Every version. And as the final sentence of the Abstract. Every version of it.

And if Prof. Wunsch, or anyone else, thinks the last sentence is a non-sequitur, given a demonstration of the utter predictive impotence of climate models, then I’d like to know how he, or anyone else, plans to show a causal effect of CO₂ on climate observables.

Your blind oppositional opportunism drops you into a slime pit every time, bob. Where you seem quite at home.

Reply to  Pat Frank
July 5, 2023 9:39 am

I read all of the comments. Yes, Dr. Wunsch might very well be covering up for his carelessness.

https://www.amazon.com/PROBLEM-SOLVING-FLOW-CHART-Notebook/dp/B09K1HRHZW

But the fact remains that as soon as he realized how badly he effed up he ran away from you.

But I must admit, you’re pretty much hermetically sealed against criticism beyond the inside of your eyelids. Even within this fawning forum you deflect. And you don’t meet head-on the (pretty much universal) criticism from outside, save for your self-assertions that you “won the argument”. The fact that this paper will be functionally ignored in superterranea will also be poo-pooed by you, with you comparing yourself to Einstein or Maxwell. In fact, you deserve your own version of the linked flowchart. It’s supposed to rain here this afternoon and I can work on it for you…

Reply to  bigoilbob
July 5, 2023 10:09 am

Clown.

Reply to  bigoilbob
July 5, 2023 10:39 am

“which I regard as a complete non sequitur and with which I disagree totally.”

You are making an argumentative fallacy in the way you are using this statement.

You fail to include in your response that the statement you reference was not echoed by the other reviewers; therefore the truth of the statement is in doubt.

You would fail a high school debate by not acknowledging that the other reviewers accepted the conclusion.

From an accepted source of argument judging.

https://www.grammarly.com/blog/appeal-to-authority-fallacy/

“The appeal to authority fallacy is the logical fallacy of saying a claim is true simply because an authority figure made it.”

Reply to  Jim Gorman
July 5, 2023 11:27 am

Logical fallacies are pretty much the only ammunition the GAT hoaxers have in the tank.

Reply to  bigoilbob
July 5, 2023 1:35 pm

“But the fact remains that as soon as he realized how badly he effed up he ran away from you.”

Willful dyslexia.

I reiterate Prof. Wunsch: “there were some interesting and useful descriptions of the behavior of climate models run in predictive mode.”

That is Prof. Wunsch agreeing with the analysis: propagating error through air temperature projections. Which demonstrates they’re predictively useless.

He made no mistake. He did not run away from his review or from my paper.

Prof. Wunsch merely dissociated himself from, “The unavoidable conclusion is that a temperature signal from anthropogenic CO2 emissions (if any) cannot have been, nor presently can be, evidenced in climate observables.” which appeared in all versions of the manuscript; the last line of the Abstract and the final sentence.

The rest of your comment is a rant of lies — (don’t meet … criticism … self assertions … comparisons with blah, blah) all refutable by mere inspection of the record — and the bitter insults of someone who has lost the debate and is in consequence losing the bit of his mind that might have been rational.

It was already a stretch — you bringing up the old discussion of Propagation… to discredit LiG Met. Truly a fool’s errand.

June 29, 2023 6:32 am

Profound thanks again to Anthony and Charles for publishing my work and providing a forum for free speech about science.

Reply to  Pat Frank
June 29, 2023 7:58 pm

I am sure that many of us are glad to have had your work drawn to our attention. It appears to be thorough and rigorous and reaches the parts that others fail to reach. Many congratulations.

Ever since I encountered photographic thermometers, designed to measure darkroom chemical temperatures with gradations marked at 0.2C intervals I have wondered whether weather thermometers shouldn’t have been constructed to similar designs. Of course the photographic thermometer covers a much more limited temperature range, so there would have to be several to cover the span of real world temperatures, and the consequences of a thermometer spending time outside its natural recording range might be as bad for accuracy as the defects you have identified.

Reply to  It doesnot add up
June 30, 2023 7:51 pm

Thanks for the kind words, Idau.

Back when meteorological stations were designed and installed, no one cared about knowing air temperature to ±0.1 C.

The thermometers are of a special design that preserves the liquid column at the T_max or T_min.

Your idea is good in principle, but having many small-range accurate thermometers in a Stevenson screen likely would not have worked in practice.

June 29, 2023 6:42 am

“The lengthened growing season, the revegetation of the far North, and the poleward migration of the northern tree line provide evidence of a warming climate. However, the rate or magnitude of warming since 1850 is not knowable.”
__________________________________________________________________

Also provides evidence of increased atmospheric carbon dioxide. It’s not just temperature and it’s not a problem:

1. More rain is not a problem.
2. Warmer weather is not a problem.
3. More arable land is not a problem.
4. Longer growing seasons are not a problem.
5. CO2 greening of the earth is not a problem.
6. There isn’t any Climate Crisis.

Reply to  Steve Case
June 29, 2023 7:42 am

Or
Has it been warmer in the past?
Has it been colder in the past?
Has it been cloudier in the past?
Has there been less cloud in the past?
Has there been more rain in the past?
Has there been less rain in the past?
Have there been more hurricanes in the past?
Have there been fewer hurricanes in the past?
Have sea levels been higher in the past?
Have sea levels been lower in the past?
Has there been more atmospheric CO2 in the past?
Has there been less atmospheric CO2 in the past?

With the past starting yesterday, the answer to all those questions is yes. The answer to “Is any of this a problem?” is no.

Reply to  Ben Vorlich
June 29, 2023 7:43 am

Apart from
Has there been less atmospheric CO2 in the past?
Which is problematic

Reply to  Steve Case
June 29, 2023 8:14 am

there is a climate emergency cult crisis

Bob Johnston
June 29, 2023 6:44 am

All of the climate fanatics like that the temperature data is awful because it allows them to do adjustments to it, adjustments that have the effect of lowering past temps and raising current ones. It’s why we never hear about the USCRN from the climate fanatics because it shows no warming over its history. It’s my belief that at some point NOAA will stop publishing the USCRN data because it doesn’t show what they want it to show.

Reply to  Bob Johnston
June 29, 2023 7:07 am

Thanks for reminding me of USCRN

Nick Stokes
June 29, 2023 6:45 am

The paper starts with the usual Pat Frank nonsense:
“The published 95% uncertainty of the global surface air-temperature anomaly (GSATA) record through 1980 is impossibly less than the 2σ = ±0.25 °C lower limit of laboratory resolution of 1 °C/division liquid-in-glass (LiG) thermometers.”

GASTA is not a measured temperature. It is an average. Much information goes into it; many errors cancel. The average is known more accurately than each reading that makes it up.

Years ago I set out here, empirically, how averaging improves precision for the simple task of averaging over a month. You can downgrade instrumental precision by rounding, and still get basically the same result. Here is an extract, covering Melbourne daily max:

[image: monthly means of Melbourne daily max, computed from full-precision vs 1 °C-rounded data]

The reason is clear if you look at the individual month. The spread of temperatures is far larger than could be explained by instrumental error. It is caused instead by weather. And that is what determines the uncertainty of the monthly average for one place, and even more for the global average. And as more data goes into the average, the precision improves by the (almost) universally accepted accumulation of information. The error is basically sampling error, and it does diminish roughly with √ N. There may be issues of independence, but the cancellation process dominates.

Again, the limit to accuracy of the average is determined by weather sampling issues, not instrumental accuracy.
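Nick’s rounding experiment can be sketched with synthetic data. This is a stand-in, not his actual Melbourne series; the 25 °C base and 3 °C weather spread are made-up illustration values:

```python
import random

random.seed(42)

# Synthetic stand-in for a month of daily max temperatures (degC):
# weather variability (sd ~3 degC) dwarfs 1 degC instrument resolution.
true_temps = [25.0 + random.gauss(0, 3.0) for _ in range(31)]

mean_full = sum(true_temps) / len(true_temps)

# Degrade every reading to whole-degree resolution, then average again.
rounded = [round(t) for t in true_temps]
mean_rounded = sum(rounded) / len(rounded)

print(f"mean of 0.1-degC readings: {mean_full:.3f}")
print(f"mean of 1-degC readings:   {mean_rounded:.3f}")
print(f"difference:                {abs(mean_full - mean_rounded):.3f}")
```

On any run the two means agree to a few hundredths of a degree, which is the behavior Nick describes; whether that settles the accuracy question is exactly what the rest of the thread disputes.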

Reply to  Nick Stokes
June 29, 2023 6:52 am

Nitpick Nick is in panic mode…

Tom Halla
Reply to  Nick Stokes
June 29, 2023 7:03 am

Nick, each temperature measurement is of a different thing, not multiple measurements of the same thing. So how do they “average out”?

Nick Stokes
Reply to  Tom Halla
June 29, 2023 7:07 am

Because some deviate high and some deviate low, for whatever reason. And when you add them, the deviations cancel.

Reply to  Nick Stokes
June 29, 2023 7:10 am

The average is known more accurately than each reading that makes it up.

Liar. The usual 1/root-N mantra of the trendologists.

Reply to  karlomonte
June 30, 2023 2:13 am

No, nooo, noooooo, the average makes more sense than each reading making up the average, which is noise. While single observations have uncertainty intervals, averages have error bars, for example.

Here is maximum temperature measured at Albany, Western Australia on 4 January 1907: 17.9 +/- 0.3 degC.

So what use is a single number? And why do you use inflammatory, aggressive and ungentlemanly terms such as liar?

Are you so angry that you are bereft of etiquette?

Why not take an ISO 17025 nap and hop-out spreading joy and goodwill on the sunny-side?

Yours sincerely,

Dr Bill Johnston

http://www.bomwatch.com.au

Reply to  Bill Johnston
June 30, 2023 4:06 am

Because Stokes is a liar, by his deceptive quoting of people (among other things).

Seeing you defend Nitpick is no surprise, at this point.

Your obsession with me is noted, get professional help.

Reply to  Bill Johnston
June 30, 2023 5:03 am

No, nooo, noooooo, the average makes more sense than each reading making up the average, which is noise.

Bill thinks all errors are random “noise” and cancel with the magic of averaging, just like all the rest of the trendologist ilk.

Tom Halla
Reply to  Nick Stokes
June 29, 2023 7:12 am

Accurate to the nearest half degree means accurate to the nearest half degree. All the measurements are independent, so canceling error is impossible.
You are reifying “temperature” to somehow get accuracy greater than that of the instruments used to make each individual reading.

Nick Stokes
Reply to  Tom Halla
June 29, 2023 7:21 am

So how do you explain that example? I took the temperatures accurate to 0.1°C, got monthly means. I then rounded to 1°C, and the means did not vary on that scale. They varied on the scale expected by cancellation, which was σ/sqrt(31), where σ is the sd of the uniform distribution over 1°C.
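The σ/sqrt(31) figure quoted here follows from the standard deviation of a uniform rounding error; a quick sketch of the arithmetic, assuming independent daily errors:

```python
import math

# Rounding to 1 degC adds an error uniform on (-0.5, 0.5] degC,
# whose standard deviation is 1/sqrt(12) ~ 0.289 degC.
sigma_round = 1 / math.sqrt(12)

# Averaging 31 independent daily values scales that down by sqrt(31).
sigma_monthly_mean = sigma_round / math.sqrt(31)

print(f"sd of 1-degC rounding error:     {sigma_round:.3f} degC")
print(f"expected jitter in monthly mean: {sigma_monthly_mean:.3f} degC")
```

That works out to roughly 0.05 °C of month-to-month jitter from rounding alone, consistent with the scale Nick reports.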

Tom Halla
Reply to  Nick Stokes
June 29, 2023 7:29 am

Nice trick. Temperature varies over the period of a day, a week, a year. What, pray tell, are you measuring more accurately than any instrument used to make any given measurement?

Nick Stokes
Reply to  Tom Halla
June 29, 2023 7:32 am

This is just the maximum temperature for each day, and working out the average max for the month.

Reply to  Nick Stokes
June 29, 2023 12:21 pm

Then why do we go to the great expense of making instruments with high precision and calibrate them, when, by your claim, all we have to do is make multiple gross measurements from a cheap instrument?

Alexy Scherbakoff
Reply to  Clyde Spencer
June 29, 2023 4:28 pm

Thanks to Nick’s explanation my tape measure has become a micrometer.

Nick Stokes
Reply to  Clyde Spencer
June 29, 2023 11:26 pm

If you want an accurate measure of the place where you are, high precision is great. If you want to know the global average, the obstacle is coverage and sampling error, not local precision.

Reply to  Clyde Spencer
June 30, 2023 2:34 am

Because Clyde Spencer, ‘we’ don’t.

How much have you actually contributed to “making instruments with high precision and calibrate them”. Are they “high precision” in the first place?

Most people commenting here have never seen a meteorological thermometer, let alone taken regular weather observations using one. So aside from drawing a tribal response, who of the royal “we” knows anything about measuring weather parameters?

How about a straw-poll of those who have undertaken regular standard weather observations, versus the ‘experts’ who have sat on the sidelines sipping Kool-Aid and urging the referee, but have never played the game?

Cheers,

Dr Bill Johnston

http://www.bomwatch.com.au

Reply to  Bill Johnston
June 30, 2023 4:08 am

You forgot to whine about ISO 17025 in your latest Bill-rant, Bill.

Reply to  Bill Johnston
June 30, 2023 8:53 pm

It sounds like you are saying you believe that technicians reading a thermometer, who don’t have a clue how it was constructed, are better prepared than an engineer to assess the issues of accuracy related to bore diameter consistency, bulb dimensional stability over time, and the role of surface tension and wettability of the glass bore. I suppose most anything is possible.

How many thermometers have you constructed in your career?

Reply to  Clyde Spencer
June 30, 2023 9:28 pm

Dear Clyde,

Perhaps you did not hear what I wrote.

You previously said “we go to the great expense of making instruments with high precision and calibrate them” ….

I don’t have to have made any thermometers to highlight that met-thermometers with 1 degC (or 0.5 degC) indices are not “high precision instruments”. If you want a high precision instrument, by all means buy one of those, but don’t buy a met-thermometer. One reason you might want to buy one is to cross-calibrate a met-thermometer in a water-bath, for instance.

I believe parallax error (reading a thermometer at an angle from horizontal), reading the wetted perimeter (not the meniscus), and unconscious rounding affect the precision of an estimate to a greater extent than the precision of the thermometer itself.

All the best,

Bill Johnston

Reply to  Bill Johnston
July 2, 2023 3:25 am

“How about a straw-poll of those who have undertaken regular standard weather observations,”

Well, I used to work for a railroad where we recorded the temperatures at every railroad station at 6am, 12 noon, 6pm and midnight. This is done by all railroads. I actually had to observe a thermometer to get my particular station’s readings. Does that count?

Reply to  Tom Abbott
July 2, 2023 3:54 am

Thanks Tom,

This becomes a problem because standard weather observations include Max for the previous day, and overnight Min (observed at 9am). Dry bulb and wet bulb at 9am local time and 3pm (for relative humidity ranges and dew point) rainfall, visibility, wind etc….. This all cobbles together to make predictions and inform the locals how hot it is and to use sun-block and get vaccinated …

However, an airport may be very interested in ambient-T, RH, and wind strength and direction through the day in order to calculate parameters affecting load, speed, takeoff direction and ‘lift’ in relation to runway length … upper-air, fuel consumption .. etc

The railroad may want to predict expansion in lengths of rail as that may affect buckling etc etc. The local wind farm or solar array may want to know something else – wind droughts and hailstorms come to mind …. The local sewerage treatment works may want to know about evaporation, fermentation …

You get the picture.

In any event, the conditions under which observations are made and the diligence with which they are made are vital to the outcome. More importantly from a climate perspective the consistency of repeated observations through time …

Cheers,

Bill

Reply to  Bill Johnston
July 2, 2023 1:37 pm

Thanks, Bill.

Reply to  Nick Stokes
June 29, 2023 7:58 am

How do you know the measured values are a correct indication of the physically true air temperatures, Nick?

Reply to  Pat Frank
June 29, 2023 4:40 pm

He doesn’t know. He doesn’t even mention how he propagated the measurement uncertainties in his comparison! He can’t actually know that the means came out the same.

Nick Stokes
Reply to  Pat Frank
June 29, 2023 5:23 pm

It is the standard calculation of mean for any numbers. Do you have any example of how “physically true” numbers would behave differently?

Reply to  Nick Stokes
June 29, 2023 5:59 pm

If your numbers are meaningless, your mean is meaningless.

Nick Stokes
Reply to  Pat Frank
June 29, 2023 6:40 pm

Pat, you’ve thumped the table about how uncertainty propagates which is at variance with many years of established scientific practice. But you have never given a verified example of how it works with “true” numbers (or how the calculation knows they are “true”).

Reply to  Nick Stokes
June 29, 2023 7:19 pm

Nick you are talking about a mathematically derived probability not error ranges. What you describe works for pure abstract numbers but not for real-world instrument readings. Gain a little understanding for once.

bdgwx
Reply to  Richard Page
June 29, 2023 7:45 pm

Richard Page: What you describe works for pure abstract numbers but not for real-world instrument readings. Gain a little understanding for once.

Then tell us. If the methods, equations, and tools that NIST, JCGM, and all of the other internationally recognized standards bodies have produced fail in “real-world” scenarios, then why does everybody trust them for calibration certification, and what methods, equations, and tools are we supposed to use in their place?

Reply to  bdgwx
June 29, 2023 8:58 pm

You forgot to spam the NIST uncertainty machine link in this post.

HTH

Reply to  bdgwx
June 30, 2023 3:41 am

Most of these bodies *specifically* work with multiple measurements of the same thing and assume that generates a random and Gaussian error distribution – just like Bevington does. And then they assume that all that random, Gaussian error cancels out. From there they assume one of two things: 1. the SEM is the uncertainty of the mean or 2. The variation in the stated values is the uncertainty in the mean.

When you have multiple measurements of different things you simply cannot assume the uncertainty will give you a random, Gaussian distribution. Therefore you cannot justify either of the two assumptions. And, in fact, this is what Pat has proven – the uncertainty is *NOT* random and Gaussian for temperature measurements.

It is only climate alarm cultists that have to fall back on the assumption that all uncertainty is random and Gaussian FOR EVERYTHING. In the real world that just isn’t the case. As Pat said, it is an assumption of convenience to make things *easier*. The *easy* way is seldom the right way.
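The non-cancellation point can be illustrated with a toy simulation; the 0.3 °C shared bias and 0.25 °C noise figures below are made up purely for illustration:

```python
import random

random.seed(0)

BIAS = 0.3   # hypothetical shared systematic error, degC
N = 10_000   # many single readings of *different* true temperatures

errors = []
for _ in range(N):
    true_t = random.uniform(-10, 35)                 # a different true value each time
    reading = true_t + BIAS + random.gauss(0, 0.25)  # bias + random noise
    errors.append(reading - true_t)

mean_error = sum(errors) / N
print(f"mean error after averaging {N} readings: {mean_error:.3f} degC")
# The random component shrinks toward zero, but the shared bias survives
# averaging completely.
```

Averaging here does exactly what both sides say it does: it suppresses the random part and leaves any systematic part untouched. The thread’s dispute is over how much of the real error is systematic.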

Reply to  bdgwx
June 30, 2023 5:18 am

Because the calibration protocols result in a competitive environment with various calibration labs auditing themselves and others.

The protocols use multiple measurements of the same thing under the same environment.

They don’t go out and take single measurements of different things and apply the average to a single instrument being calibrated.

You *have* lived in your parents basement for too long. Get out into the real world. Go get a job as an apprentice machinist. You’ll find out quickly how measuring different things tells you nothing about the thing you are creating.

Reply to  bdgwx
June 30, 2023 6:44 am

Because the bodies are staffed by statisticians and computer modellers who understand the manipulation of abstract numbers but have never taken measurements in the field and do not understand that there is a big difference between abstract numbers and temperature readings.

bdgwx
Reply to  Richard Page
June 30, 2023 8:17 am

but have never taken measurements in the field

NIST has never taken measurements in the field? Are you being serious here?

and do not understand 

And now we come full circle. NIST and JCGM were the institutions that were going to take down climate science once and for all. Yet here we are with WUWT commenters taking down NIST and JCGM because their methods, equations, and tools are inconsistent with a contrarian publication in a predatory journal.

Reply to  bdgwx
June 30, 2023 8:56 am

Yet here we are with WUWT commenters taking down NIST and JCGM because their methods, equations, and tools are inconsistent with a contrarian publication in a predatory journal.

bgwxyz sure puts his true feathers on display here with this inane, irrational rant.

Reply to  bdgwx
June 30, 2023 9:03 pm

The accusation that Frank’s work was published in a “predatory journal” is really an implied ad hominem. You are attempting to demean the work not by addressing the substance, but by attacking Frank’s character by suggesting where it was published is of more importance than what he presented. That is like the trolls who refuse to even read anything published here at WUWT.

You have destroyed what little credibility you may have had by demonstrating a lack of objectivity and engaging in what is universally understood to be an act of desperation to win an argument.

bdgwx
Reply to  Clyde Spencer
July 1, 2023 6:33 pm

Just because MDPI is listed as a predatory journal does not mean Pat Frank’s character is bad. In fact, I’d be willing to bet the vast majority of authors including Frank who publish in MDPI do so in good faith. In fact, there is precedent for describing authors who publish in a predatory journal as being “wronged”, “misled”, and needing “protection”. Don’t hear what I’m not saying. I’m not saying that USC 15 § 45 will be applied by the FTC in this case or not. I don’t know. I’m just saying in the past when they have they have not attacked the character of the authors. In fact, they’ve defended them.

And if pointing out that 1) equation (4) has an arithmetic mistake 2) equations (5) and (6) are not consistent with NIST and JCGM formulas and 3) that no attempt was made to propagate the LiG resolution uncertainty in the spatial domain does not qualify as “addressing the substance” then what would qualify?

Reply to  bdgwx
July 1, 2023 9:35 pm

Yet YOU tried to beat Pat Frank over the head for using a journal you don’t approve of.

Hypocrite.

Clyde has you pegged.

Reply to  bdgwx
July 2, 2023 10:19 am

And if pointing out that 1) equation (4) has an arithmetic mistake

It has a misprint. The result of the calculation is correct.

2) equations (5) and (6) are not consistent with NIST and JCGM formulas

Eqns. 5 and 6 are correct. Your repeated complaints display a fully refractory ignorance.

and 3) that no attempt was made to propagate the LiG resolution uncertainty in the spatial domain

Irrelevant. The paper concerns instrumental performance.

does not qualify as “addressing the substance”

Your 1-3 are insubstantial. Your capacities for willful ignorance and an insistent foolishness are exceptional.

then what would qualify?

An argument from knowledge, of which you are very evidently incapable, bdgwx.

bdgwx
Reply to  Pat Frank
July 2, 2023 11:29 am

PF: It has a misprint.

Which is fine. I misprint all of the time. In fact, I’d venture to guess I misprint more than you do. That’s not what I’m concerned about. What I’m concerned about is that the calculation in (4) is performed differently than in (5) and (6), which is very odd since all 3 of them purport to be the uncertainty of a mean, just over different time periods.

PF: The result of the calculation is correct.

To be pedantic, no. Even with the division typo corrected there is still a mistake. You either needed to have 1.96σ on the LHS or to multiply by 2 on the RHS. Again, though, I’m not really concerned with that detail per se. But it does make me seriously question the peer-review process here.

PF: Eqns. 5 and 6 are correct.

Says who?

PF: Irrelevant. The paper concerns instrumental performance.

Of course it is relevant. You claim to compute the global average temperature uncertainty. That can only be done correctly with the consideration of the grid mesh used to compute the global average temperature or at the very least a recognition of the spatial domain.

Reply to  bdgwx
July 2, 2023 1:08 pm

since all 3 of them are proport to be the uncertainty of a mean just over different time periods.

No, they do not. As already noted to you ad nauseam.

Says me, bdgwx. And for good reason: eqns. 5 & 6 compute the relevant quantities, namely the RMS of uncertainty over the time ranges.

Your continued misrepresentation is, charitably, a persistence in ignorance.

the global average temperature uncertainty.

The lower limit of instrumental resolution and accuracy in an air temperature anomaly. Your view is entirely wrong.

bdgwx
Reply to  Pat Frank
July 2, 2023 7:09 pm

PF: Says me

If you’re going to boil it down to either believing you or NIST, JCGM, ISO, etc., then can you really blame me if I choose the latter?

Reply to  bdgwx
July 2, 2023 8:42 pm

You’re headed into the cornfield regardless.

Reply to  bdgwx
July 2, 2023 4:50 pm

What does spatial spread have to do with instrument capability? You aren’t making any sense. If an LIG has a resolution limit in Bangor, Maine it will have the same resolution limit in Phoenix, AZ. What in Pete’s name does geography have to do with resolution limit?

Reply to  Pat Frank
July 2, 2023 12:27 pm

bg thinks because he can stuff the average formula into GUM 10 and the vaunted NIST uncertainty machine, he is now the world’s foremost expert on the subject.

bdgwx
Reply to  karlomonte
July 2, 2023 7:06 pm

Let it be known that I think NIST calculates the uncertainty of the output of a measurement model that computes the mean correctly.

Reply to  bdgwx
July 2, 2023 8:16 pm

Ask me if I care what you believe…

Reply to  bdgwx
July 3, 2023 4:53 am

You don’t even know the difference between the SEM and the accuracy of the population mean. So how can you judge whether the NIST is doing what you think it is doing?

Reply to  Clyde Spencer
July 1, 2023 9:33 pm

Yup, yup, and yup.

Reply to  bdgwx
June 30, 2023 8:52 pm

They don’t fail, they are just not used correctly.

Reply to  Nick Stokes
June 29, 2023 8:57 pm

Quit while you are behind, Stokes, by yapping about true values you are only highlighting your abject ignorance.

Reply to  karlomonte
June 30, 2023 2:36 am

karlomonte you need another nap.

b.

Reply to  Bill Johnston
June 30, 2023 4:09 am

Give it up, Bill-clown.

Reply to  karlomonte
June 30, 2023 3:45 am

Oh karlomonte, you missed your afternoon nap, yap or whatever

So hard to keep up with yet so far behind.

Reply to  Bill Johnston
June 30, 2023 5:04 am

Hypocrite. Is this what you call “gentlemanly etiquette”?

Reply to  Nick Stokes
June 29, 2023 10:54 pm

I’ve propagated errors in my experimental work and put the results in my published papers. I first learned error propagation in Analytical Chemistry and later in Instrumental Methods labs.

…at variance with… You don’t know what you’re talking about, Nick.

You’re right about calculations. They don’t know about true. But true is a non-issue.

Accuracy is the crux. Accuracy statements are qualified. Experimental scientists know the reliability of their data. They propagate the known uncertainties into their final result to convey its reliability to others.

LiG Met. is about accuracy statements. The behavior of numbers reveals nothing about accuracy. Calibration reveals accuracy.

If inaccurate numbers sum to zero, the zero is inaccurate.

Reply to  Pat Frank
June 30, 2023 3:34 am

A self-obvious truth that somehow seems to always elude the climate alarm cultists.

Reply to  Pat Frank
June 30, 2023 3:42 am

If, if, if, but how do you personally know numbers are meaningless?

This is about your paper, not putting-down Nick Stokes.

I want to know about your paper and your justifications, not your putting other people down.

Cheers,

Bill Johnston

Reply to  Bill Johnston
June 30, 2023 5:05 am

So go read it, duh.

Reply to  Bill Johnston
June 30, 2023 7:11 am

Where have I put anyone down, Bill.

Reply to  Pat Frank
June 30, 2023 9:23 am

> Where have I put anyone down

Is that a trick question, Pat?

Pay me 10 bucks per put down and I might find them back for you.

Reply to  Willard
June 30, 2023 1:20 pm

If you have evidence, present it Willard.

Reply to  Pat Frank
June 30, 2023 11:13 pm

What’s in it for me, Pat.

Reply to  Willard
July 1, 2023 2:45 pm

A demonstration that your pejorative inference is honest.

Reply to  Pat Frank
July 2, 2023 9:40 pm

I already have that demonstration ready, Pat, and inference might not mean what you make it mean.

Reply to  Bill Johnston
June 30, 2023 1:21 pm

If you want to know about my paper and my justifications, and you don’t find an answer in the paper, then ask a specific question, Bill.

Reply to  Nick Stokes
June 29, 2023 8:32 pm

Wrong again.

Nick has obviously no clue when the law of large samples can be used and when it can’t.

Reply to  bnice2000
June 29, 2023 8:59 pm

You could have ended your sentence after “no clue”.

Reply to  Nick Stokes
June 30, 2023 3:36 am

Figuring out how close you are to the population mean doesn’t tell you how accurate that population mean is.

Why is the difference so hard to understand?

ThomasEdwardson
Reply to  Tim Gorman
June 30, 2023 8:23 pm

Because they never earned their rifle merit badge as a Boy Scout. When scoring targets, we just assume that the rifle sights can be off by two or three inches at the target (the accuracy of the population mean). But if the scout has a good sight picture, good sight alignment, good hold control, good breath control, can gently squeeze the trigger, and follows through the shot, they will always be able to place 5 shots inside a one inch circle (a very tight grouping). As long as a quarter covers all five shots, they pass, even if the quarter is totally outside the black. The scout controls the spread about the group’s mean. The armorer maintaining the rifles controls the accuracy of the population mean when adjusting the sights.

Reply to  ThomasEdwardson
July 1, 2023 7:51 am

You got it! And the SEM is the same thing as the quarter!

Reply to  Pat Frank
June 30, 2023 3:37 am

But Pat, this becomes an inflammatory argument – how do you know they are not?

Argue the point not the person making the point.

All the best,

b.

Reply to  Bill Johnston
June 30, 2023 1:19 pm

Bill, if someone is speaking from ignorance, it is not a personal insult to point that out.

Reply to  Bill Johnston
June 30, 2023 4:56 pm

Argue the point not the person making the point.

You might want to follow your own advice and refrain from saying things like, “Oh karlomonte, you missed your afternoon nap, yap or whatever …”

Reply to  Clyde Spencer
June 30, 2023 6:00 pm

I agree Clyde. Tedious as it is, this thing with karlomonte has a history and seems to have developed a life of its own.

b.

Reply to  Bill Johnston
June 30, 2023 9:03 pm

I called you out on your irrational obsession with Jennifer M., and now you have transferred this obsession onto me,

Reply to  karlomonte
June 30, 2023 9:44 pm

You say “irrational obsession with Jennifer M.”. She was wrong in her assertions, but frustratingly kept on repeating the same things. Somehow that turns into an “irrational obsession”.

Furthermore, I prepared a report detailing why and how she was wrong. I followed that up the other day with a second report that showed the protocols I suggested she follow were robust. Both reports are available here: https://www.bomwatch.com.au/statistical-tests/

I have no obsession with you, dear karlomonte. In fact you say some things that are interesting and detailed and in context quite handy to know. You let yourself down with your extreme language and accusatory tone, though.

All the best,

Dr. Bill Johnston

http://www.bomwatch.com.au

Reply to  Nick Stokes
June 29, 2023 4:39 pm

The problem is that you ignored the uncertainty in BOTH cases! You simply can’t know if they came out the same or not! What was the variance of each data set? Did you round up, down, or at random. Rounding doesn’t improve uncertainty, it just carries it along!

Reply to  Nick Stokes
June 30, 2023 8:47 pm

Again you are dealing with a distribution. Did you check to see if the distribution of that data was normal? Was it the same distribution after rounding?

Let me point out that these are measurements with a certain resolution. Significant digits apply. If you found the mean using data with 1 decimal place was the same as using data with only units, you were lucky or you fiddled the data.

A mean of integer data should be an integer. Then to be equal, the average of data in the tenths would have to come out 20.0.

Reply to  Nick Stokes
June 29, 2023 7:52 am

In your dreams, Nick.

Reply to  Nick Stokes
June 29, 2023 10:25 am

Relative errors and uncertainties always add in a sum or a mean. They never cancel each other, except in “cancel culture”.

bdgwx
Reply to  Petit-Barde
June 29, 2023 2:24 pm

Then explain why when I enter (x0+x1)/2 into the NIST uncertainty machine the result is an uncertainty that is less than the individual uncertainties of x0 and x1? Is NIST wrong? Is JCGM 100:2008 wrong? Is Bevington section 4 wrong?

Reply to  bdgwx
June 29, 2023 3:10 pm

Ever hear of garbage-in-garbage-out?

Apparently not.

Reply to  karlomonte
June 29, 2023 4:42 pm

He *still* believes that the average uncertainty is the uncertainty of the average!

Reply to  bdgwx
June 29, 2023 4:12 pm

Because when dividing the sum by 2, the quadrature-combined uncertainty is divided by 2 as well, to keep the fractional uncertainty constant.

Suppose each uncertainty is ±0.2. Then the uncertainty in the sum x_0 + x_1 is sqrt[2 × (0.2)²] = ±0.28.

But when dividing the sum by 2, the uncertainty is divided by 2 as well.

Uncertainty in the mean is then ±0.14.
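The arithmetic in this comment can be checked directly; this sketch just reproduces the quadrature sum and the division by 2:

```python
import math

u = 0.2                       # uncertainty in each reading, degC
u_sum = math.sqrt(2 * u**2)   # quadrature combination for x0 + x1
u_mean = u_sum / 2            # dividing the sum by 2 divides the uncertainty by 2

print(f"u(x0 + x1)       = +/-{u_sum:.2f}")
print(f"u((x0 + x1) / 2) = +/-{u_mean:.2f}")
```

The quadrature sum gives ±0.28 and halving it gives ±0.14, matching the figures above.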

Reply to  Pat Frank
June 29, 2023 4:47 pm

All that really tells you is how accurately you have calculated the mean. It doesn’t tell you what the accuracy of the mean is.

In addition, why is the uncertainty divided by 2 when dividing the sum by 2? The number “2” has no uncertainty. If you follow the rules for uncertainty, the uncertainty of the average would be the relative uncertainty of the two factors plus the relative uncertainty of the constant (i.e., zero). You can’t just arbitrarily reduce the uncertainty just because you have found an average.

Reply to  Tim Gorman
June 29, 2023 5:56 pm

Tim, division by 2 keeps the proportionate uncertainty constant.

Reply to  Pat Frank
June 30, 2023 3:53 am

Ahhh! But that is still the average uncertainty and not the uncertainty of the average.

Reply to  Pat Frank
June 30, 2023 11:32 am

One thing we can agree on. But it will have no effect on Tim. It’s an article of faith for him: you must never reduce an uncertainty, so the uncertainty of an average is equal to the uncertainty of the sum, no matter how illogical this becomes.

Reply to  Bellman
July 1, 2023 5:34 am

Says the man that has never once built a beam to span a foundation, designed a staircase, built a stud wall for a room addition, built a racing engine, used a milling machine or lathe to build even something simple like a bolt, measured crankshaft journals or cylinder bores in an engine, built a self-guiding robot, built an electric signal amplifier, or anything else in the world requiring actual measurements.

Uncertainties add, they always add. You simply cannot reduce them by averaging, not in the real world. You simply cannot assume that all uncertainty is random, Gaussian, and cancels. Without that assumption your whole world falls apart.

Reply to  Tim Gorman
July 1, 2023 5:59 am

Don’t me tell Pat Frank. He’s the one with paper doing what you say is wrong.

Reply to  Bellman
July 1, 2023 7:08 am

Why can’t I get my phone to correct my bad typing. That should have been

Don’t tell me, tell Pat Frank. He’s the one with the paper doing what you say is wrong.

bdgwx
Reply to  Pat Frank
June 29, 2023 6:14 pm

PF: But when dividing the sum by 2, the uncertainty is divided by 2 as well.

Uncertainty in the mean is then ±0.14.

Exactly!

And when you follow that procedure you get σ_T_month_mean = 0.035 C and σ_T_annual_mean = 0.010 C in place of (5) and (6) respectively. You can multiply those by 2 to get 2σ [1].

See the difference. Above you divided by 2 after doing the square root. But in your publication you divide by 30.417 and 12 before doing the square root.

[1] In your (5) and (6) you multiply by 1.96, which is the coverage factor for a 95% CI, not 2σ. I’m ignoring that for now since it does not significantly affect the outcome.
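bdgwx’s figures reproduce under the 1/√N treatment he is applying; whether that rule is appropriate here is the very point in dispute. A sketch of his arithmetic:

```python
import math

u = 0.195  # per-reading uncertainty used in the comment, degC

# Standard uncertainty of the mean under the sqrt(N) rule being applied:
u_month = u / math.sqrt(30.417)   # average days per month
u_annual = u / math.sqrt(365)     # readings per year

print(f"u(monthly mean) = {u_month:.3f} C")
print(f"u(annual mean)  = {u_annual:.3f} C")
```

These come out to 0.035 C and 0.010 C, the values quoted in the comment.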

Reply to  bdgwx
June 29, 2023 10:34 pm

The x_0 and x_i are different values, bdgwx. Then taken in an arithmetic average.

Eqns. 5 and 6 are RMS of a constant uncertainty. They’re not arithmetic averages.

bdgwx
Reply to  Pat Frank
June 30, 2023 1:10 pm

The x_0 and x_i are different values, bdgwx.

It doesn’t matter what x0 and x1 are. What matters is what u(x0) and u(x1) are. And you said “Suppose each uncertainty is ±0.2” so that’s what I entered into the NIST calculator.

Eqns. 5 and 6 are RMS of a constant uncertainty. They’re not arithmetic averages.

I know. They are quadratic averages. Not that it matters, because when you average the same value repeatedly you get that same value back regardless of whether it is an arithmetic average or a quadratic average. Regardless, the uncertainty u((x0+x1)/2) is neither the arithmetic nor the quadratic average of u(x0) and u(x1), as is plainly obvious from the result of the NIST calculator.

Nick Stokes
Reply to  Pat Frank
June 29, 2023 6:41 pm

and is 0.14 not less than 0.2?

Reply to  Nick Stokes
June 29, 2023 10:34 pm

Did I dispute that?

bdgwx
Reply to  Pat Frank
June 30, 2023 7:23 am

PF: Did I dispute that?

Let u(x0) = 0.195, u(x1) = 0.195, …, u(x11) = 0.195.

Let y = (x0 + x1 + … + x11) / 12

Your equation (6) computes 2u(y) = 0.382 C.

NIST computes 2u(y) = 0.113 C.

See the problem?
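For readers following along, both figures can be reproduced in a few lines of Python. The 0.195 and N = 12 are the values assumed in this comment; the 1.96 coverage factor comes from the footnote a few comments up.

```python
import math

u = 0.195  # the per-value uncertainty bdgwx assumes above (deg C)
N = 12     # twelve monthly values in the annual mean

# The RMS-style figure attributed to equation (6): the 1.96 coverage factor
# times the root-mean-square of a constant uncertainty (which is just u):
eq6 = 1.96 * math.sqrt(N * u**2 / N)

# The NIST-style expanded uncertainty of the mean y = (x0 + ... + x11) / 12,
# assuming the twelve input uncertainties are independent:
nist = 2 * u / math.sqrt(N)

print(round(eq6, 3), round(nist, 3))  # 0.382 vs 0.113
```

The whole dispute in this sub-thread reduces to whether the divide-by-N happens inside or outside the square root.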

Reply to  bdgwx
June 30, 2023 10:35 am

The problem is, there’s no ‘y’ in eqn. 6.

See your problem?

bdgwx
Reply to  Pat Frank
June 30, 2023 1:05 pm

PF: The problem is, there’s no ‘y’ in eqn. 6.

y is the measurement model that computes the annual average. Equation (6) is said to compute u(y). Refer to JCGM 6:2020 for details on developing measurement models.

PF: See your problem?

No. What specifically about the NIST calculator is a problem?

Reply to  Pat Frank
July 1, 2023 1:20 pm

bdgwx’s problem is that he doesn’t understand how to use equations.

Let’s take his (x0+x1)/2 as the functional description of a measurement. So let’s make sample 1: x0 = 70, x1 = 71.

Then q₁ = (70 + 71) / 2 = 70.5

What is the formula for the mean? How about GUM equation 3.

q̅ = (1/n)Σqₖ

Now, we will put in values.

According to the functional description of (x0 + x1)/2, we get:

q̅ = [(70 + 71) / 2] / 1 = 70.5

Hmmmm, what is the problem here? We only have one item (sample). So we divide by 1, and voila, a simple mean occurs.

Let’s try again with several measurements

q₁ = (1+2+3+4)/4 = 2.5
q₂ = (6+8+10+11)/4 = 8.75
q₃ = (2+6+11+7)/4 = 6.5

q̅₁ = (2.5 + 8.75 + 6.5) / 3 = 5.92

What is?

q̅₂ = (1+2+3+4+6+8+10+11+2+6+11+7) / 12 = 5.92

Why is this?

Because “4” is a common denominator: when you divide by 3, you are effectively dividing by 4 × 3 = 12. Consequently you get the mean either way.

Since these are experimental values, GUM Section 4.2.2 (equation 4) applies. We assume the uncertainty of each reading is ±0.5.

s²(qₖ) = (1/(n-1))Σ(qⱼ – q̅)²

s²(qₖ) = (1/2)[(2.5 – 5.92)² + (8.75 – 5.92)² + (6.5 – 5.92)²]

s²(qₖ) = (1/2)(11.7 + 8.0 + 0.34) = 10.02

So now, equation 5 gives:

s²( q̅) = s²(qₖ) / n

s²( q̅) = 10.02 / 3 = 3.34

s( q̅) = √3.34 = ±1.83
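The worked example above can be checked in a few lines of Python, using the GUM equation numbers cited in the comment (the ±0.5 reading uncertainty is not used in this part of the calculation):

```python
import math

samples = [
    [1, 2, 3, 4],
    [6, 8, 10, 11],
    [2, 6, 11, 7],
]

q = [sum(s) / len(s) for s in samples]   # 2.5, 8.75, 6.5
q_bar = sum(q) / len(q)                  # GUM eq. 3: the arithmetic mean

# GUM eq. 4: experimental variance of the observations
s2_qk = sum((qj - q_bar) ** 2 for qj in q) / (len(q) - 1)

# GUM eq. 5: experimental variance (and standard deviation) of the mean
s2_qbar = s2_qk / len(q)
s_qbar = math.sqrt(s2_qbar)

print(round(q_bar, 2), round(s2_qk, 2), round(s_qbar, 2))  # 5.92 10.02 1.83
```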

Reply to  bdgwx
June 29, 2023 4:42 pm

You’ve been told why this is multiple times. But it violates your religious beliefs that you can increase resolution and decrease uncertainty by averaging.

Average uncertainty is *NOT* the uncertainty of the average!

bdgwx
Reply to  Tim Gorman
June 29, 2023 6:22 pm

TG: Average uncertainty is *NOT* the uncertainty of the average!

Yeah. I know. That’s what I’m trying to tell Pat. (5) and (6) in his publication compute the average of the uncertainty (specifically the quadratic average) and not the uncertainty of the average.

Nobody cares about the average of the uncertainty. It is of little use because it does not tell you how certain you are about the average itself.

To be precise here:

average of the uncertainty = Σ[u(x_i), 1, N] / N

uncertainty of the average = u(Σ[x_i, 1, N] / N)

Now can you explain this to Pat?

Reply to  bdgwx
June 29, 2023 10:26 pm

Nobody cares about the average of the uncertainty.

That’s exactly what I cared about.

The uncertainty is the lower limit of detection and is therefore of constant weight in a daily mean, in a monthly mean, and in a yearly mean. Eqns 5 and 6 are there to show that.

Years ago I collaborated in a project with a guy who didn’t understand the X-ray absorption spectroscopy I used. When we wrote our paper, he kept editing what I wrote into what he thought it should mean. He was wrong and his changes made the paper wrong. I kept having to change it back, til we solved the meaning problem.

You’re doing the same thing, bdgwx. You don’t understand the work. You change it around in your mind to what you think it should mean. Your concept is wrong. And then you address your mistaken understanding.

And since the paper doesn’t match what you think it should be, you think I made a mistake. But I didn’t. You did. But you don’t understand the work, and so you don’t grasp your mistake.

Rinse and repeat. Over and over and over.

bdgwx
Reply to  Pat Frank
June 30, 2023 7:02 am

PF: That’s exactly what I cared about.

Then you need to tell people that your publication is only determining the quadratic average of the individual temperature measurement uncertainties over a period of time, and not the uncertainty of the average itself.

You even use the phrase “The uncertainty in Tmean for an average month (30.417 days) is the RMS of the daily means.” In other words, you are labeling (5) as being u(Σ[Tavg_i, 1, N] / N) but calculating sqrt(Σ[u(Tavg_i)^2, 1, N] / N) instead.

Reply to  bdgwx
June 30, 2023 10:26 am

…you need to tell people that your publication…

That part of the paper is about the lower limit of resolution, applicable to all 1C/division meteorological LiG thermometers.

This is exactly what I tell people. But you obviously didn’t grasp it.

bdgwx
Reply to  Pat Frank
June 30, 2023 12:35 pm

You called it “The uncertainty in Tmean”, but all you did is calculate the quadratic average of the individual measurement uncertainties. That is not the uncertainty in Tmean.

Reply to  bdgwx
June 30, 2023 1:15 pm

It is when the uncertainty entering the mean is restricted to the lower limit resolution of LiG thermometers.

…field-conditions lower limit of visually-read resolution limited…

bdgwx
Reply to  Pat Frank
June 30, 2023 5:55 pm

PF: It is when the uncertainty entering the mean is restricted to the lower limit resolution of LiG thermometers.

Who says that? Do you have a reference?

And why does that change the uncertainty of the mean formula?

Can you derive the RMS formula you used in equations (5) and (6) via the law of propagation of uncertainty?

Reply to  bdgwx
June 30, 2023 4:08 am

There is nothing to explain. The two are not the same.

u(Σ[x_i, 1, N] / N)

break that down: Σ[x_i, 1, N] / N is the AVERAGE VALUE.

Adding the u in front of it doesn’t tell you anything. You don’t KNOW “u”.

Σ[u(x_i), 1, N] / N does nothing but spread the uncertainty factors evenly across all members in the data set. That doesn’t tell you the uncertainty of the average.

2,3,4 has an average of 3. When you find the average uncertainty you convert that to 3,3,3. So what? The uncertainty of the average is not the average uncertainty, it is the uncertainty propagated from each member onto the average. The propagated uncertainty of 2,3,4 using root-sum-square is 5.4. The propagated uncertainty of 3,3,3 is 5.2. You don’t even get the same answer for the propagated uncertainty! So how can the uncertainty of the average be the same as the average uncertainty?
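The arithmetic in that comparison can be verified directly (here the members 2, 3, 4 are themselves root-sum-squared, as the comment does):

```python
import math

values = [2, 3, 4]
avg = sum(values) / len(values)  # 3.0

# Root-sum-square of the members themselves, as in the comment:
rss_members = math.sqrt(sum(v**2 for v in values))     # sqrt(4 + 9 + 16)
rss_averaged = math.sqrt(sum(avg**2 for _ in values))  # sqrt(9 + 9 + 9)

print(round(rss_members, 1), round(rss_averaged, 1))   # 5.4 vs 5.2
```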

Reply to  Tim Gorman
June 30, 2023 4:56 am

An example measurement:

A K-type thermocouple is immersed in a constant-temperature oil bath and connected to an oscilloscope capable of 2 Gsamples/s. The oscilloscope is programmed to digitize the TC voltage for one second, at the max sampling rate. Then the voltages are downloaded and the mean is calculated.

Blindly stuffing the mean formula into the GUM uncertainty propagation, we get:

u(Tbar) = u(T_i) / sqrt(2×10^9) ≈ u(T_i) / 45,000

Vanishingly small, and this u(Tbar) goes to zero in the limit as the number of samples goes to infinity. This is a non-physical result.

Alternatively, the standard deviation can be used to calculate the uncertainty (completely in line with the GUM). But this leads to:

u(Tbar) = sigma / 45000

With good instrumentation practices the standard deviation from the oscilloscope is likely pretty small, so this again leads to tiny numbers that make no sense.

Anyone experienced with temperature measurements using thermocouples should immediately see the problem here — no reference junction. Without subtracting the reference junction voltage, the TC voltages are not proportional to absolute temperature.

In other words, this is a relative temperature measurement with an allegedly tiny measurement uncertainty. This makes no sense.

The point is that averaging these billions of data points has not reduced uncertainty, at all. The uncertainty of the sampling must be combined separately with the results from the huge uncertainties associated with the thermocouple itself.

These are NOT reduced by endless averaging.

Using the techniques of the GUM without understanding only leads to nonsense.
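A minimal sketch of the 1/√N shrinkage the comment warns about; the 1 °C per-reading uncertainty is a placeholder, and none of the thermocouple physics is modeled:

```python
import math

N = 2_000_000_000  # 2 Gsamples/s for one second
u_single = 1.0     # placeholder per-reading uncertainty (deg C), not a real spec

# Blind 1/sqrt(N) averaging drives the claimed uncertainty toward zero:
u_mean = u_single / math.sqrt(N)

print(round(math.sqrt(N)), f"{u_mean:.2e}")  # ~44721 and ~2.24e-05
```

Nothing in the division by √N knows whether the underlying voltages mean anything, which is the point of the missing-reference-junction example.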

Reply to  Petit-Barde
June 30, 2023 3:50 am

Except they are always +/-, never just ++.

I wood have fought 1 -1 = zero or plus three or somfing

What are you talking about?

Reply to  Nick Stokes
June 29, 2023 4:36 pm

BUT YOU DON’T KNOW THE MAGNITUDES OF *ANY* OF THE ERRORS THAT YOU THINK CANCEL!

So how can you possibly know they cancel? Pat Frank has shown that the temperature uncertainty distributions are NOT NORMAL. Non-normal distributions cannot just be assumed to be random and cancel out!

All you are doing is repeating the unproven meme of the climate science cult – “ALL ERROR IS RANDOM, GAUSSIAN, AND CANCELS”.

Pat has shown how this meme simply cannot be true. But you, as a true cultist, believe it in the face of actual, physical proof otherwise.

You simply can’t be embarrassed, can you?

Reply to  Tim Gorman
June 29, 2023 5:02 pm

No, but he can tell you to go play with the NIST uncertainty machine.

Reply to  Nick Stokes
June 29, 2023 7:14 pm

“… when you add them, the deviations cancel.” Wrong, Nick – you cannot possibly predict whether the deviations will cancel or multiply – either is entirely possible and completely random. As you add more readings to your average, the instrument and reading errors will make the final result completely worthless. A complete lack of precision and understanding, despite being able to parrot the religious mantras word-perfect.

Reply to  Nick Stokes
June 29, 2023 8:30 pm

WRONG again! The deviations do not cancel.

You are applying maths which is basically just WRONG.

Reply to  Nick Stokes
June 30, 2023 2:42 am

The “spot the agenda” Nick, from the “hottest eva” school of “weather = climate” bollox – the Meteo France and UK Met Office school.

The deniers or eliminators of the MWP or the Roman periods, which didn’t have thermometers at all… just a few vineyards close to Scotland.

I guess I prefer to believe funny old historical evidence about green Greenlands, Roman vineyards and unheated villas in England 2000 years ago, eh?

Reply to  Nick Stokes
June 30, 2023 2:53 am

Because some deviate high and some deviate low, for whatever reason. 

When averaging things, the “reason” really does matter. Errors cannot cancel if the reasons are different (except coincidentally and so not in the long term).

Say one area is unusually cold because it is shaded by a steep hill and local winds run down the hill and through it. That could balance with a sunny place surrounded by warm water. 
Now say a cold front comes over. 
Clouds block the sun cooling the warm anomaly. And the winds increase, increasing the cold anomaly. Averaging is not balancing anymore.

You cannot average measurements of different things just because the measurements are in the same units. ‘T at place A’ is not the same thing as ‘T at place B’.

Your argument is wrong.

Nick Stokes
Reply to  MCourtney
June 30, 2023 11:47 am

“Errors cannot cancel if the reasons are different”

This is arithmetic. 1-1=0. It doesn’t matter how it got to be 1.


Reply to  Nick Stokes
June 30, 2023 12:45 pm

This is reality, not pure mathematics.
If all 1s become 2s and all -1s stay the same you are arguing that:

2-1=0

And you can’t say your snapshot measurements respond in the same way if you don’t know what’s going on. It does matter how it got to be 1 if you are going to average it with other measurements. If you don’t know, you have no reason to say it’s the same sort of measurement.
Simply put, you can’t average apples and oranges.

Nick Stokes
Reply to  MCourtney
June 30, 2023 8:41 pm

“This is reality, not pure mathematics.”
It’s just the way numbers work. If you add them, and they vary in sign, there will be cancellation.

You may have some argument for saying you shouldn’t add them. But if added, that is what will happen.

Reply to  Nick Stokes
July 1, 2023 7:41 am

Nope! You can’t just consider sign, you also have to consider magnitude. There may be SOME cancellation but you simply cannot just assume TOTAL cancellation unless you actually know the distribution of the errors AND their magnitude. Both of which are impossible to know when you have a multiplicity of measurements of different things made by different instruments.

It’s what root-sum-square addition is used for. PARTIAL cancellation.

Reply to  Nick Stokes
July 1, 2023 5:39 am

Nope. The population growth of swans in Denmark is highly correlated with the annual human birthrate in Denmark. They each have their own uncertainty interval. You can’t simply subtract the two and say there is cancellation of the uncertainty intervals. They are not physically related, and so the uncertainty intervals don’t actually cancel. Statisticians would tell you that they do, but they only look at the numbers, not at the causal relationship.

Temperatures taken at different places and times *are* just like the swans and babies. They are DIFFERENT things. No causal relationship. The uncertainty in Station A doesn’t cancel the uncertainty at Station B. When you combine the two the total uncertainty goes UP, not down.

Reply to  Nick Stokes
June 30, 2023 8:30 pm

You have no reference to back this up. This is an unsupported assertion and is not true. You are trying, probably unknowingly, to conflate experimental data variance with measurement uncertainty. You can’t do that. Dr. Frank has shown that isn’t true. Did you even read his summary above?

Reply to  Nick Stokes
June 30, 2023 8:34 pm

You should know that “adding” measurements requires adding uncertainties. You have no measurement experience or you wouldn’t assert this.

Reply to  Nick Stokes
June 30, 2023 11:45 pm

You still didn’t get what Tom Halla was saying!

Reply to  Tom Halla
June 29, 2023 12:14 pm

And, it isn’t just the temperature that varies with every unique parcel of air that passes a weather station. The humidity, pressure, and density vary with each parcel.

During the Friday before the 3-day 4th of July holiday in 1968, my wife, my younger brother, and I were camping along the North Fork of the American River, not far from Auburn. It was one of the hottest days I have ever experienced. At one point in the afternoon I went up to my pickup truck and turned on the radio to see if I could get a weather report. I got a station in Sonora (CA) that reported 120 deg F. Sonora is basically along the same longitude, and during such heat spells the isotherms tend to be parallel to the Sierra Nevada, which runs north-south. All three of us spent the entire day immersed in the water to keep cool. Occasionally, a gust of wind would come up the canyon, and all three of us would start coughing. It was like being in a steam sauna. Apparently, the hot air gust was flash-evaporating water from the surface of the river. I doubt that such rapid changes in humidity are recorded by weather stations. However, they are important because they control the heat index and reflect the greater enthalpy of the air parcels. The air parcels are all part of the global atmosphere, but they have physical properties that vary over short time scales, and are thus different enough that they shouldn’t be considered to be the same thing, just variants of the same class.

Thomas
Reply to  Clyde Spencer
June 29, 2023 5:03 pm

Exactly. Temperature isn’t a measure of the heat content of air. So, an average global surface temperature can tell us nothing about the total heat content of the atmosphere. At best it’s a weak proxy for heat content. Increased heat content is predicted with increased CO2. Even if we had a hundred-year record of temperatures measured every second at a billion equally spaced locations on the surface of the earth, that would not tell us if the heat content of the atmosphere had increased. Heat content is enthalpy (heat units per mass unit of air and its associated water vapor). A 110 °F day in Phoenix can have the same heat content as an 80 °F day in Houston, just 1000 miles away. The temperature difference is 30 °F, but the heat content is the same.

Reply to  Thomas
June 30, 2023 4:40 pm

At best it’s a weak proxy for heat content.

More formally, temperature is a proxy for the more important property of enthalpy; however, temperature has a low correlation with enthalpy in humid climate zones. Therefore, the difference between enthalpy and the estimate of the enthalpy of air parcels varies with the climate zone and season.
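The Phoenix/Houston contrast a few comments up can be illustrated with the standard psychrometric approximation for moist-air enthalpy. The humidity ratios below are hypothetical round numbers chosen only to show how a cooler, more humid day can carry as much or more heat than a hotter, dry one:

```python
def moist_air_enthalpy(t_c: float, w: float) -> float:
    """Moist-air specific enthalpy, kJ per kg of dry air.

    Standard psychrometric approximation: h = 1.006*t + w*(2501 + 1.86*t),
    with t in deg C and w the humidity ratio (kg vapor / kg dry air).
    """
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# Hypothetical humidity ratios, chosen only for illustration:
phoenix = moist_air_enthalpy(43.3, 0.006)  # ~110 F, dry desert air
houston = moist_air_enthalpy(26.7, 0.018)  # ~80 F, humid gulf air

# Here the humid 80 F air carries more enthalpy than the dry 110 F air:
print(round(phoenix, 1), round(houston, 1))
```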

Reply to  Tom Halla
June 29, 2023 12:30 pm

Pat Frank also shows that the error distribution is not normal. This means, even if errors (on many measurements of the same thing, not many different measurements) would cancel out in the case of a randomly distributed error, for temperature measurements, this is not the case.

Nick Stokes
Reply to  Eric Vieira
June 29, 2023 1:48 pm

Errors cancel even if not normally distributed (which is not the same as random).

Reply to  Nick Stokes
June 29, 2023 4:13 pm

One can’t assume errors cancel when one doesn’t know the errors. To do so is self-serving.

Reply to  Pat Frank
June 29, 2023 4:51 pm

You took the words right out of my mouth. In a skewed distribution there is no guarantee that the left side will cancel out the right side. The probability distribution is basically a histogram telling how often a value occurs. Unless values occur with matching frequency on both sides of the mean there is no guarantee of cancellation. It’s why the median in a skewed distribution isn’t the same as the mean.

You would think Stokes would know this but he’s blinded by his cultish beliefs.

Reply to  Tim Gorman
June 30, 2023 2:53 am

No. Histograms have fixed bins; a probability distribution is a probability density function.

Skew is also measurable, so too its effect on the mean/median. While rainfall is not, daily temperature is mostly robust to skew. (Compare annual mean-T with median-T for instance).

b.

Reply to  Nick Stokes
June 29, 2023 4:40 pm

If the distribution is unsymmetrical, they cannot completely cancel. Therefore, there is a residual bias. That could be small or large. It is impossible to say without knowing the distribution. You are engaging in wishful thinking.

Nick Stokes
Reply to  Clyde Spencer
June 29, 2023 6:48 pm

They won’t completely cancel, whatever the distribution. But the mean of the errors will tend to zero, whatever the distribution. That is what the Law of Large Numbers says. OK, exceptions for distributions with tails too fat to define a second moment.

Reply to  Nick Stokes
June 29, 2023 10:00 pm

When measurements are inaccurate, a zero mean is inaccurate.

Reply to  Clyde Spencer
June 30, 2023 2:54 am

Provide examples.

b.

Reply to  Bill Johnston
June 30, 2023 3:21 am

For the several hundred sites in Australia that I have examined, daily data distributions are bimodal. However, removing the day-of-year cycle may result in long-tailed distributions but with no change in the mean or median.

Multiple analyses show that the change from 230-litre Stevenson screens to 60-litre screens increased the likelihood of upper-range extremes.

Something for another day at this point.

Cheers,

Dr Bill Johnston

http://www.bomwatch.com.au

Reply to  Bill Johnston
June 30, 2023 5:20 pm

Visualize a skewed probability distribution function with a median of zero. Add all the numbers constituting the left-hand side to the numbers on the right-hand side. The sum will not be zero.
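This can be demonstrated numerically: center a right-skewed sample on its median and the two halves do not sum to zero. The lognormal draw below is purely illustrative.

```python
import random

random.seed(0)

# A right-skewed sample (lognormal), centered on its median to mimic
# "a skewed PDF with a median of zero":
xs = sorted(random.lognormvariate(0.0, 1.0) for _ in range(100_001))
median = xs[len(xs) // 2]
centered = [x - median for x in xs]

below = sum(1 for c in centered if c < 0)
above = sum(1 for c in centered if c > 0)

# Half the points sit on each side of zero, yet the sum is far from zero,
# because the right-hand tail is longer:
print(below, above, round(sum(centered), 1))
```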

Reply to  Clyde Spencer
June 30, 2023 5:56 pm

Yes Clyde,

I routinely use a variety of normality/distribution tests as well as histograms and Q-Q plots.

Cheers,

Bill

bdgwx
Reply to  Eric Vieira
June 29, 2023 2:20 pm

It doesn’t matter. The errors cancel all the same as long as they are random. You can actually prove this out for yourself with the NIST uncertainty machine.

Reply to  bdgwx
June 29, 2023 3:07 pm

An error that is, for example, not randomly distributed is a systematic error. These don’t cancel out.

bdgwx
Reply to  Eric Vieira
June 29, 2023 4:13 pm

It depends on the measurement model. If the measurement model is y = a – b then the systematic error cancels. If the measurement model is y = a + b then the systematic error doubles.

The general rule is that when the product of all partial derivatives of the measurement model is less than zero then systematic error is reduced and vice versa when positive. Refer to JCGM 100:2008 equation 16 for why this happens.

You can prove this out for yourself with the NIST uncertainty machine using the correlation feature.

Reply to  bdgwx
June 29, 2023 4:47 pm

… when the product of all partial derivatives of the measurement model is less than zero then systematic error is reduced and vice versa when positive.

Being “reduced” is not the same as “cancelled.” Unless one can specify how much the systematic error is reduced, one is dealing with an unspecified uncertainty and cannot claim to know what the uncertainty is. The best one can do is to bracket the uncertainty with inequalities.

bdgwx
Reply to  Clyde Spencer
June 29, 2023 5:55 pm

CS: Being “reduced” is not the same as “cancelled.”

It is when the measurement model is y = a – b. This is because the product of the partial derivatives ∂y/∂a * ∂y/∂b = -1.

An example where you only get partial cancellation (or only a reduction) is y = a – b/2. In that case the product of the partial derivatives ∂y/∂a * ∂y/∂b = -0.5.

CS: Unless one can specify how much the systematic error is reduced

That’s what the law of propagation of uncertainty does. When the product of the partial derivatives is -1 all of the systematic uncertainty is cancelled. When it is between -1 and 0 then only some of it is cancelled.
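The rule bdgwx is invoking can be written straight from JCGM 100:2008 equation 16 for a two-input model. A minimal sketch (u = 0.5 is an illustrative value, not from the thread):

```python
import math

def u_combined(ca: float, cb: float, ua: float, ub: float, r: float) -> float:
    """JCGM 100:2008 eq. 16 for a two-input model y = f(a, b).

    ca, cb: sensitivity coefficients (the partial derivatives dy/da, dy/db)
    ua, ub: standard uncertainties of a and b
    r:      correlation coefficient r(a, b); r = 1 for a fully shared
            systematic effect
    """
    return math.sqrt((ca * ua) ** 2 + (cb * ub) ** 2 + 2 * ca * cb * ua * ub * r)

u = 0.5  # illustrative systematic uncertainty shared by a and b

diff = u_combined(1, -1, u, u, r=1.0)  # y = a - b: product of sensitivities -1
summ = u_combined(1,  1, u, u, r=1.0)  # y = a + b: product of sensitivities +1

print(diff, summ)  # 0.0 and 1.0
```

Whether r = 1 is ever justified for real instruments is, of course, exactly what the rest of this thread disputes.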

Reply to  bdgwx
June 29, 2023 4:52 pm

More spam. YOU go play with your machine.

Reply to  bdgwx
June 29, 2023 4:56 pm

No, the systematic error does *NOT* cancel out. This has been pointed out to you over and over ad infinitum.

1+1+1+1 = 4. Add a systematic bias to each and you can get 2+2+2+2 = 8. Where is the cancellation? The average of the first is 1. The average of the second is 2. Where is the cancellation of the systematic error?

Stop repeating cult dogma and think for a minute!

Reply to  Tim Gorman
June 30, 2023 4:01 am

No it does not. Systematic error is not additive in time; it adds to each value and therefore produces a measurable step-change in the mean.

As interesting and uninformative as this discussion has been, it is snooze time for me.

Good night (OZ time),

Bill

Reply to  bdgwx
July 3, 2023 7:09 am

Just seen this. It needs to be corrected in case a schoolchild sees it uncritically and fails their 11+.

It depends on the measurement model. If the measurement model is y = a – b then the systematic error cancels. If the measurement model is y = a + b then the systematic error doubles.

Wrong. Whether the model is {y = a – b} or {y = a + b} the errors add. They don’t cancel.
Let us demonstrate this.
Consider the effect of the units you are working in. If we create a new unit, exactly the same scale except zero is more than b lower, then -b becomes a +b. How can arbitrarily changing the units (not the scale, or the measurement, just the zero) change whether we know more or less about the accuracy of the measurement?
It can’t.
If we don’t know one thing, we cannot take away what we don’t know from another thing we don’t know and get no things we don’t know left.
We just have two bits we don’t know. The errors add.

Reply to  MCourtney
July 3, 2023 7:37 am

This has been pointed out to them multiple times – but it never sinks in. Just like they believe the SEM somehow describes the accuracy of a population mean.

bdgwx
Reply to  MCourtney
July 3, 2023 12:22 pm

MCourtney: Wrong. Whether the model is {y = a – b} or {y = a + b} the errors add. 

It’s correct. Systematic error cancels when y = a – b. JCGM 100:2008 equation 16. Assume u(x) = u(a) = u(b) for simplicity. Notice that the uncorrelated term reduces to 2u(x)^2 for both y = a – b and y = a + b. Then notice that the correlated term reduces to -2u(x)^2 for y = a – b and +2u(x)^2 for y = a + b when r(a, b) = 1. You can also verify this with the NIST uncertainty machine.

MCourtney: Consider the effect of the units you are working in. If we create a new unit, exactly the same scale except zero is more than b lower, then -b becomes a +b.

I don’t know what “zero is more than b lower” means. Can you present a concrete example?

How can arbitrarily changing the units (not the scale,or the measurement, just the zero) change whether we know more or less about the accuracy of the measurement?

It’s the same situation with C or K. If a = 283.15 ± S K and b = 273.15 ± S K, where S is entirely systematic and thus r(a, b) = 1, then y = 10 ± 0 K. Or if we change the units but keep the scaling the same, as is the case with C, we have a = 10 ± S C and b = 0 ± S C. Then y = 10 ± 0 C. In either case the systematic uncertainty S cancels since r(a, b) = 1. Nothing has changed with the combined uncertainty u(y). It’s still 0 as long as S is entirely systematic, meaning r(a, b) = 1.

MCourtney: If we don’t know one thing, we cannot take away what we don’t know from another thing we don’t know and get no things we dont know left.

Ah, but that is the power of algebra. We don’t have to know what the value of a variable is to know that it can cancel in a subtraction. If there is a systematic effect S then both a and b will have the S effect. Thus y = (a + S) – (b + S) = a – b. Notice that S cancels out and we don’t even need to know what its value is.

MCourtney: The errors add.

Not when r(a, b) > 0 they don’t. Here is a concrete example using JCGM 100:2008 equation 16.

y = a – b
a = 5
b = 3
u(a) = u(b) = 1

When r(a, b) = 0.00 then y = 2 ± 1.41
When r(a, b) = 0.25 then y = 2 ± 1.22
When r(a, b) = 0.50 then y = 2 ± 1.00
When r(a, b) = 0.75 then y = 2 ± 0.71
When r(a, b) = 1.00 then y = 2 ± 0.00

Note that r is the degree to which u(a) and u(b) correlate. r = 0 means there is no correlation. r = 1 means there is complete correlation. If all of u(a) and u(b) is systematic then r = 1. If none of u(a) and u(b) is systematic then r = 0.

This can be verified with the NIST uncertainty machine.
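The r(a, b) table above can be reproduced from the same equation 16, specialized to y = a - b:

```python
import math

ua = ub = 1.0  # u(a) = u(b) = 1, as in the comment

results = {}
for r in (0.0, 0.25, 0.5, 0.75, 1.0):
    # JCGM 100:2008 eq. 16 for y = a - b (sensitivity coefficients +1 and -1):
    uy = math.sqrt(ua**2 + ub**2 - 2 * r * ua * ub)
    results[r] = round(uy, 2)
    print(f"r = {r:.2f} -> u(y) = {uy:.2f}")
```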

Reply to  bdgwx
July 3, 2023 1:54 pm

Systematic error cancels when y = a – b.

Liar. The GUM does NOT say this.

Reply to  bdgwx
July 3, 2023 4:30 pm

Uncertainty INTERVALS simply do not cancel. They do not have a scalar value that can be defined.

The uncertainty interval goes from minimum to maximum value. Those values *add* no matter what. See Taylor, Section 2.5.

for u = p – v

For variable “p” with +/- ẟp where p_max = p_best + ẟp and p_min = p_best – ẟp

for v you get v_max = v_best + ẟv and v_min = v_best – ẟv

When you subtract them, the maximum possible uncertainty interval is p_max – v_min (or alternatively v_max – p_min).

THERE IS NO CANCELLATION.

If you think there might be some cancellation then you add the uncertainties in quadrature – BUT THEY STILL ADD!

They add whether you have p + v or p – v.

You *still* have absolutely *NO* understanding of uncertainty. Thinking it has a scalar magnitude that can be subtracted in order to cancel it is just one more piece of religious dogma you refuse to give up on.

Reply to  bdgwx
June 29, 2023 3:11 pm

Again with the uncertainty machine spam from bee’s wax.

Nick Stokes
Reply to  karlomonte
June 29, 2023 6:50 pm

NIST and GUM used to be the authority that would put climate science right. Now they are classed as know-nothings too. The only authority left is Pat Frank.

Reply to  Nick Stokes
June 29, 2023 9:01 pm

Idiot.

Reply to  Nick Stokes
June 29, 2023 9:56 pm

The LiG detection limits in Table 1 are from NIST.

Reply to  bdgwx
June 29, 2023 4:53 pm

Who says that the uncertainty in measurements of different things represents a random distribution that is normal? I can guarantee you that the wear in eight pistons in a racing engine is *NOT* normal even though each one represents a random value.

Thomas
Reply to  bdgwx
June 29, 2023 7:18 pm

I suggest you read, “Bernoulli’s Fallacy: Statistical Illogic and the Crisis of Modern Science.”

bdgwx
Reply to  Thomas
June 29, 2023 7:39 pm

Thanks. I’ll check it out. Does Aubrey Clayton challenge the law of propagation of uncertainty, NIST, JCGM, etc.?

Reply to  Nick Stokes
June 29, 2023 7:51 am

Nick’s analysis requires perfect rounding, which in turn requires perfect instrumental accuracy and infinite precision. Neither is available in the physically real world.

Nick is wrong and should know he’s wrong, because we already had that discussion here on WUWT seven years ago. You cleverly deleted my reply posted to mohyu, remember Nick?

Limits of instrumental resolution mean limits on information. There’s no magicking knowledge out of thin air.

There’s an excellent discussion of accuracy, precision and resolution at Phidgets, which I recommend to all. Quoting, “resolution determines the upper limit of precision. The precision in your data cannot exceed your resolution.”

Instrumental detection limits and systematic measurement error will ensure refractory errors in measured data, especially with respect to the physically true magnitude (air temperature).

There’s no getting around it, except in numerical Never-Never-Land.

Reply to  Pat Frank
June 29, 2023 8:10 am

Just yesterday Stokes deceptively misquoted me when I tried to explain that a formal uncertainty analysis of the UAH temperature measurement has not been performed:

https://wattsupwiththat.com/2023/06/26/uncertain-uncertainties/#comment-3740021

Reply to  Pat Frank
June 30, 2023 3:04 am

Fundamentally, Stokes fails to understand the distinction between accuracy and precision. For him, they are the same thing.

Reply to  Graemethecat
June 30, 2023 8:59 am

If he did do so, his entire act would collapse.

Mr.
Reply to  Nick Stokes
June 29, 2023 10:51 am

As Maxwell Smart would say –
“Aaaahhh. It’s the old numbers game again, 99”

Loren Wilson
Reply to  Nick Stokes
June 29, 2023 7:51 pm

If you measure the length of a board with a tape measure that is one inch off, you can average all you want but you will still be wrong by one inch. Averaging reduces random uncertainty but not calibration issues, especially when both liquid in glass thermometers and platinum resistance thermometers tend to drift in one direction. For mercury thermometers, if the bulb shrinks, it reads high, not low. As a PRT ages, stress builds up in the platinum wire and increases its resistance and the measured temperature. Non-random errors cannot be corrected by mathematical approaches meant for Gaussian distributions.

Reply to  Loren Wilson
June 30, 2023 6:55 am

The problem comes when you know your tape measure is off by up to 1 inch, but have no idea whether it is longer or shorter or by exactly how much – then consider it may change each and every time you take a measurement!

Reply to  Nick Stokes
June 29, 2023 8:28 pm

“and it does diminish roughly with √N”

Just flat out WRONG.

These are individual readings in time, NOT a series of repetitive readings of the same thing.

Go back and learn about basics about errors of measurement and when to apply “√ N”, and when NOT TO.

Reply to  bnice2000
June 29, 2023 9:02 pm

He won’t, 1/root-N is his religion.

Nick Stokes
Reply to  bnice2000
June 29, 2023 11:32 pm

NIST has here a set of examples of application of the GUM. Their E2 is exactly the calculation of the average temperature for a month, with data on 22 days. So how do they find the uncertainty of their average? They divide the sd by sqrt(22).
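
For readers following along, the step Stokes describes is just the sample standard deviation divided by sqrt(n). A sketch with invented daily values (the real NIST E2 example uses its own data); whether this statistical model even applies to field temperatures is exactly what the rest of the thread disputes:

```python
import math
import statistics

# 22 invented daily mean temperatures for one month (deg C), standing in
# for the data in NIST's example E2.
temps = [21.0, 20.4, 22.1, 19.8, 20.7, 21.5, 22.3, 20.1, 19.5, 21.8, 20.9,
         22.0, 21.2, 19.9, 20.6, 21.7, 22.4, 20.3, 19.7, 21.1, 20.8, 21.4]
monthly_mean = statistics.mean(temps)
s = statistics.stdev(temps)           # sample standard deviation
u_mean = s / math.sqrt(len(temps))    # the sd / sqrt(22) step in question
```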


Reply to  Nick Stokes
June 30, 2023 4:23 am

Which is NOT what you and the rest of the trendologists rant about!

NIST is taking the variance of the sample set as the uncertainty, which is the standard deviation of the mean over the square root of N minus the degrees of freedom.

bg-whatever is blindly stuffing the mean formula into the uncertainty propagation equation and declaring that the uncertainty of the average is the uncertainty of a single temperature measurement divided by root-N.

sigma / sqrt(N-df) ≠ u(T_i) / sqrt(N) !!

You are lying, again.

And when do climastrologers EVER report standard deviations of the myriad averages they do? NEVER

Reply to  Nick Stokes
June 29, 2023 10:58 pm

“the usual Pat Frank nonsense.” Win the argument by contemptuous dismissal. Gavin Schmidt does similar. So does Noam Chomsky. You’re in good company, Nick.

Reply to  Nick Stokes
June 30, 2023 1:20 am

Pigs only fly backwards from east to west, so it is possible that the temperature is the same from day-to-day but the instrument keeps changing or something …. sinusoidal calibration or lack of, perhaps daily isobaric correlation … ISO thingy, GUM, ask karlomonte. Oh wait, karlomonte has never observed the weather.

More seriously Nick, it is a refreshing change that the paper was published don’t you think? As I see it, you and karlomonte could join forces and submit a rebuttal.

Just an idea … pigs do fly after-all.

Dr Bill Johnston

http://www.bomwatch.com.au

Reply to  Bill Johnston
June 30, 2023 4:26 am

And how do you know that I have “never observed the weather”?

Are you psychic?

No, you are just a clown (one who now seems to have transferred his obsession with Jennifer M. onto me).

Reply to  karlomonte
July 1, 2023 1:59 am

Have you or are you just a BS-artist? Observing is different to recording.

b.

Reply to  Bill Johnston
July 1, 2023 6:42 am

Yes, you are obsessed with me, ankle-biter.

Reply to  karlomonte
July 2, 2023 9:46 pm

“Yes, you are obsessed with me, ankle-biter.” wins the argument by contemptuous dismissal. Pat does similar. So does Rush Limbaugh. You’re in good company, Mr. Three Cards.

Reply to  Willard
July 7, 2023 3:47 pm

“Pat does similar.”

Another accusation sans evidence. Put up or shut up Willard.

Reply to  Nick Stokes
July 7, 2023 3:39 pm

This comment starts with the usual Nick Stokes ignorance.

and continues from there.

June 29, 2023 6:51 am

1. The people compiling the global air temperature record do not understand thermometers.

2. The rate or magnitude of climate warming since 1900 is unknowable.

Oh, the trendologists won’t like this one!

The Dark Lord
June 29, 2023 7:39 am

I have said from the beginning … the data is not fit for purpose … any attempt to run it through a computer model or statistical exercise is just an exercise in “Look how smart I am” but does nothing to reveal any truths about the weather or climate … it’s one big intellectual circle j*rk … sure, people have devoted their entire careers to it … it’s just cold fusion on a huge scale because of the money involved …

hiskorr
June 29, 2023 7:51 am

Finally! Finally! Thank you, Mr. Frank! I’ve suspected as much for decades, but without the wit to explain it!

JCM
June 29, 2023 7:54 am

But this makes empirical curve fitting and arbitrary parameterization seem less informative. That’s annoying.

June 29, 2023 7:57 am

When it comes to climate, first we find the average station temperature for the day. Then we average up all those daily means to find the annual station average. Then we take all the annual station averages from the equator to the poles, winter, spring, summer and fall, and compute the annual global temperature. Then that is compared to a baseline and an anomaly is reported. Finally, all those anomalies for the last 150 years are plotted out to see if there’s a trend or not, and if there is … But that’s not the end of the story: all that averaging is reviewed every month, and corrections are made all the way back to January 1880.

This past month NASA made 1732 corrections to their Land Ocean Temperature Index, and yes, a change was made to the January 1880 entry (lowered from -18 to -19, in units of 0.01 °C).
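
The chain of averages Steve Case describes can be sketched in a few lines. Everything here (the station values, the baseline) is invented purely to show the structure:

```python
from statistics import mean

def daily_mean(tmax, tmin):
    # The conventional (Tmax + Tmin) / 2 "daily average"
    return (tmax + tmin) / 2

# One toy station: a year of daily means from invented Tmax/Tmin values
station_days = [daily_mean(25.0 + d * 0.01, 12.0) for d in range(365)]
station_annual = mean(station_days)                  # daily -> annual
# Combine with two other invented station annual averages
global_annual = mean([station_annual, 14.8, 15.1])   # stations -> "global"
baseline = 15.0                                      # invented base period
anomaly = global_annual - baseline                   # what gets plotted
```

Each step discards the spread of the values it averages, which is the information the replies below argue should be carried along as a variance.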
Reply to  Steve Case
June 29, 2023 5:04 pm

Variance of the data record is a clue to the uncertainty associated with the data set. The higher the variance, the less certain one can be of anything statistical analysis provides.

Yet with each average you list out, climate science studiously omits any variance calculations for the underlying data: for the variance of the daily averages, the monthly averages, the annual averages, the global average, or the baseline average.

Yet it is basic statistics that when you combine random variables, the variances of those variables add, meaning that the resulting data set is less and less certain. In a normal distribution the mean value is assumed to occur far more often than values away from the mean, but as the variance grows, that difference in frequency shrinks, and the certainty that the mean is *really* the most likely outcome gets less and less.

Just more basic science that the climate cult ignores – every single time. It would just be too damning to have to show the variance in the global average!
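
The variance-addition claim holds for independent random variables and is easy to check numerically. A sketch with arbitrary invented distributions:

```python
import random
import statistics

random.seed(1)
N = 100_000
x = [random.gauss(0, 2) for _ in range(N)]   # variance ~ 4
y = [random.gauss(0, 3) for _ in range(N)]   # variance ~ 9
total = [a + b for a, b in zip(x, y)]
# For independent variables the variances add: Var(X + Y) = Var(X) + Var(Y)
print(round(statistics.pvariance(total)))    # ~ 13 = 4 + 9
```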

hiskorr
Reply to  Tim Gorman
June 29, 2023 6:56 pm

Good points, however, even with the “daily average temperature” calculations, information is lost regarding variations in Tmax and Tmin. For example: CO2 (or something else) may cause a change in cloud cover which may both make Tmax cooler and Tmin warmer. Therefore, dealing only with Tavg variations conceals possible greater variations within daily temps. The same can be said in going from daily to annual temps, and from local to global. Each statistical operation loses information from within the data in order to describe the data set as a whole.

There is absolutely no reason to assume that changes in a single variable (CO2) will cause uniform variations in all temperature measurements, daily, annual, and global, as implied by GAT variation.

Reply to  Tim Gorman
June 29, 2023 7:26 pm

Most of the climate enthusiasts have shown themselves to be statisticians, not professional scientists.

June 29, 2023 8:10 am

Would an expert in applied mathematics please explain to me how you can use the RMS value of points on a graph to show the TRUE value of the “missing” data?

I ask this as a result of my experience way back in 1974 when I was taking a Statics/Dynamics course in college. HP had placed the HP-35 “Pocket Calculator,” the first, on the market the year before and the professor allowed the use of it in class and for exams. This created a problem. All of the problems and answers up until this time had relied upon the use of a Slide Rule. More than half of the solutions to the problems did not agree with the answers when performed on the HP-35, however, were correct when performed on the Slide Rule. Problem was that the calculator did not have the precision of the SR. The calculator was only accurate to seven (7) places! Thus, when working with small angles precision was lost. IMHO the same “error” effect would be presented in predictive calculations for small changes in a parameter that is predicting an event far in the future. E.g. How many decimals of accuracy does the velocity and trajectory angle of a rocket need to be to land on the moon, intact, when no corrections to either can be made mid-course?

Also, I have been taught that the RMS value of a sinusoidal alternating current represents the direct current/voltage that dissipates the same amount of power as the average power dissipated by the alternating current/voltage. I have heard that temperatures are taken at ~8 AM and ~8 PM, and also the day “High” and day “Low.” However, the RMS value of a sine wave = 0.707 of peak, a square wave = peak, and a triangular wave = 0.577 of peak. None of these is representative of the actual heating/cooling effect of the sun’s radiation or lack thereof. The actual heat-collection potential curve is closer to a triangular wave with no lower lobe – definitely NOT sinusoidal. This seems like another confusion factor originated by the uninformed and brainless, those lacking knowledge of advanced mathematics, or those with an agenda.

Reply to  usurbrain
June 29, 2023 5:09 pm

The “average” of a sine is not the same as the RMS value of the sine. The average is about 0.63 of the peak. The sun does follow a sine-wave path above a point as the earth turns, so the amount of radiation received at a point on the earth is also a sine wave. That is why, if you look at daytime temps, they are pretty much a sine wave from sunrise to about 3 pm, when the radiation from the earth overtakes the injection of energy from the sun, and the temperature after about 3 pm becomes an exponential or polynomial decay. (It’s 3 pm because of the thermal inertia of all the components involved in the system.)
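
The 0.63 figure is the mean of a rectified half sine, 2/π ≈ 0.637 of the peak, versus the RMS of 1/√2 ≈ 0.707 mentioned upthread. A quick numerical check:

```python
import math

n = 100_000
# Sunrise-to-sunset model: one half cycle of a unit-peak sine
half_sine = [math.sin(math.pi * k / n) for k in range(n)]
mean = sum(half_sine) / n
rms = math.sqrt(sum(v * v for v in half_sine) / n)
print(round(mean, 3))  # 0.637, i.e. 2/pi of the peak
print(round(rms, 3))   # 0.707, i.e. 1/sqrt(2) of the peak
```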

Reply to  Tim Gorman
June 29, 2023 6:03 pm

Nitpick – the average of a true sine wave is 0.0000000…
The portions above and below 0 are equal; if not, then it is not a sine wave.

Reply to  usurbrain
June 30, 2023 4:36 am

The temperature profile during the day is not a true sine wave. It is a truncated one, typically ending around 3pm local time. Thus there *will* be an average. In addition, the part of the temperature curve that is sinusoidal always has an offset, a DC value if you will. That also keeps the sinusoidal temperature curve from averaging to zero.

bdgwx
June 29, 2023 9:47 am

Where did you get the formula used in equation (5) and (6) in your publication?

Is it Bevington 4.22 like what you told me last time?

Reply to  bdgwx
June 29, 2023 10:15 am

Standard RMS bdgwx.

bdgwx
Reply to  Pat Frank
June 29, 2023 10:56 am

Then you used the wrong formula. Refer to the law of propagation of uncertainty equation 16 for the general case and equation 10 for the uncorrelated case in JCGM 100:2008. Notice that it reduces to u(y) = u(x) / sqrt(N) when y = Σ[x_i, 1, N] / N and u(x) = u(x_i) for all x_i and r(x_i, x_j) = 0. This can also be found in Bevington 3.13 for the general case or 3.14 for the uncorrelated case. Alternatively you can use equation 4.14 or 4.23 if the goal is to determine the uncertainty of the average. You can use the NIST uncertainty machine to verify this if there are any doubts.

Reply to  bdgwx
June 29, 2023 11:17 am

The formula for RMS is used to calculate the RMS, bdgwx. That’s what I did.

As usual, you don’t know what you’re talking about. As usual, that doesn’t stop you from holding forth.

Reply to  Pat Frank
June 29, 2023 1:09 pm

He lives for telling people to “play around” with his cherished NIST uncertainty machine, refusing to acknowledge that if garbage is input to the machine, the output will be garbage.

bdgwx
Reply to  Pat Frank
June 29, 2023 1:26 pm

That’s the problem. RMS is not the same thing as the uncertainty of the average. See Bevington section 4. Don’t believe me or Bevington? Use the NIST uncertainty machine and prove this out for yourself.

Reply to  bdgwx
June 29, 2023 1:47 pm

Clown, is this all you can do when people call you out on your nonsense? Over and over and over and over…

And this reminds me, it is almost time for you to spam those lame IPCC graphs again, for the 90th time.

Reply to  bdgwx
June 29, 2023 4:15 pm

RMS is the root-mean-square of the errors, not the mean error of an average.

bdgwx
Reply to  Pat Frank
June 29, 2023 5:15 pm

“RMS is the root-mean-square of the errors”

I know. That’s the problem. All you’ve done is compute the quadratic mean of the individual measurement uncertainties. And since you’ve assumed they all have the same uncertainty it is hardly shocking that the quadratic mean, like the arithmetic mean, is also the same.

“not the mean error of an average.”

I didn’t say anything about “the mean error of the average”. I’m talking about the uncertainty of the average as described by Bevington via equation 4.14 which is obviously different than RMS. I’ll repeat. RMS is not the same thing as the uncertainty of the average.
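
Numerically, the two quantities being argued over here are not the same thing. With identical per-reading uncertainties (illustrative values u = 0.5 and N = 25), the quadratic mean stays at u while the uncorrelated-propagation result shrinks by 1/√N; which one is appropriate depends on whether the errors are random, uncorrelated readings of the same measurand, which is the actual point in dispute:

```python
import math

u = 0.5    # illustrative stated uncertainty of each reading
N = 25

# Quadratic mean (RMS) of the N identical uncertainties: stays at u
rms = math.sqrt(sum(u**2 for _ in range(N)) / N)

# The uncorrelated-propagation result for the mean: shrinks by 1/sqrt(N)
u_of_mean = u / math.sqrt(N)

print(rms, u_of_mean)   # 0.5 vs 0.1
```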

Reply to  Pat Frank
June 29, 2023 5:30 pm

He doesn’t understand the difference!

Reply to  bdgwx
June 29, 2023 5:30 pm

You *still* insist on cherry-picking, don’t you?

In Section 4.1 Bevington states:

“In Chapter 2 we defined the mean μ of the parent distribution and noted that the most probable estimate of the mean μ of a random set of observations is the average x_bar of the observations. The justification for that statement is based on the assumption that the measurements are distributed according to the Gaussian distribution.”

He goes on to talk about the “Calculation of the Mean” and the “Estimated Error in the Mean”. Both of these are related to the SEM – i.e., how closely you can estimate the mean from samples. It is *NOT* the uncertainty of the mean; it is only how closely you have calculated the mean.

He goes on to say: “Equation 4.12 might suggest that the error in the mean of a set of measurements x_i can be reduced indefinitely by repeated measurements of x_i. … There are three main limitations to consider: 1. those of available time and resources, 2. those imposed by systematic errors, and 3. those imposed by nonstatistical fluctuations.”

You *never* want to consider the limitations, and always fall back on the assumption that how closely you can calculate the mean is how uncertain that mean is physically – while ignoring systematic errors and nonstatistical fluctuations.

You just assume that all errors are random (even systematic ones), Gaussian, and that they cancel.

You simply don’t live in the real world. Neither do most climate scientists.

Reply to  bdgwx
June 29, 2023 5:20 pm

Bevington’s 3.13 is the standard root-sum-square equation including a covariance term. From Bevington: “The first two terms in the equation are the averages of squares of deviations weighted by the squares of the partial derivatives, and may be considered to be the averages of the squares of the deviations in x produced by the uncertainties in u and in v, respectively.”

He goes on to say that if the measured quantities u and v are uncorrelated, then, on the average, …. we should expect the term to vanish in the limit of a large random selection of observations.

You are back to cherry-picking things you think might prove you right. They don’t. You need to try harder.

BTW, Bevington, in his lead-in to the book STATES SPECIFICALLY that the following analyses assume totally random and Gaussian measurement errors. Something that simply doesn’t apply to the real world of measurement using field instruments whose calibration is questionable at best.

sherro01
Reply to  Tim Gorman
June 29, 2023 6:08 pm

Tim,
Thank you particularly for that last paragraph.
bdgwx repeatedly fails because he deals with theoretical numbers when the numbers from real life have warts and all.
As Pat Frank has noted. As you have noted over and over. As Bill Johnston has noted over and over. As I have noted over and over.
Nick Stokes needs to think about the difference between theoretical numbers and field measurements also.
Geoff S

Nick Stokes
Reply to  sherro01
June 29, 2023 6:57 pm

“Nick Stokes needs to think about the difference between theoretical numbers and field measurements also.”

So what different arithmetic is to be done?

As I complained above, no-one espousing these weird notions ever does an actual worked example, like I did with the Melbourne data. I’m just told that the Melb data, direct from the AWS, is somehow not physical.

Reply to  Nick Stokes
June 29, 2023 11:21 pm

is somehow not physical.

No. Inaccurate.

sherro01
Reply to  Nick Stokes
June 30, 2023 10:55 pm

Nick.
The arithmetic that is missing is in the quantitative analysis of all factors that affect the final result. Pat Frank here has worked hard to gather such data for instruments like LIG thermometers. I have not seen such a comprehensive list before. Surely, it contains fundamentals for a useful estimate of uncertainty.
Without that, you cannot discern whether numbers cancel out, because you do not have all of the numbers, so you have to make guesses.
I do not know of ways to estimate the uncertainty of guesses.
Geoff S

bdgwx
Reply to  sherro01
June 29, 2023 7:33 pm

GS: bdgwx repeatedly fails because he deals with theoretical numbers when the numbers from real life have warts and all.

So if NIST, JCGM, etc. and the equations and tools they publish produce incorrect results for “real life” scenarios, then where do I go for the math that works for “real life” scenarios?

Reply to  bdgwx
June 29, 2023 9:06 pm

They don’t; rather it is the ridiculous climastrologers who abuse these tools with their religious dogma.

Without 1/root-N, they can no longer ascribe deep meanings to every little squiggle in the UAH graph.

Reply to  bdgwx
June 30, 2023 4:52 am

Try Chapter 3 in John Taylor’s book. Chapters 4 and on typically consider situations where the uncertainty is random and Gaussian, just as Bevington does.

The very first sentence in Chapter 4 is:

“We have seen that one of the best ways to assess the reliability of a measurement is to repeat it several times and examine the different values obtained.”

This is specifically talking about multiple measurements OF THE SAME THING. That simply doesn’t apply to single measurements of multiple things!

He goes on to say: “As noted before, not all types of experimental uncertainty can be assessed by statistical analysis based on repeated measurements. For this reason, uncertainties are classified into two groups: the random uncertainties, which can be treated statistically, and the systematic uncertainties which cannot. This distinction is described in Section 4.1. Most of the remainder of this chapter is devoted to random uncertainties. Section 4.2 introduces, without formal justification, two important definitions related to a series of measured values x1, …, xn, all of single quantity x.”

Note carefully that very last sentence. Bevington says the exact same thing in his book.

The climate alarm cult, and that includes you, tries to shoehorn single measurements of different things, typically with systematic biases, into the same shoe as multiple measurements of the same thing. You do so by making the totally unjustified assumption that all uncertainty is random and Gaussian and cancels no matter what.

You, Nick, bellman, etc have never, NOT ONCE, worked through all the examples in Taylor’s book. You just cherry-pick things that look like it supports your unjustified treatment of the temperature data.

Reply to  Tim Gorman
June 30, 2023 5:12 am

In this very thread Nick pulled out the NIST TN and declared: “See? Averaging reduces uncertainty!” without understanding what he was referencing.

Reply to  karlomonte
June 30, 2023 5:44 pm

The point is, averaging can reduce uncertainty if several criteria are observed, not the least of which is that that same thing must be measured by the same measuring device, under unchanging environmental conditions, at least 100 times to justify one more significant figure. One can do that in a laboratory with a physical object, chemical solution, or electronic device. It is the essence of the idea of a ‘controlled experiment.’

However, in the real world, where none of the environmental parameters are under the control of the experimenter, one is primarily an observer, and the best that can usually be done is to make one high-accuracy/precision measurement with one calibrated instrument, or, simultaneously take several measurements with measuring devices that are very similar, but not identical.

It is the unwarranted assumption that temperatures measured at different times, in different places, with different instruments, can be treated as in the first case above, that is the issue.
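
Clyde’s √100 rule of thumb for repeated measurements of one stable quantity can be illustrated with simulated repeats (all values invented); note it says nothing about any systematic offset:

```python
import math
import random
import statistics

random.seed(2)
true_value = 10.0
sigma = 1.0   # invented per-reading random scatter
repeats = [true_value + random.gauss(0, sigma) for _ in range(100)]

# 100 repeats of the SAME measurand under the same conditions cut the
# random spread by sqrt(100) = 10: roughly one extra significant figure.
u_single = statistics.stdev(repeats)
u_mean = u_single / math.sqrt(len(repeats))
print(round(u_single, 2), round(u_mean, 2))   # ~1.0 vs ~0.1
```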

Reply to  Clyde Spencer
June 30, 2023 6:00 pm

Exactly! And these people have been told this a zillion times, yet somehow it can’t penetrate their a priori pseudoscience.

Reply to  Clyde Spencer
July 1, 2023 6:30 am

“…not the least of which is that that same thing must be measured by the same measuring device.”

Same tripe, different day. For decades, petroleum reservoir simulators have “averaged” – sometimes with weightings, sometimes not – dozens of geological and rheological parameters for their inputs. They come from different instruments, at different times, most without being gathered at the same point in the subsurface. These values all have resolutions – often non-normal – and are evaluated together. They also often correlate, which is taken into account. And many of these different parameters sometimes cross-correlate, sometimes don’t – think porosity and permeability.

But somehow, in spite of the fact that the inputs are certainly worse than those under discussion here, petroleum reservoir simulation, using many of the same techniques as those used in spatial temperature and sea level evaluations, has firmly supplanted the empirical relationships used by cranky doubters like me for most of my career. And in toto, the oil and gas bizzes are trillions better off for it.

Reply to  bigoilbob
July 1, 2023 6:43 am

And this morning — a blob word salad!

*golf clap*

Reply to  karlomonte
July 1, 2023 6:48 am

“…a blob word salad”

Ah, the all purpose, thought free rejoinder you keep on speed paste. Right up there with your middle school “Hey, F*** you, man!“. BTW, sorry about the lunch money. I’m not that guy any more…

Reply to  bigoilbob
July 1, 2023 6:56 am

Not a single word from blob about Pat’s post or paper, just an inane snipe toward Clyde S.

Reply to  karlomonte
July 1, 2023 8:32 am

Correct. I was responding to a fact free claim by Mr. Spencer. Refutation of it pokes holes in the larger claims.

Reply to  bigoilbob
July 1, 2023 7:14 am

Do the measures of petroleum reservoirs change from minute to minute or are they stable over time?

Big difference!

Reply to  Tim Gorman
July 1, 2023 8:27 am

Depletion. Water influx. Fracs. Steam. Nuff said?

Bigger pic, I was responding to Mr. Spencer’s ridiculous limiting assumptions, which were not only “at the same time”. The techniques for snagging a permeability estimate alone number over a dozen…

Reply to  bigoilbob
July 1, 2023 9:17 am

And none of your factors vary during a day, let alone a month. Tell us the resolution you get on your simulation – barrels, fractions of a barrel, gallons? Remember we are talking about fractional percents. At 70 degrees and 0.001 uncertainty you have 0.001/70 = 1.4×10^-5; that’s 0.0014%. Are your simulations that accurate?

Reply to  Jim Gorman
July 1, 2023 9:25 am

No, much wider. We commonly run many realizations to accommodate the referenced error distributions. But, as with climate simulations, perfectly fit for purpose. Your data is orders of magnitude better than ours w.r.t. what you need to use it for…

bdgwx
Reply to  Clyde Spencer
July 1, 2023 12:23 pm

CS: which is that that same thing must be measured by the same measuring device, under unchanging environmental conditions

Says who? And why is NIST, JCGM, etc. not following this rule?

Reply to  bdgwx
July 1, 2023 2:03 pm

Says who? The NIST, JCGM, etc.

They specify a MEASURAND. That’s singular. It does *NOT* imply a conglomeration of single measurements of multiple different things.

Taylor, chptr 4: “Most of the rest of this chapter is devoted to random uncertainties. Chapter 4.2 introduces, without formal justification, two important definitions related to a series of measured values x1, …, xn, all of the single quantity x.”

A single measurand “x”. Not multiple measurands x1, …, xn but multiple measurements of x.

Bevington is more wordy but says the same thing: “If we take a measurement x1 of a quantity x, we expect our observation to approximate the quantity, but we do not expect the experimental data point to be exactly equal to the quantity. If we make another measurement, we expect to observe a discrepancy between the two measurements because of random errors, and we do not expect either determination to be exactly correct, that is, equal to x.”

Again, multiple measurements of the SAME THING. Single measurements of different things do not describe a distribution around a true value. They just don’t. And if they don’t, then an average derived from them is basically meaningless except for mental masturbation by climate scientists. The whole concept of a global average is not fit for purpose from the very get-go, because it is based on averaging different things.

NIST, JCGM, etc all recognize this. It is *you* that keeps trying to fit what they have developed into a round peg you can jam into a square hole using statistical tools and descriptors that simply aren’t fit for the purpose you are trying to impose on them.

Reply to  Tim Gorman
June 30, 2023 2:37 pm

You, Nick, bellman, etc have never, NOT ONCE, worked through all the examples in Taylor’s book.

If there were an exercise that showed how averaging increases uncertainty you would point me to it. Expecting me to work through every exercise in a book just in case there may be something that proves your point is a waste of time.

Reply to  Bellman
July 1, 2023 5:15 am

None of the standard authors address averaging of different things because an average of different things is *NOT* a measurement, it is a statistical descriptor. They all *do* address the standard deviation of the sample means obtained from a random, Gaussian distribution – which assumes complete cancellation of uncertainty which leads to the average being the true value of the population. The standard deviation of sample means is only a measure of how close you are to the population mean and only has meaning if the population is random and Gaussian.

You and the rest simply can’t seem to make the leap that measurements from different things is DIFFERENT than measurements of the same thing.

It always comes back to the meme of: “All uncertainty is random, Gaussian and totally cancels.”

You can argue you don’t assume that but it just shines through everything you post. This post is no different.

Reply to  Tim Gorman
July 1, 2023 6:46 am

It’s the Way Of The Trendologists — pseudoscience because their conclusions are made prior to analysis, and nothing must be allowed to interfere with the conclusions.

Reply to  Bellman
July 1, 2023 3:11 pm

Yes, it’s true, you can’t read English.

Reply to  karlomonte
July 1, 2023 3:24 pm

¿Qué?

Williamw247
June 29, 2023 9:59 am

The battle for truth is raging in every discipline. Thank you to the brave people waging it here against the Climate Communists.

June 29, 2023 12:03 pm

Hmm, however, satellites, radiosondes, and GPS radio occultation also show warming, and at similar rates. I wonder how they manage to reproduce the same mistakes.

JCM
Reply to  Javier Vinós
June 29, 2023 2:41 pm

I don’t think this paper is meant to discount the possibility of surface warming.

However, there is a discontinuity between estimated warming of the surface and that of the lower and mid tropospheric layers during the satellite era.

The surface is warming too fast, or maybe the free troposphere is warming too slow.

MIPs seem to be calibrated to the surface estimates, while the satellite tropospheric layers are allowed to deviate.

This is a problem considering the vertical tropospheric profiles are not matching model expectation. It’s usually explained away as systematic bias in MSU tropospheric satellite retrievals.

However, there are at least three distinct possibilities:

1) That the problem truly is satellite bias.

2) That there is a bias created by surface estimates.

3) That there are other physical mechanisms, not yet described, needed to explain the discrepancy.

Reply to  JCM
June 29, 2023 3:13 pm

Such as the discontinuous, spotty nature of the NOAA satellite sampling at latitudes less than about 60°.

Reply to  Javier Vinós
June 29, 2023 5:33 pm

Are they measuring the same thing? Or different things? What is actually happening where?

Do satellites, radiosondes, and GPS actually measure surface temperatures? Or only atmospheric temps?

Reply to  Tim Gorman
June 29, 2023 7:33 pm

They measure a proxy for temperature in the atmosphere, the details of which I forget. It isn’t a direct temperature measurement, though – accuracy depends on how closely the proxy correlates with air temperature.

Reply to  Javier Vinós
June 30, 2023 12:56 am

In Pat Frank’s paper it is mentioned that the satellite measurements were “calibrated” based on surface SST measurements from ships which at best were full of non-quantifiable systematic errors. So in any case, we’re back to START. The first thing to do would be to calibrate the satellites with carefully measured actual data where the errors are quantifiable.
There are two series of satellite measurements which don’t match exactly to one another (different instrumentation or whatever). This could also be due to calibration errors.

sherro01
Reply to  Javier Vinós
June 30, 2023 1:02 am

Javier,
As Bill Johnston shows on the Bomwatch blog, usual daily temperatures have a variance that mathematically has a big component from rainfall variation. Rain cools. The satellite microwave measurements might also have a moisture variance component, but I cannot recall any study of it.
It is not likely that surface LIG temperatures have identical mechanisms to the microwave ones, so there should be a difference. Which is more correct remains debatable because of an absence of comparable absolutes, such as can exist in pure number stats, but not in the physical measurement world under discussion. Geoff S

Reply to  Javier Vinós
June 30, 2023 9:37 am

Noice.

Crickets.

Reply to  Willard
July 1, 2023 2:32 pm

Satellites were calibrated against in situ SSTs during the 20th century, Willard. Their surface temperatures are not independent. Mentioned in the paper. Maybe Javier missed that.

The paper is about the historical surface air temperature record to about year 2000. Maybe Javier missed that limit, too. It seems you did.

June 29, 2023 12:29 pm

…, and an uncertainty diminishing as 1/ÖN.

I presume that the symbol “Ö” was meant to be “sqrt( ).”

Reply to  Clyde Spencer
June 29, 2023 1:45 pm

You’re right, Clyde. Looks like the translation software didn’t recognize the special ‘sqrt’ character. All the Greek lettering is wrong, too.

youcantfixstupid
June 29, 2023 1:27 pm

Well damn! Good job Pat. I had forgotten about your ‘promise’ to take on the instrument record error after your last seminal paper on model error.

You are effectively the reason I came to WUWT and keep reading it in so much as I stumbled upon your description of your challenges getting anyone to understand, much less publish your paper on climate model error.

Ultimately, the proper handling of propagation of errors in a formula and the proper handling of measurement error, especially the latter, are what make GOOD science very hard. It’s why I wanted to stick to theoretical physics… no errors other than in my calculations. 🙂

In the undergrad physics course I taught when taking my degree, it was the handling of errors that I dinged people for most, not whether they demonstrated an understanding of the physical laws they were testing. They got the latter from the Prof & the tests they had to take. I wanted them to come away with an understanding that dealing with error would be the single greatest challenge in any career they might take up, whether science or not.

In any case I fear that this 2nd seminal paper will go unnoticed & ignored by the ‘powers that be’. When your whole world of action & study is basically thrown asunder in 2 papers it’s hard for people to accept that reality.

It drives me to no end of grief to read postings even by ‘skeptics’ on WUWT that don’t include a representation of the model/measurement error or unrealistically small values.

With that said I still hold hope that your papers will somehow make a difference in the community.

Reply to  youcantfixstupid
June 29, 2023 2:08 pm

Thanks, ycfs. Everyone I worked with killed themselves to get their data right. They sweated every bit of systematic error, which always showed up, and lived within instrumental resolution.

So it really frosted me to see all of that just waved away with self-serving assumptions in consensus climatology. Making things easy for themselves.

As much as anything, the abuse of method kept me involved in this mess.

Anyway, thanks for your kind words. Your knowledgeable support is very welcome.

youcantfixstupid
Reply to  Pat Frank
June 29, 2023 2:25 pm

First I need to correct 1 minor error in my previous post… it should say ‘undergrad physics LAB course’. I am not a Physics professor, never was one & didn’t play one on TV.

And frankly you are my ‘hero’ of sorts because of your dogged determination to write such detailed papers on something very important but for most people so ‘mind numbing’ that they couldn’t get through the first sentence of an introductory text on the subject. 🙂

You remind me of all the experimental physicists at a medium energy accelerator in Canada I hung out with in taking my degree. I couldn’t have done what they did daily. Checking, rechecking, updating calibrations. No detail was too small for these guys especially given the subject of the experiments & trying to detect very small effects.

In any case keep up the great work.

Reply to  youcantfixstupid
June 29, 2023 4:53 pm

Your experience is why physicists will commonly report findings with 6-sigma uncertainty and self-identified climatologists rarely even provide 1-sigma.

Reply to  youcantfixstupid
June 29, 2023 5:38 pm

You and your students weren’t the only ones to learn about uncertainty in a LAB course. Most engineers do. Many physics majors don’t because the professors don’t understand it either. It doesn’t help when not a single example in any textbook I own on physics or statistics analyzes experimental results with values given as “stated value +/- uncertainty”.
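For the record, the “stated value +/- uncertainty” convention is easy to demonstrate. A minimal sketch (the helper name and the example numbers are illustrative, not from any textbook or standard library): round the uncertainty to one significant figure, then round the stated value to the same decimal place.

```python
import math

def fmt_measurement(value, uncertainty):
    """Format 'value ± uncertainty' with the uncertainty rounded to one
    significant figure and the value rounded to the same decimal place."""
    if uncertainty <= 0:
        raise ValueError("uncertainty must be positive")
    exp = math.floor(math.log10(uncertainty))  # decimal place of the leading digit
    digits = max(0, -exp)
    u_r = round(uncertainty, -exp)
    v_r = round(value, -exp)
    return f"{v_r:.{digits}f} ± {u_r:.{digits}f}"

print(fmt_measurement(14.3178, 0.382))  # -> 14.3 ± 0.4
print(fmt_measurement(0.94, 1.92))      # -> 1 ± 2
```

The second example shows the convention even when the uncertainty exceeds the stated value itself.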

Reply to  youcantfixstupid
June 30, 2023 8:46 pm

Thanks, ycfs. I’m humbled.

a_scientist
Reply to  Pat Frank
June 29, 2023 8:41 pm

Pat, thanks for this paper. I really enjoyed it; well written, and very interesting in the historical aspects of thermometry. I have been to Thuringia and know their history of glass making generally and thermometers specifically. (see table 2)
Some great materials science there, and great 19th-century physical measurements using the ice point by such scientific giants as Joule, whose name is on our basic unit of energy.

As for the uncertainty complainers, I bet these precision Engineers would run circles around these amateurs.

Reply to  a_scientist
June 30, 2023 3:28 pm

Thanks for the kind words, a_s. I’m glad you enjoyed it. Working through Joule drift and watching results drop out of the analysis was enjoyable.

At the start, I knew lead glass and soft glass existed, but had no idea about any of the details — the mixed alkali effect, etc. It took some research tracking down all those glass compositions.

One indelible impression is the competence and determination of the 19th century scientists. They worked hard, they paid attention to detail, they were candid about their instruments. Their contrast with a certain modern lot is striking.

Focused attention of competent and uncompromising precision engineers would be very welcome.

Reply to  Pat Frank
July 1, 2023 6:56 am

I often try to relate uncertainty in temperature data to what is found with car engines, especially racing engines. If you could measure the cylinder wall diameter in 10,000 engines with numerous different applications (short track racing, drag racing, short-haul freight carrying, long-haul freight carrying, street racing, endurance racing, and on and on and on), would the average value of the wear actually tell you anything if you don’t know how all of them were measured and recorded? Some using micrometers, some using analog calipers, some using digital calipers, some maybe even using a tape measure or ruler. There would be so many avenues for uncertainty to creep in that the average would be of little use. Oil change intervals, oil type, piston ring specs, etc. – all unknowns, and hardly able to be considered random and Gaussian.

Yet we are expected to believe that the global average temperature, calculated from data with at least as much uncertainty as the cylinder wear in a universe of car engines, is somehow fantastically derived with almost infinite precision by assuming all error is random, Gaussian, and cancels.

Even more basic, the standard deviation of the sample means (the SEM if you will) only tells you how close you are to the population mean. It tells you nothing about the accuracy of the population mean unless you assume all measurement uncertainty is random, Gaussian, and cancels. Even if you had every single member of the total population at hand and could calculate the population average with infinite precision you still wouldn’t know how accurate it is without considering the uncertainty associated with each and every member in the data set and propagating that uncertainty onto the average you calculate. It is that propagated uncertainty that determines the accuracy of the mean.
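The SEM-versus-accuracy point above can be demonstrated in a few lines. A sketch with made-up numbers (a true value of 15.0, a shared systematic offset of 0.3, random reading noise of 0.5): the SEM shrinks as 1/sqrt(N), while the error of the mean never drops below the shared offset.

```python
import random

random.seed(42)

TRUE_VALUE = 15.0   # the quantity being measured (hypothetical)
SYSTEMATIC = 0.3    # shared offset, identical in every reading (hypothetical)
NOISE_SD = 0.5      # random reading error (hypothetical)

for n in (10, 1000, 100000):
    readings = [TRUE_VALUE + SYSTEMATIC + random.gauss(0, NOISE_SD)
                for _ in range(n)]
    mean = sum(readings) / n
    sd = (sum((x - mean) ** 2 for x in readings) / (n - 1)) ** 0.5
    sem = sd / n ** 0.5
    # The SEM shrinks as 1/sqrt(n); the error of the mean does not drop
    # below the shared systematic offset.
    print(f"n={n:6d}  SEM={sem:.4f}  |mean - true|={abs(mean - TRUE_VALUE):.4f}")
```

No amount of averaging here recovers the true value, because every reading carries the same offset.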

June 29, 2023 1:38 pm

Based on this analysis of uncertainty, the generally reported measurement changes are less than the uncertainty by a considerable amount. Therefore, the actual warming since 1850, in degrees C, could be as much as 3C+, no?

An interesting conjecture but, I suspect, impossible due to lack of data, if not for other reasons in such a complicated system:
If there were enough physical data on changes in the amount of ice, changes in tree line, changes in locations of animals and plants northward (northern hemisphere), and other “stuff”, might it be possible to calculate a more accurate value for temperature change?

Reply to  AndyHce
June 29, 2023 7:41 pm

“Impossible due to lack of data.” is one of the messages of Pat Frank’s paper. The temperatures given are merely a mathematical probability of what they might be but, despite their propaganda, we simply don’t know what the precise temperatures actually are.

Reply to  AndyHce
June 30, 2023 3:20 am

Yes.

The complete change in air temperature between 1900-2010 is then 0.94±1.92 C.

So we could be at 2.86°C in just 110 years.
That’s about 0.3°C warming every decade!

It’s worse than we thought. Almost a degree of warming by 2050!!!

Joe Crawford
June 29, 2023 1:43 pm

Thanks Dr. Frank, I agree with Monckton. Between this and your 2019 paper on the propagation of error you have placed the final nails in the coffin for CAGW. Of course, the adherents of CAGW will refuse to accept that fact until they have developed a replacement; their livelihood depends on it. Besides, there’s no way they can leave the gaggle of true believers they have assembled until they have been converted to a follow-on belief.

Reply to  Joe Crawford
June 29, 2023 4:12 pm

… you have placed the final nails in the coffin for CAGW

Now, where have I heard that before?

Reply to  TheFinalNail
June 29, 2023 7:42 pm

No idea but you hum it and I’ll join in!

June 29, 2023 4:52 pm

Pat, what would be your critique of this temperature chart (Hansen 1999)?

comment image

Fit for purpose, or not fit for purpose?

Reply to  Tom Abbott
June 29, 2023 5:16 pm

Temperature numbers accepted at face value, Tom. No error bars, no uncertainty bounds.

Instrumental detection limit and visual repeatability alone would add about ±0.4 C (95%) up through about 1980 and about ±0.2 C to 2000 (assuming instantaneous switch to MMTS at 1981), darkening much of the face of the plot.

The uncertainty would about halve, were temperatures reported rather than anomalies.

And Joule-drift renders everything prior to 1900 unreliable.

sherro01
June 29, 2023 6:24 pm

Pat,
I’ll read your paper later today.
From your comments already, it looks like a much-needed “return to the basic senses” paper.
You criticise in closing the professional societies and groups who should have made your paper unnecessary, had they acted properly. By accident of timing, I hope to finish today a 3-part article on the subject of the corruption of US scientists by some with money, power and beliefs. I use the Rockefeller Foundation as an example, tracing the history of the Linear No Threshold method of relating toxin dose to harm. Ed Calabrese at Amherst has done decades of diligence on this.
I am hoping that WUWT will accept it.
Pat, thank you for your persistence in injecting reality into major issues like climate models and thermometry, which history will show were little more than fairy tales. Geoff S

Reply to  sherro01
June 29, 2023 9:49 pm

Thanks, Geoff. Let me know what you think. 🙂

June 29, 2023 7:08 pm
Reply to  Steven Mosher
June 29, 2023 11:09 pm

The academic editor is Japanese, Steve. Do you want to denigrate him, too?

Reply to  Steven Mosher
June 30, 2023 3:26 am

BTW.
China led the world in the number of patents this year. The quickest-advancing nation should not be disparaged for its technical expertise.

https://worldpopulationreview.com/country-rankings/patents-by-country

It’s probably because they have the manufacturing industry and thus real-world problems to solve.

Reply to  MCourtney
July 2, 2023 5:29 am

It was reported that most of the technology that was found on the Chicom spy balloon was American.

Reply to  Tom Abbott
July 2, 2023 7:56 am

That is not surprising. The balloon was never going to be recovered by China.
If they had more advanced technology they wouldn’t put it on that.

Reply to  Steven Mosher
June 30, 2023 4:29 am

mosh is a racist, what a surprise.

bdgwx
Reply to  Steven Mosher
June 30, 2023 7:51 am

It is also a predatory journal that publishes over 800 articles per day.

Reply to  bdgwx
June 30, 2023 9:00 am

Idiot.

Loren Wilson
June 29, 2023 7:36 pm

To me, the first table giving the readability of a mercury in glass thermometer is really its reading precision, not its accuracy. The accuracy is a function of the construction of the thermometer and being filled correctly, and then being calibrated properly. In other words, you can read the thermometer to ±0.3°C but if it has drifted or is otherwise out of calibration, you do not know the true temperature to 0.3°C. Your reading is less accurate than that. This error does not disappear by averaging lots of readings. The reading uncertainty may be reduced if it is a random error. The out-of-calibration error is not reduced.

Reply to  Loren Wilson
June 30, 2023 5:13 am

Yep. But the climate alarm cultists just assume, with no justification, that the out-of-calibration error cancels out.

bdgwx
June 29, 2023 8:13 pm

LiG Metrology, Correlated Error, and the Integrity of the Global Surface Air Temperature Record has passed peer-review and is now published in the MDPI journal, Sensors (pdf).

MDPI is a predatory journal. And with a publishing rate of over 800 articles per day I think you’re being awfully cavalier when you say the publication passed peer-review. Who knows…if there is a class action lawsuit against MDPI like what we saw with OMICS then one day you might be able to claw back some of your publishing fees.

Reply to  bdgwx
June 29, 2023 9:10 pm

I am psychic…

Reply to  bdgwx
June 29, 2023 9:49 pm

Speaking in ignorance, bdgwx.

Richard S Courtney
Reply to  Pat Frank
June 30, 2023 8:46 am

I am amused by the posts from bdgwx.

I fail to understand why some people are annoyed by the posts from bdgwx when bdgwx is an internet troll who only posts nit-picking snark, sneers and smears from behind the cowards’ screen of anonymity.

The future may impose dementia on any of us, so I suggest we try to be kind and laugh at the demented behaviour of bdgwx instead of getting annoyed by it.

Reply to  Richard S Courtney
June 30, 2023 9:32 am

He is 100% clue-resistant.

While I agree with your assessment, the problem is that he abuses standard publications and twists them, falsely claiming they support his a priori irrational ideas about “climate”. People reading his nonsense who are not familiar with these subjects could be led into viewing him as some kind of expert, if he is not challenged.

Reply to  Richard S Courtney
June 30, 2023 9:15 pm

While gadflies can sometimes be useful, they are still invariably annoying. They should work harder at being more informative and less annoying.

Reply to  bdgwx
June 30, 2023 1:10 pm

MDPI publishes 427 journals in total. Granting your rate, that’s just under two articles per day per journal. Hardly excessive.

Sensors registered a 3.9 impact factor in 2022, which is a good number. It seems you have no case, bdgwx.

bdgwx
Reply to  Pat Frank
June 30, 2023 1:20 pm

It wasn’t my case. It comes from predatoryreports.org.

Reply to  bdgwx
June 30, 2023 2:43 pm

Just shows you shouldn’t take other’s opinions uncritically, bdgwx.

Reply to  bdgwx
June 30, 2023 3:04 pm

A 2020 article about MDPI

“I originally studied the MDPI portfolio expecting to find something nefarious behind their success, and I intended to contrast them to other “healthier” OA portfolios. Upon further study, I concluded that my expectations were misplaced, and that MDPI is simply a company that has focused on growth and speed while optimizing business practices around the author-pays APC (article processing charge) business model. As I discovered later, I was not the first skeptic to make a U-turn (see Dan Brockington’s analysis and subsequent follow-up). What follows is an analysis of MDPI’s remarkable growth.”

Reply to  bdgwx
June 30, 2023 6:03 pm

Sounds a lot like factcheck.com…

Reply to  bdgwx
July 1, 2023 6:42 am

Dr. Frank is correct in that, given the business plan of MDPI, the number of daily pubs does not necessarily make it predatory. Rather, the claim is amply backed up by your link.

The best metric for the actual contribution of this paper to our understanding will be its citations. Unless you are of the Dr. Evil worldwide-conspiracy bent, with tens of thousands of scientists ignoring their competitive drive to prevent a Stanford sinecurist with time on his hands from making a groundbreaking contribution (one they could exploit for their own benefit), this article will be widely cited. Maybe we could check back at Dr. Frank’s similar pubs to see how well he did – without the many self-cites…

Reply to  bigoilbob
July 1, 2023 6:57 am

Another clown in the rings now…

Reply to  bigoilbob
July 1, 2023 11:10 am

The best metric for the actual contribution of this paper to our understanding will be its citations

Back in 2001, I published a paper on the iron-molybdenum-sulfur cofactor of Nitrogenase.

It presented a solution to what had been a vexed and unsolved problem of ligand-binding, it reconciled 20 years of data that had lain unexplained in the literature, and it proposed a rational structural model for the solution structure of the cofactor.

The behavior of the cofactor was more complicated than anyone had expected. People left the field for other research endeavors, after that paper came out.

Since then, it’s gotten only 5 citations.

That work was critically central to the field as it was then, got me an invitation to an exclusive conference, and earned the respect of every first class scientist working on Nitrogenase.

All this is a long-form observation that you don’t know what you’re talking about, bob.

As regards LiG Met., why would anyone expect it to be cited by hubristic incompetents when their careers are going down in flames consequent to it?

Work stands on its internal merits. Not on its citation index or whether a field shown obviously partisan chooses to notice it.

Reply to  Pat Frank
July 1, 2023 12:10 pm

All this is a long-form observation that you don’t know what you’re talking about, bob.

As regards LiG Met., why would anyone expect it to be cited by hubristic incompetents when their careers are going down in flames consequent to it?

Amen!
+7000

Reply to  bigoilbob
July 1, 2023 11:25 am

How many citations do papers from Einstein (relativity) or Planck (heat) or Newton (motion) or Maxwell (EM) receive today? Yet they are seminal works that have withstood the test of time.

You are making a specious argument that has nothing to do with the validity of the paper. You would fail a high-school forensics/debate class.

Reply to  Jim Gorman
July 1, 2023 2:11 pm

A different time. Very few people read Einstein’s original paper. Once his work became known, the tenets of that paper were indirectly cited almost 190,000 times.

So, sorry Mr. Gorman. Citations of peer reviewed papers are one of the main stocks in trade of modern scientific progress. To ignore good work puts you at a disadvantage. Making Dr. Frank’s work – well….

Reply to  bigoilbob
July 1, 2023 2:51 pm

Keep dancing, clown.

Unable to present a single technical objection, you instead throw this shite against the wall, hoping it will stick.

Reply to  bigoilbob
July 1, 2023 5:26 pm

Citations of peer reviewed papers are one of the main stocks in trade of modern scientific progress

No, they’re not. Accurate work is the stock in trade of scientific progress.

Citations prove nothing, as fully proved by the 1283 citations garnered by the scientific crock that is MBH98.

Reply to  Pat Frank
July 2, 2023 5:43 am

“Citations prove nothing, as fully proved by the 1283 citations garnered by the scientific crock that is MBH98.”

Bam!

I would say that MBH98 is as good an example as any, showing that “citations prove nothing”.

Robert B
June 29, 2023 10:44 pm

The rate or magnitude of climate warming since 1900 is unknowable

https://www.woodfortrees.org/graph/gistemp/from:1978/detrend:0.85/mean:6/plot/uah6/detrend:0.5/offset:0.4/mean:6

The above is a comparison of GISS LOTI and UAH 6, after detrending, offsetting and a 6-month moving mean. UAH6 is a temperature calculated from radiation intensity that is dependent on air temperature, and GISS is calculated from thermometer measurements of maximum and minimum over a 24-hour period. The latter is not an intensive property of anything, so the average, after kriging or whatever they do, is meaningless. Still, they are very similar. The overall trends differ significantly, but the above plot shows that it’s mainly at earlier El Niño and some La Niña periods that GISS has a much lower deviation from the trend.

There is no way in hell that one of them hasn’t done something dodgy.

bdgwx
Reply to  Robert B
June 30, 2023 8:31 am

UAH and GISTEMP measure different things.

Richard S Courtney
June 30, 2023 3:47 am

Pat Frank

Thanks for a fine and cogent paper. It provides a clear ‘take home message’ that global temperature “measurements” are nonsense. Your finding is not news but is additional confirmation of information which has been known for decades.
The underlying problem is that
(a) there is no agreed definition of “global temperature”
and
(b) if there were an agreed definition of “global temperature” there would be no possibility of an independent calibration standard for any measurement and/or estimate of it.

These facts (itemised here as a and b) enable the teams which provide values which purport to be ‘global temperature’ to alter their data for past years to fit whatever narrative they choose to promote. And each of the teams does change their data series in most months.

I congratulate you on your success in obtaining publication of your paper in the formal literature because – decades ago – I failed to get my paper on these matters published in the formal literature.

However, these matters were fully explained in my submission to the UK Parliament’s Select Committee Inquiry (i.e. whitewash) of ‘climategate’ that was conducted in early 2010. The submission can be read from Hansard (i.e. the permanent record of the UK’s Parliament) at https://publications.parliament.uk/…/387b/387we02.htm

The linked submission explains the importance of one of the emails from me which were among the emails ‘leaked’ from the Climate Research Unit (CRU) of East Anglia University.

The email is provided as Appendix A of the submission, which explains that the email is part of my objections to the improper activities which were used to block publication in the formal literature of my paper. Appendix B is a draft version of that paper, and I think it contains much of use to all who are interested in the methods of ‘climate scientists’ and/or ‘climate science’.

Again, congratulations on your paper and its publication. I am pleased that I have lasted long enough to see the reality of global temperature measurements (i.e. they are all bunkum) being published in the formal literature, and I hope all interested in real science will publicise your paper.

Richard

Richard S Courtney
Reply to  Richard S Courtney
June 30, 2023 3:52 am

The link did not work. I will try again
https://publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387b/387we02.htm
Richard

Reply to  Richard S Courtney
June 30, 2023 5:27 pm

Thank-you Richard. You’re a true veteran of the climate wars, on the science side of the front lines right from the start. I’m happy to acknowledge your precedence in critiquing the mess that is the air temperature record.

I also understand your frustration at having your paper blocked from publication. I faced similar opposition during the six years and 13 submissions it took to get Propagation… competently reviewed and published.

Years ago, when starting to construct myself as an adult, I consciously chose knowledge over delusion. Perhaps naively, I decided if knowledge brought sorrow, I’d rather live knowing and in sorrow than in blissful delusion. Delusion seemed far too dangerous.

In that light, you must have lived for years with considerable sorrow, knowing as you do, while seeing people living in delusion determinedly drive themselves and everyone else over a cliff.

So best wishes, Richard. When we’re both gone and the history of this benighted episode is written, your name will be there remembered for your analytical sanity when so many had volunteered into Bedlam.

Reply to  Pat Frank
July 2, 2023 5:52 am

“So best wishes, Richard. When we’re both gone and the history of this benighted episode is written, your name will be there remembered for your analytical sanity when so many had volunteered into Bedlam.”

Well, said. Thanks for all you have done, Richard.

bdgwx
June 30, 2023 7:34 am

Here is another discrepancy I noticed from your publication.

Your equation (4) is as follows.

2σ(Tmean) = 1.96 * sqrt[ (0.366^2 + 0.135^2) / 2 ] = 0.382 C.

When I plug that into a calculator I get 0.541 C.

The right answer when you plug this into the NIST uncertainty machine using u(x0) = 0.366, u(x1) = 0.135, and y = (x0+x1)/2 is 2u(y) = 0.390.

Reply to  bdgwx
June 30, 2023 9:17 am

Nitpick Nick has taught you well, young padawan.

“Correlated and non-normal systematic errors violate the assumptions of the central limit theorem, and disallow the statistical reduction of systematic measurement error as 1 / √N.” — Pat Frank.

Reply to  bdgwx
June 30, 2023 10:56 am

There is no “another discrepancy”.

In eqn. 4, you’ve caught a misprint, bdgwx, for which I thank you.

Given your fixation on the uncertainty in a mean, you should have caught eqn. 4 as a misprint, because it presents the uncertainty in a mean.

In particular, the division by 2 should have been outside the sqrt.

1.96 * sqrt[ (0.366^2 + 0.135^2)]/2 = ±0.382 C.

You were so insistent that the uncertainty diminished in a mean, that you should have been immediately concerned when your calculation increased the uncertainty in the mean.

How did you miss that?
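For readers following along, the three figures traded in this sub-thread come straight out of the placement of the division by 2, using the components quoted above (0.366 and 0.135):

```python
# Uncertainty components quoted above (from eqn. 4 of the paper)
u_max, u_min = 0.366, 0.135
ss = u_max**2 + u_min**2           # sum of squared uncertainties

inside  = 1.96 * (ss / 2) ** 0.5   # division by 2 inside the sqrt
outside = 1.96 * ss ** 0.5 / 2     # division by 2 outside the sqrt
k2      = 2.0 * ss ** 0.5 / 2      # same, with coverage factor 2 instead of 1.96

print(f"inside the sqrt:  {inside:.3f}")   # 0.541
print(f"outside the sqrt: {outside:.3f}")  # 0.382
print(f"coverage k = 2:   {k2:.3f}")       # 0.390
```

The first value is the one bdgwx got from the printed equation, the second is Pat Frank’s corrected ±0.382, and the third is the 0.390 obtained with a coverage factor of exactly 2.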

bdgwx
Reply to  Pat Frank
June 30, 2023 12:27 pm

Technically it should be 2σ(Tmean) = 2 * sqrt[ (0.366^2 + 0.135^2) / 2 ] = 0.390 C, but whatever.

Anyway, so why isn’t the division by 30.417 and 12 outside the sqrt in (5) and (6)?

Reply to  bdgwx
June 30, 2023 12:49 pm

Because eqns. 5 & 6 are RMS.

Also your “/2” should be outside the sqrt. The way you’ve written it equates to ±0.552. I.e., you reiterated the misprint you found.

bdgwx
Reply to  Pat Frank
June 30, 2023 1:17 pm

PF: Because eqns. 5 & 6 are RMS.

Let me word it this way…

You seem to agree that for u(Tday) you use sqrt[ n * u(x)^2 ] / n with the division outside the sqrt.

But you say for u(Tmonth) and u(Tannual) you use sqrt[ n * u(x)^2 / n ] with the division inside the sqrt.

Or asked another way, why use RMS for (5) and (6) but not (4)? And what justification is there for using RMS in (5) and (6) anyway, especially since it is inconsistent with what NIST, JCGM, and Bevington all say you should be doing?

PF: Also your “/2” should be outside the sqrt. 

Indeed. Copy/paste error. Let’s try that again.

2σ(Tmean) = 2 * sqrt[ (0.366^2 + 0.135^2) ] / 2 = 0.390 C

Reply to  bdgwx
June 30, 2023 2:42 pm

Because they’re the RMS of the uncertainty, not the uncertainty of the mean.

bdgwx
Reply to  Pat Frank
June 30, 2023 5:43 pm

So you did not intend to compute the uncertainty of the monthly and annual means? No?

If no then why did you call it “The uncertainty in Tmean for an average month (30.417 days)” and use it like the uncertainty of the monthly mean further down in the publication?

Reply to  bdgwx
June 30, 2023 7:14 pm

Because it’s the RMS of the resolution uncertainty over a month or over a year.

bdgwx
Reply to  Pat Frank
July 1, 2023 6:39 am

Yes. I know you used RMS in equations (5) and (6).

My questions are

1) Why did you do that?

2) Who told you to do that?

3) Why didn’t you do that in equation (4)?

Reply to  bdgwx
July 1, 2023 7:08 am

In light of how Pat’s work was published in a “predatory journal” (your words), why do you even care?

Reply to  bdgwx
July 1, 2023 10:49 am

It’s all right there in the paper, bdgwx. Suppose you try studying it for what it actually does say, instead of imposing your mistaken idea of what it doesn’t say.

bdgwx
Reply to  Pat Frank
July 1, 2023 12:07 pm

I don’t see the answers to those questions in your publication. Can you point me to where I can find them?

Reply to  bdgwx
July 1, 2023 1:36 pm

Go to Results Section 3.1 LiG Thermometers: Resolution, Linearity, and Joule-Drift, and start reading there.

bdgwx
Reply to  Pat Frank
July 1, 2023 5:40 pm

I did read that section. I don’t see anything in there that justifies using RMS to propagate the LiG resolution uncertainty through a monthly or annual mean.

Reply to  bdgwx
July 1, 2023 6:16 pm

You read but didn’t understand, bdgwx. I’ve explained here over and over again. Let’s just agree that you don’t get it.

Reply to  Pat Frank
July 1, 2023 6:20 pm

None are so blind as those who refuse to see.

bdgwx
Reply to  Pat Frank
July 1, 2023 6:45 pm

We can definitely agree that I don’t get the use of RMS for the propagation of the uncertainty through a monthly or annual mean in this case especially since it wasn’t done for the daily mean. I’ve not shied away from that. It’s the main impetus behind my challenge of equations (5) and (6). I’ve asked you to defend its use and the best responses I’ve gotten so far is “read the section” and “It’s all right there in the publication”. And now I get the sense that I’m being told to buzz off.

Reply to  bdgwx
July 1, 2023 9:40 pm

I’m quite certain you will continue to embarrass yourself.

Reply to  bdgwx
July 1, 2023 11:14 pm

I’ve explained it over and over and over, bdgwx.

The instrumental lower limit of detection (resolution) is the smallest reliable response increment of which the instrument is capable.

The lower limit of detection is a characteristic of the instrument itself. It puts a constant minimum of uncertainty in every single measurement.

Any movement of a 1 C/division LiG thermometer smaller than that increment is meaningless. See paper Table 1 and the related discussion.

I don’t want to have to keep repeating this. I won’t be explaining it further. Please figure it out.

bdgwx
Reply to  Pat Frank
July 2, 2023 5:40 am

I’ve explained it over and over and over, bdgwx.

I’ve not yet seen an explanation for the use of RMS for propagating uncertainty through a monthly or annual mean.

The instrumental lower limit of detection (resolution) is the smallest reliable response increment of which the instrument is capable.

I have no issue with that.

The lower limit of detection is a characteristic of the instrument itself. It puts a constant minimum of uncertainty in every single measurement.

I have no issue with that.

I don’t want to have to keep repeating this. I won’t be explaining it further. Please figure it out.

That’s too bad. Let it be known that I gave you the opportunity to defend the use of RMS when propagating the instrumental uncertainty through a monthly or annual mean.

Reply to  bdgwx
July 2, 2023 10:31 am

Eqns. 5 and 6 are not propagations of uncertainty, bdgwx. They’re root-mean-squares of uncertainty.

You’re wrong from the start. I’ve told you repeatedly that you’re wrong and why you’re wrong.

Then you go and repeat your original mistake.

Let it be known that you’re insistent in your ignorance, and that you push your opinion when you don’t know what you’re talking about.
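For anyone following this exchange, the two formulas in dispute are easy to state side by side. A neutral sketch with an illustrative per-reading uncertainty (the value 0.2 is made up): the RMS of a constant uncertainty stays constant, while the divide-by-n form shrinks as 1/sqrt(n).

```python
def rms(us):
    """Root-mean-square of uncertainties: sqrt(sum(u^2) / n).
    For n copies of a constant u this stays equal to u."""
    return (sum(u * u for u in us) / len(us)) ** 0.5

def mean_propagation(us):
    """Uncertainty of a mean of independent readings: sqrt(sum(u^2)) / n,
    which shrinks as 1/sqrt(n)."""
    return sum(u * u for u in us) ** 0.5 / len(us)

u = 0.2  # illustrative per-reading resolution uncertainty
for n in (1, 30, 365):
    us = [u] * n
    print(f"n={n:3d}  RMS={rms(us):.3f}  mean-propagation={mean_propagation(us):.3f}")
```

The RMS treats the resolution limit as a floor that every reading carries; the 1/sqrt(n) form assumes the per-reading errors are random and independent. Which one applies is exactly the question being argued.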

Reply to  Pat Frank
July 2, 2023 8:20 am

These folks don’t believe that resolution applies to each and every measurement. To them it is a variable amenable to statistical manipulation so that it can be reduced through division by “n”, thereby allowing one to proclaim fantastic precision and accuracy.

Reply to  Jim Gorman
July 2, 2023 9:00 am

Also note they all fail to recognize what the term “lower limit” implies. The ridiculous yapping about RMS calculations is nothing but a lame attempt at Stokesian nitpicking. It is obvious they didn’t read anything; they just skimmed without understanding, looking for anything to whine about.

Reply to  karlomonte
July 2, 2023 4:27 pm

If a measurement device has a resolution of .001 units but can only recognize a change of .01 units then what good is the resolution?

That’s what they are trying to shoot holes in. If they had *ever* lived in the real world of measurement they would understand – or maybe they wouldn’t. Cultists don’t have to believe in reality.

Reply to  Tim Gorman
July 2, 2023 8:18 pm

It’s the Bellmanian yardstick-micrometer, a huge breakthrough in technology.

Reply to  karlomonte
July 3, 2023 4:55 am

Yep!

Reply to  Jim Gorman
July 2, 2023 3:42 pm

Let me try a little experiment. I have the CET monthly values for June. They are given to 0.1 °C, so by your logic it isn’t possible to know the average to less than 0.1°C.

If I look at the average of all Junes since 1900 I get an average of 14.32°C. But you would insist that we can only know the average as 14.3°C. (I’m only interested in the exact average here, I’m not treating it as a sample.)

So now let’s reduce the lower limit of detection of the instrument by rounding all the monthly values to the nearest degree. Obviously that resolution applies to each and every monthly value and cannot be reduced by averaging. So when I take the average of the rounded values it comes to 14°C, to the nearest integer. But what if I cheated and looked at the next digit of the average? It comes to 14.3°C. That is, by coincidence, exactly what our higher-resolution data said.

What about going to the next digit.

Hi-res = 14.32, lo-res = 14.31. Not identical, but only a difference of 0.01°C.

Let’s try a larger sample size. Now I’ll use all the months and go back to the start of the 19th century, for about 2,600 monthly values.

Now to three decimal places I get

hi-res = 9.406°C, lo-res = 9.407°C. A difference of just 1/1000 of a degree, based on data that was only recorded to the nearest integer.

Let’s really push our luck and round to the nearest 10 degrees. Average monthly values are now only 0, 10 or 20°C, so I’m assuming this isn’t going to be very accurate.

Really lo res average = 9.434°C.

Even I was surprised by that. 2,600 monthly values where each had an uncertainty of ±5°C, yet we could still get the same average to within a third of a degree.
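[Ed. — the experiment described above is easy to reproduce. A minimal sketch with synthetic monthly means drawn from an assumed range; the actual CET values are not reproduced here, so the specific averages will differ from those quoted:]

```python
import random
from statistics import mean

random.seed(42)

# Synthetic stand-in for ~2,600 monthly mean temperatures (deg C).
# The range is an assumption for illustration, not the real CET data.
true_values = [random.uniform(-2.0, 20.0) for _ in range(2600)]

hi_res = mean(true_values)                               # full precision
lo_res = mean(round(x) for x in true_values)             # nearest 1 deg C
very_lo = mean(10 * round(x / 10) for x in true_values)  # nearest 10 deg C

print(f"hi-res mean: {hi_res:.3f}")
print(f"1-deg  mean: {lo_res:.3f}")    # typically agrees to ~0.01
print(f"10-deg mean: {very_lo:.3f}")   # typically within a few tenths
```

The per-value rounding errors are large, but over thousands of values their average shrinks roughly as 1/√n, which is why the coarse-resolution means land so close to the full-precision one.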

Reply to  Bellman
July 2, 2023 3:53 pm

You are joking right?

Here is a simple question, how do you know the average 14.32 is correct? Could it be 14.39 or 14.29? I need a physical answer, not just, that is what the math gives. Can you ever know what the measurements were to the hundredths digit so that you can say with 100% assurance you know the correct average?

Reply to  Jim Gorman
July 2, 2023 4:03 pm

You are joking right?

Of course. I know no amount of evidence will change your position; it’s an article of faith. I did it more for myself, because I was beginning to doubt my understanding. It’s always helpful to test your own beliefs.

Here is a simple question, how do you know the average 14.32 is correct?

I don’t, and that’s not the point of the exercise. The point is to see how much reducing the resolution of the data reduces the resolution of the mean.

I could have done this with randomly generated numbers and then we can just say the average of those numbers is the true mean, but I know you’d then object on the grounds they weren’t real values.

Reply to  Bellman
July 2, 2023 4:31 pm

“I don’t and that’s not the point of the exercise”

Of course it’s the point!

You can’t just ignore uncertainty! It doesn’t go away just because you want to try and work out a hypothetical that doesn’t actually exist in the real world.

“If I look at the average of all Junes”

The average of the STATED VALUES! Where is your uncertainty?

As usual, you have assumed that all uncertainty is random, Gaussian, and cancels! YOU DO IT EVERY SINGLE TIME EVEN THOUGH YOU CLAIM YOU DON’T!



Reply to  Tim Gorman
July 2, 2023 5:11 pm

You can’t just ignore uncertainty!

I was not ignoring uncertainty. I was adding uncertainty in the form of reduced resolution.

It doesn’t go away just because you want to try and work out a hypothetical that doesn’t actually exist in the real world.

Ah yes, that real world again. Where averages don’t exist, but you can still pontificate on their uncertainty.

Let me explain again what the point was. You claim that if a measurement lacks resolution, taking an average will not increase the resolution. The point is to demonstrate, by example, why that is not the case. My rounded values have huge uncertainty but were somehow capable of reproducing the average obtained from the higher-resolution data.

I understand why you will have to drag up any irrelevance you can find and keep rejecting that this is even the point, because it’s a worrying test of your religious beliefs.

The average of the STATED VALUES! Where is your uncertainty?

Please try to think. It does not matter how accurate the data is. They are just numbers being used to illustrate the point that it is possible to increase the resolution of an average.

As usual, you have assumed that all uncertainty is random, Gaussian, and cancels!

Is this some sort of catechism your religion demands you memorize? You just keep repeating it regardless of the context.

I have not made any assumptions about the distribution of the data or the uncertainty. And I don’t need to assume the data is random and cancels, because all I did was calculate the average and see what the result was.

Reply to  Bellman
July 2, 2023 5:56 pm

My rounded values have huge uncertainty

No, they don’t. They just hide your assumption of perfect accuracy. All your roundings rely on that assumption.

Reply to  Pat Frank
July 2, 2023 6:11 pm

You think rounding a monthly temperature to the nearest 10°C doesn’t increase the uncertainty?

It doesn’t matter if the data is accurate; it’s just an experiment to compare one data set against another. As I said, it could just as easily have been random numbers, which I could state in a thought experiment were perfect. The point is to demonstrate that the resolution of an average can be greater than the resolution of the individual values. A claim which some seem to take as an article of faith.

Reply to  Bellman
July 3, 2023 4:03 am

“It doesn’t matter if the data is accurate,”

Which is why you will NEVER understand uncertainty. You can’t explain uncertainty using an example where you completely ignore it and assume the stated values are 100% accurate!



Reply to  Tim Gorman
July 3, 2023 5:41 am

Please at least try to understand the context. It would help if you at least read to the end of a sentence.

“It doesn’t matter if the data is accurate, it’s just an experiment to compare one data set against another.”

You can’t explain uncertainty using an example where you completely ignore it

I can if the point of the experiment is to demonstrate how little the resolution affects the average. That is what was being claimed. That resolution imposes a limit on the resolution of the average. Nothing about if there was also a systematic error.

Reply to  Bellman
July 3, 2023 6:41 am

Argument by Handwaving.

The truth is you and bw don’t care a fig about uncertainty; it’s just an inconvenient blockade in the path to The Promised Land.

Reply to  Bellman
July 3, 2023 6:59 am

“It doesn’t matter if the data is accurate, it’s just an experiment to compare one data set against another.”

Says the person that has no relationship with the real world at all!

“I can if the point of the experiment is to demonstrate how little the resolution affects the average.”

No, you can’t. You think you can but that’s because you don’t understand how resolution works at all. It’s your total lack of real world experience coming out just like it always does! Measurements live in the real world, not unreal hypotheticals that you dream up.

Reply to  Bellman
July 3, 2023 4:00 am

“Please try to think. It does not matter how accurate the data is.”

That just about explains everything you say.

Reply to  Bellman
July 2, 2023 5:53 pm

“I was beginning to doubt my understanding.”

A good first step. Your example is wrong from the start.

Tim is correct.

Your example assumes perfect accuracy and perfect rounding. Nick Stokes’ mistake.

Reply to  Pat Frank
July 2, 2023 6:30 pm

Then do your own simulation that demonstrates it’s impossible to ever increase resolution by averaging.

Of course, if the data isn’t true then neither will the result be. Nobody says different. But what is being claimed is that resolution alone makes it impossible for the average to be more precise than the individual measurements. If you don’t like the “perfect rounding” I can easily add some extra randomness to the rounding.

Reply to  Bellman
July 2, 2023 6:45 pm

“But what is being claimed is that resolution alone makes it impossible for the average to be more precise than the individual measurements.”

To refute this, you need to explain how simple measurements with a wooden ruler marked at 1/8th of an inch can be averaged to achieve a calculated 1/1000th resolution.

Companies everywhere will pay you millions for the knowledge so they no longer need to buy expensive measurement devices.

Heck, I could just use a giveaway Harbor Freight voltmeter and set my camera to take 1000 photos at a 1-second interval. Then I could average the readings to get millivolt resolution.

Reply to  Jim Gorman
July 2, 2023 7:07 pm

To refute this, you need to explain how simple measurements with a wooden ruler marked at 1/8th of an inch can be averaged to achieve a calculated 1/1000th resolution.

What do you think I’ve just done? I demonstrated how it was possible for an average using data measured to the nearest degree, or even 10 degrees, to be almost as precise as one using data measured to a tenth of a degree.

I’ve no idea what your ruler example is saying. Again are you talking about an average of different things or multiple measurements of the same thing? I’ve explained why that is different.

Companies everywhere will pay you millions for the knowledge so the no longer need to buy expensive measurement devices.

I somehow doubt it. It’s a bit too late to be patenting the idea of the standard error of the mean, or of taking multiple measurements to decrease uncertainty. And, as has to keep being pointed out, there are limits and diminishing returns.

See the chapter in Bevington I keep pointing to. It might seem like in theory you could get unlimited precision by taking multiple measurements, but that improvement only increases with the square root of your sample size. If you have an uncertainty of 0.1 and want to “average” it down to 0.001, you would need around 10,000 measurements. A million measurements would only allow one extra digit. It’s going to be easier to invest in a more precise measuring device. And then there’s the inevitability of systematic errors. The more precision you try to achieve, the more likely a tiny bias will start to become relevant.
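[Ed. — the square-root-of-n scaling invoked here can be checked numerically. A sketch assuming independent Gaussian read errors with an illustrative σ of 0.1; the trial count is an arbitrary choice:]

```python
import math
import random

random.seed(0)

SIGMA = 0.1     # assumed per-reading uncertainty
TRIALS = 1000   # repeat the averaging experiment to measure the spread

def empirical_sem(n):
    """Std dev of the mean of n readings, each with N(0, SIGMA) error."""
    means = [sum(random.gauss(0.0, SIGMA) for _ in range(n)) / n
             for _ in range(TRIALS)]
    mu = sum(means) / TRIALS
    return math.sqrt(sum((m - mu) ** 2 for m in means) / (TRIALS - 1))

for n in (1, 25, 100):
    print(n, round(empirical_sem(n), 4), round(SIGMA / math.sqrt(n), 4))

# Each extra decimal place of precision costs a 100-fold increase in n:
# reaching 0.001 from 0.1 needs n = (0.1 / 0.001)**2 = 10,000 readings.
```

The empirical spread tracks σ/√n, which is the diminishing-returns point: gaining one more digit multiplies the required sample size by 100.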

Reply to  Bellman
July 2, 2023 9:36 pm

Unphysical nonsense—perfect description of the world you inhabit.

Reply to  Bellman
July 3, 2023 4:15 am

“almost as precise”

By ignoring propagation of uncertainty and assuming the stated values are all 100% accurate.

“Again are you talking about an average of different things or multiple measurements of the same thing? I’ve explained why that is different.”

But you never treat them differently. Why is that?

“standard error of the mean”

One more time. The standard error of the mean only tells you how close you are to the population mean. It does *NOT* tell you the accuracy of that population mean. Yet you continue, every single time, to assert that SEM *is* the accuracy of the population mean. Thus ignoring totally the propagation of uncertainty from the individual measurements to the average! It’s always “uncertainty is always random, Gaussian, and cancels” so that how precisely we can calculate the average is the uncertainty of the average!

Reply to  Tim Gorman
July 3, 2023 6:02 am

But you never treat them differently. Why is that?

You mistyped “But you keep explaining why they are different and I choose not to remember. Why is that?”

The standard error of the mean only tells you how close you are to the population mean.

Which in the real world is what is meant by the uncertainty of the mean, an indication of how close your measurement is to the population mean.

It does *NOT* tell you the accuracy of that population mean.

But in your world the population mean might be different to the population mean.

Yet you continue, every single time, to assert that SEM *is* the accuracy of the population mean.

I may well have said something like that in error. But it isn’t what I’m saying. The SEM is analogous to the precision of the mean. It is not the accuracy of the mean, just a part of it. That is it does not tell you the trueness of the mean, just the precision. If every measurement is off by 10cm, the sample mean will be off by 10cm in addition to any random uncertainty.

Reply to  Bellman
July 3, 2023 7:07 am

“Which in the real world is what is meant by the uncertainty of the mean, an indication of how close your measurement is to the population mean.”

That is *NOT* what it means in the real world. In the real world it means “the standard deviation of the sample means”. If you would start to use *that* definition perhaps you could gain a glimmer of understanding for what is being discussed. If you will, it is a measure of sampling error – it has absolutely NOTHING to do with the accuracy (i.e. the uncertainty) of the mean.

You speak the words but have no understanding of what they actually mean – it is either deliberate misunderstanding or ignorance. Only you know for sure.

Reply to  Bellman
July 3, 2023 11:42 am

The SEM is analogous to the precision of the mean. It is not the accuracy of the mean, just a part of it. That is it does not tell you the trueness of the mean, just the precision.

This is ridiculous. Step by step:

  1. Population {1,2,3,4,5,6,7,8,9,10}
  2. µ = 5.5, σ = 2.9
  3. Sample 1 [1, 3, 5], Sample 2 [4, 5, 8], Sample 3 [2, 5, 9] and Sample 4 [5, 7, 10]
  4. Sample means: 1 -> 3, 2 -> 5.7, 3 -> 5.3 and 4 -> 7.3 (keeping 1 decimal for rounding).
  5. Mean x̅ = 5.3 and s = 1.8
  6. µₑₛₜᵢₘₐₜₑ𝒹 = 5
  7. σₑₛₜᵢₘₐₜₑ𝒹 = 1.8*√3 = 3.1

Well let’s try it with larger samples and more samples.

  1. Population {1,2,3,4,5,6,7,8,9,10}
  2. µ = 5.5, σ = 2.9
  3. Samples [1,2,3,4] [5,6,7,8] [1,2,9,10] [2,6,9,4] [3,7,9,10] [4,6,7,8] [3,5,9,10] [6,7,8,9] [1,3,5,7] [2,3,5,8]
  4. Sample Means => 2.5, 6.5, 5.5, 5.3, 7.3, 6.3, 6.8, 7.5, 4, 4.5
  5. Mean x̅ = 5.6 and s = 1.6
  6. µₑₛₜᵢₘₐₜₑ𝒹 = 6
  7. σₑₛₜᵢₘₐₜₑ𝒹 = 1.6*√4 = 3.2

More and larger samples gives a better estimate of the mean. If the population σ is rounded to one significant digit, all the calculations for standard deviation match as they should.
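[Ed. — the second set of numbers above can be reproduced exactly. A sketch using the ten listed samples of size 4; `stdev` is the n−1 sample standard deviation:]

```python
from math import sqrt
from statistics import mean, stdev

# The ten samples of size 4 listed above, drawn from {1, ..., 10}
samples = [[1, 2, 3, 4], [5, 6, 7, 8], [1, 2, 9, 10], [2, 6, 9, 4],
           [3, 7, 9, 10], [4, 6, 7, 8], [3, 5, 9, 10], [6, 7, 8, 9],
           [1, 3, 5, 7], [2, 3, 5, 8]]

sample_means = [mean(s) for s in samples]   # 2.5, 6.5, 5.5, 5.25, ...
xbar = mean(sample_means)                   # estimate of the population mean
s = stdev(sample_means)                     # spread of the sample means
sigma_est = s * sqrt(len(samples[0]))       # back out the population sigma

print(xbar)                  # 5.6
print(round(s, 1))           # 1.6
print(round(sigma_est, 1))   # 3.2
```

For comparison, the population {1, …, 10} has mean 5.5 and σ ≈ 2.9, so both the estimated mean and the back-calculated σ land in the right neighborhood.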

Reply to  Jim Gorman
July 3, 2023 1:22 pm

More and larger samples gives a better estimate of the mean.

Ignoring the fact you still haven’t figured out why there is no point in taking multiple samples – you are correct. That’s the point. A larger sample size gives you a better estimate of the mean.

But that’s on the assumption that your measurements and sampling are correct. If your measuring instrument had a systematic error and added 1 to all your values, the sample means would be 1 greater, and increasing the sample size will not fix that. Hence the result will be precise but not true.

Reply to  Bellman
July 3, 2023 2:00 pm

And more handwaving.

old cocky
Reply to  Bellman
July 3, 2023 3:48 pm

But that’s on the assumption that your measurements and sampling is correct. If your measuring instrument had a systematic error and added 1 to all your values the sample means would be 1 greater, and increasing the sample size will not fix that. Hence the result will be precise but not true.

Measurement error/uncertainty is orthogonal to sampling error.

One can have uncertainty in the values due to measurement limitations even with the full (here, small) population.

One can also have sampling errors with (as here) discrete values.

Jim’s example illustrates the latter.

Something which everybody seems to keep below the conscious level is that an average (in this case the arithmetic mean) is just the ratio of the sum of the values to the number of values. Converting this to decimal notation tends to give spurious precision. The mean is only rarely part of the distribution, whereas the modes are, and the median is for odd numbers of values.

Reply to  Bellman
July 3, 2023 4:40 pm

You didn’t bother to go look at the reference I gave you, did you? You haven’t read Pat’s study, you haven’t read Taylor, you haven’t read Bevington. All you’ve done is cherry pick things you can use as a troll looking for replies.

The SEM is better known as THE STANDARD DEVIATION OF THE SAMPLE MEANS. Means, as in plural. If all you have is one sample mean then you do not have a standard deviation, because you don’t have a distribution. You truly have no idea whether that sample represents the population mean or not. You might *hope* it does, you might *WISH* that it does, but you have exactly zero ways to show it. Multiple samples are what allow you to *show* that the sample means are homing in on the population mean.

I am glad as all git out that you have *NEVER* designed anything capable of causing harm to the world. I pray you never do.

Reply to  Tim Gorman
July 3, 2023 5:14 pm

You didn’t bother to go look at the reference I gave you

I’ve no idea what this is in aid of. I was agreeing with Jim in the comment you are replying to.

The SEM is better known as THE STANDARD DEVIATION OF THE SAMPLE MEANS.

Maybe in your small part of the world. In most places it’s better known as the Standard Error of the Mean, hence SEM.

Even in the GUM it’s called the Standard Deviation of the Mean. Never “means” never “sample mean”.

Taylor only calls it the standard deviation of the mean or SDOM.

If all you have is one sample mean then you do not have a standard deviation because you don’t have a distribution.

Explained many times why this is not just wrong, it goes against every practical requirement for sampling. I really don’t know if T & J are really this dense, or have a huge cognitive dissonance, or are just trolling.

Here’s a reference for you, the same one I gave to Jim, which he then ignored.

Fortunately, you don’t need to repeat your study an insane number of times to obtain the standard error of the mean. Statisticians know how to estimate the properties of sampling distributions mathematically, as you’ll see later in this post. Consequently, you can assess the precision of your sample estimates without performing the repeated sampling.

https://statisticsbyjim.com/hypothesis-testing/standard-error-mean/

Reply to  Bellman
July 3, 2023 5:27 pm

In order to have a standard deviation you HAVE to have multiple data points. That’s what “standard deviation of the mean” implies!

Even your reference states “sample estimates”, as in multiple samples. It also says “sampling distributions”, again plural distributions.

I truly believe your major problem is that you simply can’t read.

Reply to  Tim Gorman
July 4, 2023 4:47 am

I truly believe your major problem is that you simply can’t read.

Whereas I think your problem is you can read, you just choose not to.

Even your reference states “sample estimates”, as in multiple samples. It also says “sampling distributions”, again plural distributions.

Estimates and distributions, as in you may want to use the technique more than once.

Read the whole clause

Statisticians know how to estimate the properties of sampling distributions mathematically

Nothing about them needing multiple sampling distributions in order to work out one sampling distribution. It’s just saying they know how to do it for any number of different distributions.

He literally tells you that you don’t need repeated sampling, and goes on to show how to calculate the sampling distribution of IQ scores without needing even one sample. Yet you insist on claiming he actually says the opposite, because of your inability to understand what a plural means.

Reply to  Bellman
July 4, 2023 6:03 am

Whereas I think

No, you don’t.

your problem is you can read, you just choose not to.

Yeah, it’s true, the irony is thick today.

Reply to  Bellman
July 4, 2023 8:19 am

“Estimates and distributions, as in you may want to use the technique more than once.”

“MAY” want to use it more than once? No, you *have* to use more than one sample in order to have a distribution. Even the CLT requires MORE THAN ONE MEAN. You simply can’t develop a distribution from one value.

“Nothing about them needing multiple sampling distributions in order to work out one sampling distribution. It’s just saying the know how to do it for any number of different distributions.”

You *still* can’t read! The operative words are “sampling distributionS”. (my capital S, tpg)

Reply to  Tim Gorman
July 4, 2023 9:31 am

“No, you *have* to use more than one sample in order to have a distribution.”

Simply not worth discussing this further. It clearly has some deep religious significance to Tim, which prevents any actual words or logic from prevailing.

Reply to  Bellman
July 3, 2023 6:26 pm

Back to cherry picking I see. Did you read the rest of the article?

He gave an example of how to assess the precision of the sample estimates using the following.

“For this example, I’ll use the distribution properties for IQ scores. These scores have a mean of 100 and a standard deviation of 15. To calculate the SEM, I’ll use the standard deviation in the calculations for sample sizes of 25 and 100.”

Guess what 100 and 15 are? The population mean is 100 and the standard deviation is 15.

Don’t confuse the sample size of 100 with the population mean of 100. The population mean could have been calculated from 10,000 scores or more. You sample all those to get a number of sample means to develop a distribution of all the means of the samples.

The sampling sizes are 25 and 100.

Guess what, you don’t even need to take one sample to know that IF YOU SAMPLE CORRECTLY, the SEM will be 3 for a size of 25 or it will be 1.5 for a size of 100.

That is what he meant by the clip that you posted. Wake up and do some real studying. There are free online courses on sampling that explain these concepts. You would be wise to take one.

One last note. σ and “s” and “n” have an inverse relationship signified by:

s = σ/√n

That can be rewritten as:

σ = s•√n

Since σ is a statistical parameter of a population, it is constant. That means as “n” goes up, “s” goes down. And, as “n” goes down “s” goes up.

“s” is nothing more than a standard deviation that describes the interval within which the estimated mean will lie. For one sigma, the interval covers about 68% of the values in the sample-mean distribution. That is why a large sample size and lots of samples are important.
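[Ed. — the IQ-score figures quoted in this exchange drop straight out of s = σ/√n. A minimal check:]

```python
from math import sqrt

sigma = 15.0  # population standard deviation of IQ scores
for n in (25, 100):
    sem = sigma / sqrt(n)  # standard error of the mean
    print(f"n = {n:3d}: SEM = {sem}")
# n =  25: SEM = 3.0
# n = 100: SEM = 1.5
```

Inverting the same formula gives n = (σ/s)², which is the sample-size relationship both sides are arguing over.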

Reply to  Jim Gorman
July 3, 2023 7:09 pm

This is just getting painful. As I say you are either incapable of understanding this or are just trolling.

The population mean could have been calculated from 10,000 scores or more.

The population mean is not calculated from any sample; that would be a sample mean. The population mean is the mean of the population. But in this case it’s the standard normalization for IQ scores, so they are 100 and 15 by design.

You sample all those to get a number of sample means to develop a distribution of all the means of the samples

Explicitly not what he’s doing. He says straight up, “To calculate the SEM, I’ll use the standard deviation in the calculations for sample sizes of 25 and 100.”

Guess what, you don’t even need to take one sample to know that IF YOU SAMPLE CORRECTLY, the SEM will be 3 for a size of 25 or it will be 1.5 for a size of 100.

You realize that “guess what” is exactly what I pointed out to you a short time ago, and you said was impossible. You do not need to have one actual sample to know what the sampling distribution is. It can be calculated from knowing the population mean and standard deviation.

That is what he meant by the clip that you posted.

Guess what? That’s the point I was making.

Wake up and do some real studying.

Speechless.

One last note. σ and “s” and “n” have an indirect relationship signified by:
s = σ/√n

Gosh, what an equation. Never seen that before. It’s almost as if you don’t need to take thousands of samples to get the sampling distribution. Think of all that time we could have saved.

That can be rewritten as

That’s the thing about equations, you can write them in lots of ways, some more helpful than others.

Try

N = [σ / s]²

Hey, now if we knew the population and sample standard deviations we could work out what our sample size was.

That means as “n” goes up, “s” goes down.

Hello, that gives me a cunning idea. What if we increased sample size to reduce the uncertainty of our sample mean?

And, as “n” goes down “s” goes up.

That gives me another idea, but not so useful.

For one sigma the estimated mean will have a probability of 68% of the values in the sample mean distribution.

As long as the sampling distribution is normal.

That is why large sample size and lots of samples are important.

Have you actually understood a word of what was said in the article?

Reply to  Bellman
July 2, 2023 10:12 pm

Then you do your simulation that demonstrates it’s impossible to ever increase resolution by averaging.

They’ve been done.

Demonstration 1: NIST

Demonstration 2: Inter-laboratory comparative calibrations.

Both discussed under 3.1.1. You’d have known if you’d read the paper.

If you don’t like the “perfect rounding” I can easily add some extra randomness to the rounding.

You can’t truly believe anyone here would fall for that.

Reply to  Pat Frank
July 3, 2023 5:03 am

They’ve been done.

Both articles are behind paywalls, but neither of the abstracts mention the effects on an average.

Reply to  Bellman
July 3, 2023 5:26 am

Discussed in the paper you’ve not read, Bellman.

Reply to  Pat Frank
July 3, 2023 6:55 am

bellman and bdgwx have never read either Taylor or Bevington from start to finish and worked out all the examples. They have no basic understanding of the context of any of the formula derivations in either book. They are equation cherry-pickers looking for things they can throw against the wall in the faint hope something might stick.

I even hate to call them statisticians because *true* statisticians understand that what they derive are statistical descriptors and not actual measurements. A true statistician understands that the descriptors have to be applied properly in order to fit the real world, otherwise the descriptors are useless.

Reply to  Bellman
July 3, 2023 4:09 am

“Nobody says different.”

YOU do. You do it every single time you make a post.

“But what is being claimed is that resolution alone makes it impossible for the average to be more precise than the individual measurements. If you don’t like the “perfect rounding” I can easily add some extra randomness to the rounding.”

The “rounding” is for significant figure purposes, not uncertainty. You can’t even distinguish between the two! Significant figures are used to convey to others the resolution of the measurements you have made so they don’t think your measuring equipment has more resolution than it actually does. Uncertainty is meant to convey the unknowns affecting your measurements and what their magnitude might be.

The fact that you continually confuse the two or try to conflate them just confirms that you have not learned anything in the past two years about MEASUREMENT or the science of metrology!

It’s why averages can’t improve resolution. It leads to others believing that your measuring equipment has more resolution than it actually does! Averaging simply can’t increase resolution.

Reply to  Tim Gorman
July 3, 2023 5:50 am

YOU do. You do it every single time you make a post.

Then you will have no problem finding examples of posts where I’ve said an average of untrue measurements will give a true average.

The “rounding” is for significant figure purposes, not uncertainty.

Only for convenience. Do you think it would be a different result if I’d rounded to a non-integer?

Significant figures are used to convey to others the resolution of the measurements you have made so they don’t think your measuring equipment has more resolution than it actually does.

Are you going to complain to Pat Frank that he rounds all his uncertainty figures to 3 decimal places? That’s claiming he knows the uncertainty to a thousandth of a degree.

“It’s why averages can’t improve resolution.”

That’s what your model says. The evidence says different.

Reply to  Bellman
July 3, 2023 6:47 am

That’s what your model says. The evidence says different.

Translation from the Trendology Manual:

“The floot floot did a boom boom on the jim jam.”

Reply to  karlomonte
July 3, 2023 6:54 am

Do you think anyone here cares about your little quips?

Reply to  Bellman
July 3, 2023 8:18 am

Heh. More downvotes! Please!

Reply to  karlomonte
July 3, 2023 1:24 pm

Why would I want to downvote your comment? I think it speaks for itself.

Reply to  Bellman
July 3, 2023 8:58 pm

You missed the key word, it flew over your head, just like my little hint the other day did.

Reply to  Bellman
July 3, 2023 7:01 am

“Then you will have no problem finding examples of posts where I’ve said an average of untrue measurements will give a true average.”

bellman: “It doesn’t matter if the data is accurate”

That just about says it all!

Reply to  Jim Gorman
July 2, 2023 4:25 pm

Yep. They want to call the average of different things a MEASUREMENT and then say their measuring device for the average has a higher resolution than the measuring devices used for all the individual measurements.

  1. The average is not a measurement.
  2. They don’t have a higher resolution measuring device with which to measure the average.

Never let it be said that bellman and bdgwx live in the same reality the rest of us exist in.

If the average is *NOT* a measurement then their whole argument falls to pieces.

June 30, 2023 3:18 pm

OK, I’m a bit late to this, and have only had a brief glance through the paper, but this seems to be the usual sticking point.

“The uncertainty in Tmean for an average month (30.417 days) is the RMS … of the daily means”

Is there any justification for the uncertainty of a monthly average for an instrument being the RMS? RMS is just the uncertainty in the daily reading, not the mean.

As an aside, whilst it’s good that Pat Frank has realized it was wrong to use (n / (n – 1)) as he did in the previous paper, it still seems rather circular to write it as √(N × σ² / N) rather than admit this just reduces to σ.

Reply to  Bellman
June 30, 2023 4:07 pm

It’s the RMS of the uncertainty from the LiG lower limit of detection. Read the paper, then comment.

There is no such realization. The 2010 paper discussed the loss of a degree of freedom, requiring the N-1 denominator.

Reply to  Pat Frank
June 30, 2023 4:56 pm

The 2010 paper discussed the loss of a degree of freedom, requiring the N-1 denominator.

My point was more that, given N is always large, N / (N – 1) is as close to 1 as you could want. Certainly nothing that will impact your 3-figure uncertainties. However, now you mention it, why are you allowing for the loss of a degree of freedom when you are looking at “adjudged” uncertainties?

While I’m being pedantic, you keep saying throughout this new paper that 2σ = 1.96 × whatever. Surely that should be 2σ = 2 × whatever, or the 95% confidence interval = 1.96 × whatever.

Reply to  Bellman
June 30, 2023 5:29 pm

The loss of a DoF is fully discussed. I recommend you read it.

On the other, you’re right. An oversight.

Reply to  Pat Frank
June 30, 2023 5:19 pm

“It’s the RMS of the uncertainty from the LiG lower limit of detection. Read the paper, then comment.”

Why does it matter that the uncertainty comes from the resolution? Like your previous work you just throw this concept in with no explanation I can find, so I was hoping you could point me to the reason. You include references to most of the details, but not this one.

It seems counter-intuitive to be claiming that the uncertainty of a monthly average is identical to that for an annual average and to a 30-year average, all with uncertainties quoted to the 1/1000th of a degree.

Reply to  Bellman
June 30, 2023 7:12 pm

It’s all there, Bellman. Get back to me when you’ve read the paper.

If you don’t understand instrumental resolution, you shouldn’t be commenting.

Reply to  Pat Frank
July 1, 2023 5:17 am

“It’s all there, Bellman. Get back to me when you’ve read the paper.”

You’re the author. I was hoping you could explain, or at least point me in the right direction.

I do understand instrument resolution; that’s why I’m asking for your justification for using the RMS as the uncertainty of a mean. If I had 100 instruments each measuring a different temperature, or the same instrument measuring 100 different temperatures, I would not assume that the uncertainty of their average was the same as the average uncertainty, especially if the uncertainty was due to resolution. The only way the RMS makes sense is if you think it’s plausible that the resolution causes an identical error in each measurement.

Reply to  Bellman
July 1, 2023 6:32 am

Why do you think RMS requires identical error in each measurement? RMS stands for root-mean-square. Finding the RMS value of a sine wave doesn’t require each value to be identical; why would it for error?

Reply to  Tim Gorman
July 1, 2023 7:27 am

“Why do you think RMS requires identical error in each measurement?”

It doesn’t. It’s just the same as a standard deviation. It averages the squares of all the different errors, and gives you the uncertainty of that set of measurements.

What it doesn’t do is give you the uncertainty of the mean of all those measurements.

The only way that would make sense is if the correlation between all the measurements was 1, that is there is no independence in the errors. That’s why I’m saying that Frank’s argument requires all errors to be identical.

Reply to  Bellman
July 1, 2023 7:57 am

It doesn’t.”

That isn’t what you said.

Bellman: “The only way RMS makes sense is if you think it’s plausible that the resolution causes an identical error in each measurement.”



Reply to  Tim Gorman
July 1, 2023 8:38 am

Not sure what you think I said. My point is it only makes sense to consider the RMS as the uncertainty of a mean if all errors are identical.

Reply to  Bellman
July 1, 2023 1:33 pm

if all errors are identical

What is a lower limit of instrumental resolution?

Reply to  Pat Frank
July 1, 2023 2:27 pm

Resolution:

smallest change in a quantity being measured that causes a perceptible change in the corresponding indication

By lower limit you mean the stated resolution uncertainty is the smallest possible. It might be larger but can’t be smaller.

None of this has anything to do with errors being constant.

If a device has a resolution of ±0.5°C, say because it’s rounded to the nearest degree, the stated value will have an error of anything from -0.5°C to +0.5°C depending on the actual temperature. Another device measuring a different temperature will also have an error between -0.5°C to +0.5°C. There is no certainty that they will have exactly the same error.
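The point that two readings need not share the same error can be checked with a short simulation (a toy sketch; rounding to the nearest degree here stands in for the ±0.5°C resolution limit, and the temperatures are illustrative, not measured data):

```python
import random

# Assumption: rounding to the nearest whole degree models a device
# with a +/-0.5 degC resolution limit, as in the example above.
random.seed(42)
true_temps = [random.uniform(10.0, 20.0) for _ in range(5)]
for t in true_temps:
    indicated = round(t)      # instrument reports whole degrees only
    error = indicated - t     # error lies anywhere in [-0.5, +0.5]
    print(f"true={t:.3f}  indicated={indicated}  error={error:+.3f}")
```

Each run shows errors scattered across the interval rather than repeating one fixed value.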

Reply to  Bellman
July 1, 2023 5:18 pm

because it’s rounded to the nearest degree,

Resolution isn’t rounded to the nearest degree. It’s the detection limit, below which indications have no meaning.

Your understanding of resolution is lacking Bellman. Your comments show that. You should stop commenting on it here.

Reply to  Pat Frank
July 1, 2023 5:55 pm

Resolution isn’t rounded to the nearest degree.

It was an example of a type of resolution limit. It doesn’t matter why you can’t detect changes beyond a limit, the point is that you do not have identical errors.

Reply to  Bellman
July 1, 2023 11:03 pm

It was an example of a type of resolution limit.

Not in the sense we’re discussing here.

Reply to  Bellman
July 2, 2023 5:15 am

How do you know you don’t have identical errors? You *STILL* haven’t grasped the concept of uncertainty!

Reply to  Tim Gorman
July 2, 2023 6:54 am

Because the probability is vanishingly small that say 30 random values will all be identical.

I know this is a futile argument with you as you take pride in not understanding probability. Was it you or Jim who claimed that if you tossed a coin a million times it was almost certain you would get 100 heads in a row? That’s the problem here. You think that unknown means everything is equally likely.

Reply to  Bellman
July 2, 2023 6:57 am

The standard mantra of the trendologists:

“all errors cancel!”
“we don’t even have to think about them!”

Reply to  karlomonte
July 2, 2023 8:05 am

Stop whining. Errors do not all cancel and you do have to think about them. Time for you to get back under the bridge.

Reply to  Bellman
July 2, 2023 9:28 am

Hi, PeeWee!

Reply to  Bellman
July 2, 2023 10:33 am

The lower limit of instrumental detection is not random. Your analysis is wrong from the very start, Bellman.

Reply to  Pat Frank
July 2, 2023 6:04 pm

The lower limit of instrumental detection is not random.

Again, the uncertainty limits aren’t random, but I’m talking about the errors.

Your analysis is wrong from the very start, Bellman.

The thing is I’m not really doing any analysis, just looking at yours.

I’m not the one who’s written a paper, you are.
I’m not the one who wants to demonstrate that every data set is fundamentally flawed, you are.
I’m not the one who it is claimed has destroyed the credibility of the official climate-change narrative, you are.

Reply to  Bellman
July 2, 2023 8:23 pm

The thing is I’m not really doing any analysis,

This is so true.

just looking at yours.

When are you going to start?

Reply to  Bellman
July 2, 2023 10:00 pm

“I’m talking about the errors.

The discussion is about uncertainty

“The thing is I’m not really doing any analysis, ...”

Agreed.

…just looking at yours.

Hard to credit, given the content of your posts.

The results fell out of the analysis, Bellman. “Want” had nothing to do with it.

The “climate-change narrative” cannot survive a temperature record demonstrated to convey no information about the rate or magnitude of warming since 1900.

Nor can it survive climate models shown to have no predictive value.

Falsification of the narrative is complete.

Reply to  Bellman
July 3, 2023 4:23 am

“Again, the uncertainty limits aren’t random, but I’m talking about the errors.”

And here we go again. Systematic bias always cancels.

u_total = u_systematic + u_random. Therefore
u_random = u_total – u_systematic

If you don’t know u_systematic then you can never know u_random.

If you don’t know u_random then how can you assume *anything* cancels?

“I’m not the one who it is claimed has destroyed the credibility of the official climate-change narrative, you are.”

The *really* funny thing is that you believe you have somehow falsified the study done by Pat!

Reply to  Bellman
July 1, 2023 10:44 am

The point of eqns. 5 and 6 is to calculate the mean of the uncertainty Bellman. bdgwx can’t seem to grasp the difference, or why that calculation is pertinent, and evidently neither can you.

Part of the reason, I suspect, is that neither you nor bdgwx had the grace to read and understand the paper before launching into criticism.

Reply to  Pat Frank
July 1, 2023 11:56 am

“The point of eqns. 5 and 6 is to calculate the mean of the uncertainty Bellman.”

In that case I would have no objection. The mean of the uncertainty is obviously the uncertainty of the instrument.

But then you go on to use these mean uncertainties to calculate the uncertainty of an anomaly. Specifically using them as the monthly and 30-year uncertainties. I don’t see how this makes sense if the values are the average uncertainty.

Then in section 4.4 you are using the same values to calculate the uncertainty of global annual temperatures and anomalies.

Reply to  Bellman
July 1, 2023 1:31 pm

I don’t see how this makes sense if the values are the average uncertainty.

Does your inability to see the sense of it mean it’s not sensible, or does it mean you should spend time to understand the analytical logic?

Why would anyone think the resolution limit of the LiG thermometer should not condition a global mean annual LiG temperature or the annual LiG temperature anomaly?

Reply to  Pat Frank
July 1, 2023 2:06 pm

Don’t you know you can determine the width of an elemental particle if you just take enough measurements with a yardstick?

Reply to  Tim Gorman
July 1, 2023 2:56 pm

If by “you” you mean “me” I definitely don’t think you can do that. But if you think it’s possible explain how?

If you want to use averaging to improve resolution you need to ensure the variation in the thing measured is bigger than the resolution of your device. That’s why measuring the same thing multiple times with the same instrument will not necessarily improve the uncertainty, but measuring different things will have a better chance.
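Bellman’s claim here can be put in runnable form (a toy sketch, assuming rounding to the nearest unit stands in for a fixed resolution; the values are illustrative, not from the paper):

```python
import random

random.seed(0)

# Case 1: one fixed value measured 1000 times with a 1-unit
# resolution instrument. Every reading rounds identically, so the
# average never gets closer to the true value.
true_value = 7.3
same_readings = [round(true_value) for _ in range(1000)]
print(sum(same_readings) / 1000)   # stays at 7; the 0.3 error never shrinks

# Case 2: 1000 different values spread over many units, each rounded
# the same way. The rounding errors partly cancel in the mean.
true_vals = [random.uniform(0.0, 30.0) for _ in range(1000)]
varied_readings = [round(v) for v in true_vals]
true_mean = sum(true_vals) / 1000
meas_mean = sum(varied_readings) / 1000
print(f"true mean {true_mean:.3f} vs measured mean {meas_mean:.3f}")
```

This only demonstrates the statistical behaviour of rounding error in a simulation; whether that model applies to field thermometers is exactly what the thread disputes.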

Reply to  Bellman
July 1, 2023 3:16 pm

That’s why measuring the same thing multiple times with the same instrument will not necessarily improve the uncertainty, but measuring different things will have a better chance.

HAHAHAHAHAHAAH — do you really believe this?

Reply to  karlomonte
July 1, 2023 3:26 pm

Yes. And your hysteria isn’t a persuasive argument against it.

Reply to  Bellman
July 2, 2023 5:24 am

If you want to use averaging to improve resolution you need to ensure the variation in the thing measured is bigger than the resolution of your device.”

Did you put in even one second of thought about this before you posted it?

How does the thing being measured vary? A measurand is a measurand. What varies is the measurement, not the measurand. At least as long as the environment during the measurements remains the same.

Once again, you somehow believe that single measurements of multiple things (for if the measurand changes then you *are* measuring a different thing) can increase resolution. Resolution is a function of the measuring device, not of the measurand.

You *still* don’t have a grasp of physical reality, do you?

Reply to  Tim Gorman
July 2, 2023 7:19 am

“Did you put in even one second of thought about this before you posted it?”

Probably not. It’s a waste of time given the quality of the innumerable comments you demand I respond to every hour.

I have however given the subject quote some thought over the years, and I stand by everything I said.

“How does the thing being measured vary? ”

I could have been clearer there however. I should have said the measurements vary. This could either be because there is random error each time you measure the same thing, or because you are measuring different things to determine, say, an average.

“Once again, you somehow believe that single measurements of multiple things …. can increase resolution”

I do believe that. Thanks for noticing. I’ve been trying to explain this for the last few years, but it probably didn’t register as you were too busy obsessing over stud walls and lawn mowers.

If you weren’t so convinced you are right about everything and so anyone who disagrees is wrong, you might have taken some time to understand why I believe that. You might have then been in a better position to explain why you think I’m wrong, rather than just make your usual arguments by assertion and ad hominems.

Reply to  Bellman
July 2, 2023 7:48 am

Stop whining.

Reply to  karlomonte
July 2, 2023 8:06 am

A good point well made. How about I stop whining when you start thinking.

Reply to  Bellman
July 2, 2023 9:29 am

Clown. Go try to read the paper.

Reply to  Bellman
July 2, 2023 7:57 am

Here is why you are wrong.

I’ve got 1000 8′ boards. I measured each one of them to the nearest 1″. I can sell them to you and I’ll guarantee they are all 8′ ±0.005″. IOW 8.0004 feet long.

Reply to  Jim Gorman
July 2, 2023 8:18 am

Not sure what point that is making. You’re guaranteeing the length of each board, not the average.

And relevant to my point. If all your measurements are within half an inch of 8′ all will be rounded to exactly 8′. The same if you measured the same board 1000 times. No amount of averaging will help you in that case.

If on the other hand the boards vary in length by a few inches, your roundings will all differ, therefore some cancellation of errors is likely and the average can now be known to a resolution better than an inch.

You could easily test this yourself. Take a load of boards of various lengths. Measure each one with a precise instrument, then round each measurement to the nearest inch. Compare the averages of the 1000 boards using each of the measurements. How different are the two averages? Do they agree to less than an inch?

Reply to  Bellman
July 2, 2023 8:38 am

You missed the whole point as usual. Would you buy the lot of boards if you needed boards that were exactly 8′? How many are shorter than 8′ by an inch? How many are longer?

Reply to  Jim Gorman
July 2, 2023 9:22 am

Then just what is your point?

I, and I assume Pat Frank, are talking about the uncertainty of a global average, and you are talking about the uncertainty of an individual board. The fact you think they are the same goes someway to explaining why you have such a hard time understanding how the uncertainty of an average can be different to the uncertainty of an individual measurement.

If I want to know the likelihood of an individual board being below a specific value then I need to know the deviation of the individual boards, i.e. the standard deviation of the boards. If I want to know how likely the mean of all the boards is less than a specific value I need to know the uncertainty of the mean, i.e. the standard error of the mean.

If your argument is I probably don’t want to know anything about the mean length of a board, you might be correct. But that just illustrates the problem with your example.

Reply to  Bellman
July 2, 2023 12:38 pm

Still you pathetic peeps have not refuted a single word of Pat’s paper.

Nothing!

Reply to  karlomonte
July 2, 2023 5:30 pm

Questioning, even snarky questions, is not refuting. Refutation takes knowledge, and the ability to show where faults occur. All you see here are veiled accusations that something is wrong because it violates a tenet of faith. Showing references that support assertions would go far!

Reply to  Jim Gorman
July 2, 2023 5:55 pm

I’m not trying to refute the paper. I’ll leave that to people with more expertise. I’m just trying to resolve what appears to be a major misdirection.

“All you see here are veiled accusations that something is wrong because it violates a tenet of faith.”

It’s called skepticism. You read something, see things that don’t make sense and ask questions about them. The fact that you and others get so defensive over this one paper, and insist that no-one who isn’t an expert is allowed to ask questions or express doubts, feels more like an article of faith than what we are doing.

I think it’s an interesting juxtaposition of the attitude here to this paper, compared with the usual skepticism directed at any paper you consider to be on the other side. It was only a couple of days ago Pat Frank was saying anyone who wrote a paper showing too much warming should be thrown in jail.

Reply to  Bellman
July 2, 2023 9:47 pm

you and others … insist that no-one who isn’t an expert is allowed to ask question or express doubts,

That’s a lie, Bellman.

It was only a couple of days ago Pat Frank was saying anyone who wrote a paper showing too much warming should be thrown in jail.

Another lie.

You appear to exemplify your own accusation, Bellman.

You and bdgwx are not skeptics. Skeptics possess a critical understanding of the question. You and bdgwx evidence no such knowledge.

Reply to  Pat Frank
July 3, 2023 4:53 am

That’s a lie, Bellman.

Maybe an exaggeration.

Another lie.”

I agree with you, Tom. Pace Rich Davis, but to my mind we need to be juridically effective.

File charges of criminal negligence and malfeasance against the narrative purveyors for the evident harms they have caused with their global warming pseudo-science.

https://wattsupwiththat.com/2023/06/24/global-warming-has-begun-expert-tells-senate-1988-exaggerations-vs-today/#comment-3738956

This was in response to Tom Abbot saying

I personally think Hansen ought to be put in jail for his lies along with a couple of dozen of his fellow colleagues/liars.

Reply to  Bellman
July 3, 2023 5:39 am

Bellman, you wrote, “Pat Frank was saying anyone who wrote a paper showing too much warming should be thrown in jail.

You misrepresented the difference between writing a legitimate paper and criminal malfeasance.

Your lie is the attempt to obfuscate that difference.

Hansen’s malfeasance has led to 10s of thousands of premature deaths and the theft of trillions.

bdgwx
Reply to  Pat Frank
July 3, 2023 1:07 pm

Do you think Hansen should be convicted of a crime? Anyone else?

Reply to  bdgwx
July 3, 2023 6:14 pm

Did Hansen intentionally leave uncertainty bounds off the air temperature projections he presented in his 1988 testimony?

Do you think he consciously knew that his testimony before the Senate Committee that, “Global warming has reached a level such that we can ascribe with a high degree of confidence a cause and effect relationship between the greenhouse effect and the observed warming.” was scientifically indefensible and therefore false?

bdgwx
Reply to  Pat Frank
July 4, 2023 11:42 am

I was just curious if Hansen should be prosecuted. Do you think he should? What about Dr. Spencer, Dr. Christy, and others?

Reply to  bdgwx
July 4, 2023 12:01 pm

An infamous loaded question from bee’s wax — hoping to play another fun round of Stump the Professor.

His problem is that he doesn’t realize that he is the one who is stumped.

Reply to  bdgwx
July 4, 2023 12:39 pm

You know what, engineers and architects are put in jail sometimes and certainly endure civil penalties for not using due diligence in their analysis of data. That means sometimes saying data is not fit for purpose and starting over. Most scientists do the same thing even though they may not suffer the indignity of being found guilty of negligence. I’ll bet Dr. Frank can tell about experiments that he has thrown out because of faulty data or calculations.

The hysteria that Hansen initiated, causing untold trillions of dollars to be spent and making poor people poorer, may very well be found to be a failure of due diligence in his use of data.

If this turns out to be the case should he suffer some consequences?

Reply to  Bellman
July 3, 2023 5:46 am

If the thing being measured is smaller than the detection limit of all the instruments, then yes, it’s impossible to see it so it won’t show up in an average.

You hid your ‘yup’ in the middle of your equivocation.

Reply to  Pat Frank
July 3, 2023 5:35 am

They are both trolls at heart. No understanding of the subject but a lot to say about it.

Reply to  Jim Gorman
July 2, 2023 8:25 pm

All they do is rant about RMS and “random” then push the downvote button.

Success achieved!

Reply to  Bellman
July 2, 2023 12:40 pm

Great, a word salad answer. I gave you a complete description of the product. Even the mean was 8′. Yet you couldn’t answer with a simple yes or no.

Maybe because you know the claimed uncertainty was unreasonable, given the uncertainty in each individual measurement.

Reply to  Jim Gorman
July 2, 2023 2:23 pm

You keep coming up with these board-obsessed analogies, and I keep explaining why they miss the point. But if you want a detailed answer:

I’ve got 1000 8′ boards. I measured each one of them to the nearest 1″.

Are you saying they are 8′, or that you don’t know what length they are but they all measured 8′ when you rounded to the nearest inch? I’ll assume the latter, and you have no prior, so the actual length could be anything; the best we can assume is that none of the 1000 was less than 7′ 11.5″ or greater than 8′ 0.5″.

So maybe the person you bought them from is honest and they are all close to 8′, or maybe he’s sold you 1000 boards that are all slightly larger than 7′ 11.5″, knowing you can’t afford a better tape measure.

I can sell them to you and I’ll guarantee they are all 8′ ±0.005″. IOW 8.0004 feet long.

You can’t make that guarantee. Maybe you’re the one trying to con me. But even if you are honest and the average length of a board is 8′, you’ve already told me the best you can do is guarantee none are shorter than 7′ 11.5″.

Would you buy the lot of boards if you needed boards that were exactly 8′?

Obviously not.

How many are shorter than 8′ by an inch? How many are longer?

No idea. As I said they could all be shorter than 8′ or all longer. If you are truly random lengths then you might be able to assume there’s an equal probability of each being either shorter or longer than 8′.

Now – do you have any actual point to make about how to determine the average length or how this is applicable to the uncertainty of a global temperature average?

Reply to  Bellman
July 2, 2023 4:18 pm

“Are you saying they are 8′ or that you don’t know what length they are but they all measured 8′ when you rounded to the nearest inch?”

You are exhibiting your total ignorance of uncertainty for all to see! The 1″ is an UNCERTAINTY. It is a function of the measuring device, not of the boards themselves! Included in that uncertainty is the resolution capability of the measuring device.

All you have is a bunch of sophistry and an inability to read simple English to offer as a rebuttal!

“You can’t make that guarantee.”

Then how can you make it for temperatures?

Reply to  Tim Gorman
July 2, 2023 4:40 pm

The 1″ is an UNCERTAINTY. It is a function of the measuring device, not of the boards themselves!

And here we go again. A pointless example with vague questions that has no relevance to the issue under discussion. But now if I ask for clarification, I’m the ignorant one.

I know. But the question stated the boards were all 8′ long, and I’m trying to figure out if you know they are all exactly 8′ long, or if they are varying within an inch of 8′.

“Then how can you make it for temperatures?”

I’m not. No-one is. Absolutely nobody with the intelligence they were born with, which might exclude some here, thinks that if you say the average has an uncertainty of ±0.1, then it means that all temperatures are guaranteed to be within 0.1°C of the average.

Reply to  Bellman
July 2, 2023 5:13 pm

You can’t even make sense of simple English.

 I measured each one of them to the nearest 1″”

That’s 8′ +/- 1″.

It’s just one more indication that you don’t have even a basic understanding of uncertainty – not a clue!

Reply to  Bellman
July 2, 2023 4:15 pm

“…understanding how the uncertainty of an average can be different to the uncertainty of an individual measurement.”

The RESOLUTION of the average can’t be any greater than that of the individual members. If your resolution is the limiting factor then the average will have the same limiting factor!

Reply to  Tim Gorman
July 2, 2023 4:43 pm

And I disagree. The problem is all you have done these last two years is angrily shout the same slogans. If they didn’t work the first 1000 times, why do you think they will work now? Maybe you need to come up with an example that demonstrates what you are saying rather than just keep asserting it.

Reply to  Bellman
July 2, 2023 6:00 pm

And I disagree.

In which case you suppose data can appear out of thin air.

Reply to  Pat Frank
July 2, 2023 6:06 pm

Nope.

Reply to  Bellman
July 2, 2023 9:40 pm

Yup.

You disagree that “The RESOLUTION of the average can’t be any greater than that of the individual members

That disagreement amounts to an assertion that data can appear out of thin air.

Instruments cannot produce physically real data intervals smaller than their detection limit.

Reply to  Pat Frank
July 3, 2023 4:37 am

Still nope. The resolution of an average can be greater than that of the individual members, but that doesn’t mean the data was pulled out of thin air. The information comes from the data.

Instruments cannot produce physically real data intervals smaller than their detection limit.

Instruments can’t but the average of many instruments can.

I’m assuming by detection limit you actually mean resolution. If the thing being measured is smaller than the detection limit of all the instruments, then yes, it’s impossible to see it so it won’t show up in an average. And for resolution, as I keep saying, if the variance in the data is less than the resolution, averaging is of no use.

Reply to  Bellman
July 3, 2023 5:48 am

If the thing being measured is smaller than the detection limit of all the instruments, then yes, it’s impossible to see it so it won’t show up in an average.

You hid your ‘yup’ in the middle of your equivocation.

Reply to  Pat Frank
July 3, 2023 6:24 am

Do you think the variation in temperatures across the globe is smaller than the resolution of the thermometers?

Reply to  Bellman
July 3, 2023 7:21 am

When the average global temp anomaly is given out to the hundredths digit when the resolution of the instruments only justifies values in the units digit or perhaps in the tenths digit then WHO KNOWS for certain?

You *still* don’t grasp the basics of uncertainty or of significant digits.

Reply to  Bellman
July 3, 2023 11:26 am

irrelevant.

Reply to  Pat Frank
July 3, 2023 1:26 pm

Why?

Reply to  Bellman
July 3, 2023 2:01 pm

Because.

Reply to  karlomonte
July 3, 2023 3:33 pm

Thought so.

Reply to  Bellman
July 3, 2023 4:53 pm

No, you don’t think. This is your problem.

Reply to  karlomonte
July 3, 2023 7:30 pm

I’ll let you have the last word. Otherwise this will go on forever.

Reply to  Bellman
July 3, 2023 5:53 pm

Because the topic is instrumental resolution, not the variation in temperatures across the globe.

Reply to  Pat Frank
July 4, 2023 4:34 am

That is an impossibility in the alternate universe they live in. *ALL* uncertainty cancels and doesn’t have to be considered in their universe. Trend lines only have to consider stated values because there is no uncertainty in the stated values – it all cancels out.

Reply to  Pat Frank
July 4, 2023 5:38 am

To sum up this thread:

My claim is that if measurements vary by more than the resolution, then it is possible to reduce the resolution in the average. I also said that this isn’t the case if the variation is less than the resolution.

Pat Frank tried to claim that the second point meant I was agreeing with him. I pointed out that in the real world temperatures do indeed vary by a lot more than the instrument resolution (they would serve no purpose if that wasn’t the case.)

To which PF claimed it didn’t matter because his topic was instrument resolution and not the variation in temperatures across the globe.

That seems to be the problem all along. Pat Frank wants to give the impression he’s talking about the uncertainty in the global average, but in reality all he’s ever talking about is the uncertainty of individual measurements.

Reply to  Bellman
July 4, 2023 6:05 am

My claim is that if measurements vary by more than the resolution, then it is possible to reduce the resolution in the average.

And as always, you spread bullshite propaganda, trying to keep your political agenda alive.

“He’s dead, Jim.”

Reply to  Bellman
July 4, 2023 8:33 am

My claim is that if measurements vary by more than the resolution, then it is possible to reduce the resolution in the average.

Wrong. Complete misportrayal. Instrumental resolution is a constant lower limit of uncertainty.

“…all he’s ever talking about is the uncertainty of individual measurements.

About resolution, — the constant instrumental lower limit of uncertainty (not error), and about calibration — the uncertainty due to field inaccuracy of the instruments.

Both of which condition every single measurement and condition the global average temperature.

You wrote of problems, Bellman. Your problem, very evidently, is that you don’t understand the topic, do not recognize your ignorance, and fail to apply modesty thereto.

Instead, you impose your confused view as though it is a mistake made by others. And a partisan fervor for one outcome prevents you ever correcting yourself.

Reply to  Pat Frank
July 4, 2023 11:03 am

Wrong. Complete misportrayal. Instrumental resolution is a constant lower limit of uncertainty.

Then provide some evidence, reason, demonstration or proof. All I get is people insisting it must be true.

Reply to  Bellman
July 4, 2023 11:46 am

You whine about people giving you detailed explanations, and whine about getting short rebuffs — is there a length of answer to which you would pay attention?

Reply to  Bellman
July 4, 2023 5:29 pm

Systematic uncertainty can’t be reduced through statistical means. A common systematic uncertainty that exists because of the design of a measuring apparatus can’t be eliminated through statistics, it doesn’t “cancel out”, and it conditions every single measurement taken by that apparatus.

It’s not just “people” asserting this. Recognized experts like Taylor, Bevington, and Possolo assert this as well. The fact that *you* don’t believe it to be true is *your* issue, not an issue of “people”.

You’ve been given the references and quotes over and over for two years. And you still refuse to learn.

Reply to  Tim Gorman
July 4, 2023 6:36 pm

Systematic uncertainty can’t be reduced through statistical means.

I’m glad you agree.

But we are not talking about a systematic error here. We are talking about instrumental resolution.

Reply to  Bellman
July 5, 2023 6:45 am

Your cognitive dissonance never ceases to amaze me. Uncertainty due to the physical constraints of the measuring device *is* systematic uncertainty. It colors every single measurement made with the device in the same manner every single time. Resolution *IS* a physical constraint of the measurement device!

Do you *ever* stop to think before making such idiotic statements?

Reply to  Tim Gorman
July 5, 2023 9:13 am

Bellman and bdgwx worship at the altar of JCGM. Let’s see what 100:2008 says about instrumental resolution.

3.3.1 … Thus the uncertainty of the result of a measurement should not be confused with the remaining unknown error.

3.3.2 In practice, there are many possible sources of uncertainty in a measurement, including:
f) finite instrument resolution or discrimination threshold;

F.2.2.1 The resolution of a digital indication
One source of uncertainty of a digital instrument is the resolution of its indicating device. For example, even if the repeated indications were all identical, the uncertainty of the measurement attributable to repeatability would not be zero, for there is a range of input signals to the instrument spanning a known interval that would give the same indication. If the resolution of the indicating device is δx, the value of the stimulus that produces a given indication X can lie with equal probability anywhere in the interval X − δx/2 to X + δx/2. The stimulus is thus described by a rectangular probability distribution (see 4.3.7 and 4.4.5) of width δx with variance u² = (δx)²/12, implying a standard uncertainty of u = 0,29δx for any indication.

The division by 12 involves the assumption of a triangular distribution — a guess that the physically true value lies closer to the middle of the ignorance width than to the sides.

This assumption is unjustifiable, in part because it falsely implies that the researcher knows the probability regions within the ignorance width.

Physical scientists and engineers are typically conservative about uncertainty; very reluctant to assume what is not in evidence, such as claiming precocious knowledge of an unknown.

The adherence to the standard of knowledge in physical science and engineering (the courage to admit ignorance) requires division by 3 (a box ignorance width) rather than 12, in turn requiring a standard uncertainty of u = 0.58δx for the uncertainty due to resolution (the detection limit).

And there’s no getting inside that 0.58δx.
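The two figures in dispute can be reproduced directly (this only checks the arithmetic of the two divisors; it takes no side on which divisor is justified):

```python
import math

dx = 1.0                      # resolution width, in instrument units
u_gum = dx / math.sqrt(12)    # GUM F.2.2.1: rectangular distribution of
                              # width dx, variance dx^2/12
u_box = dx / math.sqrt(3)     # division by 3 instead of 12, as argued above
print(f"u (divide by 12) = {u_gum:.2f} * dx")   # 0.29 * dx
print(f"u (divide by 3)  = {u_box:.2f} * dx")   # 0.58 * dx
```

So 0.29δx and 0.58δx follow from (δx)²/12 and (δx)²/3 respectively; the disagreement is over which variance is appropriate, not the square roots.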

Reply to  Pat Frank
July 5, 2023 11:43 am

Bellman and bdgwx worship at the altar of JCGM.

Within the space of a few hours I’m accused of being a heretic because I said I disagree with some point in the GUM, and then I’m accused of worshipping at its altar.

The only reason I even heard of the GUM was because I was told to follow equation 10 in order to understand how uncertainty inevitably increased with sample size. To my mind there’s not much to disagree with in the GUM, but I find a lot of it confusingly stated, and it feels a lot like the proverbial horse designed by a committee.

The division by 12 involves the assumption of a triangular distribution

What?! No, they’ve just told you “the value of the stimulus that produces a given indication X can lie with equal probability anywhere in the interval X − δx/2 to X + δx/2,” and that this means “The stimulus is thus described by a rectangular probability distribution.”

That’s why they divide by 12. For a uniform distribution

\sigma = \sqrt{\frac{(b - a)^2}{12}}
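The formula for the standard deviation of a uniform distribution can be checked numerically (a quick Monte Carlo sketch; the interval [−0.5, 0.5] is an arbitrary illustration):

```python
import math
import random

# Empirical check: for X uniform on [a, b], sd(X) = (b - a)/sqrt(12).
random.seed(1)
a, b = -0.5, 0.5
xs = [random.uniform(a, b) for _ in range(100_000)]
mean = sum(xs) / len(xs)
sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
print(f"empirical sd = {sd:.4f}, theoretical = {(b - a) / math.sqrt(12):.4f}")
```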

Reply to  Bellman
July 5, 2023 12:22 pm

“To my mind there’s not much to disagree with in the GUM but I find a lot of it confusingly stated, and it feels a lot like the proverbial horse designed by a committee.”

That’s because you have no basic knowledge of what they are talking about.

You still can’t read. PF: “The adherence to the standard of knowledge in physical science and engineering (the courage to admit ignorance) requires division by 3 (a box ignorance width) rather than 12″ (bolding mine, tpg)

Reply to  Tim Gorman
July 5, 2023 7:12 pm

You still can’t read.”

He said they were using a triangular distribution when they state they were using a uniform one. He claimed this because they divided by 12, not understanding the equation for standard deviation. Then he posted a couple of irrelevant sections from the GUM that just confirmed the division by 12 was correct. But sure, I’m the one who can’t read.

If he means by that a p-box, I’m not sure why he would think there is any epistemic uncertainty in a value lying randomly between two other values. Or what he’s dividing by 3 to get the width.

And presumably you will be yelling at him for claiming the GUM is wrong, and insisting he writes to NIST to explain why that section is wrong.

Reply to  Bellman
July 5, 2023 1:46 pm

JCGM 100:2008 4.3.9 Note 1: “By comparison, the variance of a symmetric rectangular distribution of half-width a is a²/3 [Equation (7)] and that of a symmetric triangular distribution of half-width a is a²/6 [Equation (9b)].”

H.6.3.3: “Also let the maximum difference be described by a triangular probability distribution about the average value xz′/2 (on the likely assumption that values near the central value are more probable than extreme values — see 4.3.9)”

Search for triangular.

Reply to  Pat Frank
July 5, 2023 4:16 pm

the variance of a symmetric rectangular distribution of half-width a is a²/3 [Equation (7)]”

Which is exactly what I said.

In F.2.2.1 δx is the resolution of the instrument, so its half-width is δx / 2. Hence the variance is

(δx / 2)²/3 = (δx)²/(2² × 3) = (δx)² / 12

If you wanted a triangular distribution you would end up with (δx)² / 24.
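The /12 versus /24 arithmetic above can be verified numerically; a minimal sketch (the seed and sample count are arbitrary choices of mine):

```python
import random
import statistics

dx = 1.0  # resolution; half-width of both distributions is dx/2
rng = random.Random(42)
n = 200_000

# Rectangular distribution of half-width dx/2:
# variance = (dx/2)^2 / 3 = dx^2 / 12
rect = statistics.pvariance(rng.uniform(-dx/2, dx/2) for _ in range(n))

# Triangular distribution of the same half-width:
# variance = (dx/2)^2 / 6 = dx^2 / 24
tri = statistics.pvariance(rng.triangular(-dx/2, dx/2, 0) for _ in range(n))

print(round(rect, 3), round(tri, 3))  # 0.083 0.042
```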

Also let the maximum difference be described by a triangular probability distribution”

Is from H.6.3.3: Uncertainty of the correction due to variations in the hardness of the transfer-standard block.

It has nothing to do with resolution. They explain why they assumed a triangular distribution, and the variance is given by

u²(Δb) = (xz’)² / 24

As I said, dividing by 24.

Reply to  Bellman
July 5, 2023 10:31 pm

The full discussion is in 4.3.7 and 4.3.9.

If the half-width is δx/2, the variance is [(+δx/2) − (−δx/2)]²/12 = [2δx/2]²/12 = [δx]²/12 for a symmetric (box) distribution, and σ = δx/√12 ≈ ±δx/3.5.

F 2.2.1 is nevertheless misleading. It says, “If the resolution of the indicating device is δx, …”, leading to your “interval X − δx/2 to X + δx/2.”

But the standard notation for a single instrument is resolution = ±δx. The notation in F 2.2.1 introduces a “/2,” which increases the divisor by way of a formalism artifact.

More properly, [(+δx) − (−δx)]²/12 = [2δx]²/12 = 4(δx)²/12 = (δx)²/3.

H.6.3.3 eqn. H.37 discusses the special case of a difference resolution of test and calibration instruments, Δb, not the resolution of a single instrument.

Reply to  Pat Frank
July 5, 2023 11:46 am

The first thing a smart lawyer would ask, when dealing with how a given measurement was obtained, is: why did you divide by 3 instead of 12, thus allowing the use of cheaper material than what was needed?

I showed the Instagram for a reason. After checking the link, I couldn’t get video and audio at the same time. The real points were that the valve guide seating requirement is 25/32″, not 26/32″ or 24/32″. The valve seats had 30° and 60° slopes, not 29° and 61°. These are what machinists deal with every day. By the way, 25/32″ = 0.78125″. That’s resolution out to the 1/100,000ths place. One isn’t going to get that by using a ruler with 1/16″ markings. Lest anyone wonder why such an odd specification: the manufacturer could have said 12/16″ = 3/4″ or 13/16″ = 0.8125″, but they chose something in between for a reason. They needed a precise depth for longevity. Consequently, very good resolution tools are needed to achieve the requirement.

Reply to  Tim Gorman
July 5, 2023 11:50 am

Uncertainty due to the physical constraints of the measuring device *is* systematic uncertainty.

Argument by assertion carries little weight. If, say, the physical constraint is that values are rounded up or down to the nearest unit, and there is no reason to suppose a systematic propensity for the values to be in any specific part of the uncertainty window, then it’s as likely that a value will be rounded down as up. Hence it is random.

Reply to  Bellman
July 5, 2023 12:05 pm

Have you patented the Bellmanian yardstick-micrometer yet? Better get on it, don’t want the “predatory chinese” to beat you to it.

Reply to  karlomonte
July 5, 2023 12:23 pm

Is that the same method as saying we can take the 2000 crap jokes you make each day and average them out to a coherent point? Because they have about the same likelihood of working.

Reply to  Bellman
July 5, 2023 1:06 pm

Pat Frank as well as Tim have shown you multiple times why resolution is a hard limit to what you can know, but it goes against what you “feel” so they can’t be right. And in peak irony you then go on to accuse them of argument by assertion!

Do you see the problem?

As always, attempting to educate you is proven a waste of time; now you whine about me pointing out your pseudoscience.

Reply to  karlomonte
July 5, 2023 2:31 pm

I have a hard time even crediting it as pseudoscience. It’s not any kind of science. It’s religious dogma. It’s got to be written on a tablet somewhere that only the CAGW cult knows about, that says “all uncertainty is random, Gaussian, and cancels”.

Reply to  Tim Gorman
July 5, 2023 4:14 pm

Thinking back, this started after Pat’s model uncertainty paper came out, when Stokes waved his hand and dismissed it by proclaiming “…the error can’t be that large…”. His acolytes followed suit with the same line, whereupon they were told that error isn’t uncertainty, which has to be propagated, and all.

They kicked and screamed, claiming that subtracting a baseline removes all “bias”, thus the integrity of the anomalies was intact. Desperate to show that averaging removes all warts, they latched onto sigma/root-N as the path to the promised land, which of course included stuffing the average formula into GUM 10. bdgwx rants about this unceasingly.

Overnight they became instant experts in uncertainty, and to this day still refuse to consider they don’t know squat. So yes you are right, the tiny anomalies are in front of the mule, and implicitly, even if they deny it, everything has to be Gaussian. But their religion is pseudoscientific in the sense that they know the answer prior to analysis (although they aren’t capable of doing any analysis beyond a linear least-squares fit, and the computer does this for them).

Reply to  karlomonte
July 5, 2023 4:46 pm

Nice. bdgwx doesn’t realize that when he writes

q_avg = Σx_i/N

he is finding an average.

when he then tries to say the uncertainty of the average is

u(q_avg) = sqrt[ (Σu(x_i)/N)² ]

he is finding the AVERAGE UNCERTAINTY, not the uncertainty of the average. They are not the same.

u(q_avg) = Σu(x_i) + u(N) by the rules of propagation. (assuming direct addition, of course. but quadrature won’t make any difference, u(N) still drops out)

With them it’s all pseudo-science, as you say. What a joke.

Reply to  karlomonte
July 5, 2023 8:36 pm

I’ve done some digging into bdgwx’s “functional relationship”.

1) Temps are not determined via multiple measurements of fundamental quantities that are used to calculate the combined value. Temperature is a fundamental SI base quantity that is measured directly. Consequently, fudging up a “functional relationship” consisting of an average of temperatures is not finding a temperature, it is finding a mean of several temperatures.

2) bdgwx’s little trick of a so called “functional relationship” is nothing more than finding a mean of a series of numbers by breaking them into small groups and then averaging the group values.

This does not justify dividing the uncertainty by “n”. They still add according to a sum (or by quadrature if justified).

Example:

1, 2, 3, 4, 5, 6, 7, 8, 9, 10. Mean = 55/10 = 5.5

m1 = (1 + 2) / 2 = 1.5
m2 = (3 + 4) / 2 = 3.5
m3 = (5 + 6 ) / 2 = 5.5
m4 = (7 + 8) / 2 = 7.5
m5 = (9 + 10) /2 = 9.5

(m1 + m2 + m3 +m4 + m5) / 5 = 5.5

11 + 12 + 13 + 14 + 15 + 16 + 17 + 18 + 19 +20. Mean = 15.5

m1 = (11 + 12) / 2 = 11.5
m2 = (13 + 14) / 2 = 13.5
m3 = (15 + 16 ) / 2 = 15.5
m4 = (17 + 18) / 2 = 17.5
m5 = (19 + 20) /2 = 19.5

(m1 + m2 + m3 +m4 + m5) / 5 = 15.5
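The worked example above can be reproduced in a few lines; averaging equal-sized group means gives back the overall mean exactly (an identity that holds only when the groups are the same size):

```python
# Mean of 1..10 computed directly and via equal-sized group means.
data = list(range(1, 11))
overall = sum(data) / len(data)                    # 55 / 10 = 5.5

pairs = [data[i:i + 2] for i in range(0, len(data), 2)]
group_means = [sum(p) / len(p) for p in pairs]     # 1.5, 3.5, 5.5, 7.5, 9.5
mean_of_means = sum(group_means) / len(group_means)

print(overall, mean_of_means)  # 5.5 5.5
```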

Reply to  karlomonte
July 5, 2023 4:45 pm

Pat Frank as well as Tim have shown you multiple times why resolution is a hard limit to what you can know

Have they? Or have they just repeated the claim ad nauseam?

but it goes against what you “feel” so they can’t be right

No. It goes against what I understand of the maths, and what I can demonstrate.

Do you see the problem?

Yes, a few people here believe that the resolution of an average cannot be better than the resolution of the instruments. They believe it so strongly that they can’t accept the possibility they are wrong, even when it is demonstrated to be wrong.

They believe so strongly that any argument against their belief is rejected as pseudoscience. It becomes the classic unfalsifiable hypothesis. Any evidence against it must be wrong because they know it to be impossible.

Reply to  Bellman
July 5, 2023 5:20 pm

As I said, you need to talk with the Forestry Department.

Reply to  Bellman
July 5, 2023 12:13 pm

You do realize that means you don’t know, right? Whoops, unknown! That doesn’t help in extending resolution, dude.

Reply to  Jim Gorman
July 5, 2023 12:19 pm

You do realize that means you don’t know, right?

Right, absolutely.

That’s why it’s called uncertainty. Uncertainty means you don’t know. Not knowing means you are uncertain.

For some reason people seem to think that not knowing / being uncertain, means it’s impossible to say anything about the scope of not knowing. But that’s the whole point of all this uncertainty analysis. To put limits and conditions on what you don’t know.

Saying I don’t know if this coin will come up heads if I toss it, is not the same as saying I don’t know if I toss the coin 100 times it will come up heads each time. They are both “don’t knows”, but not the same “don’t know”.
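The two kinds of “don’t know” can be put in numbers with the binomial distribution; a small sketch (the 40–60 window is my illustrative choice):

```python
from math import comb

n = 100  # fair-coin tosses

def p_heads(k):
    """Probability of exactly k heads in n fair tosses."""
    return comb(n, k) * 0.5 ** n

p_all = p_heads(n)                                # 0.5**100: effectively impossible
p_near_half = sum(p_heads(k) for k in range(40, 61))

print(p_all, round(p_near_half, 2))  # about 96% of runs land within 40-60 heads
```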

Reply to  Bellman
July 5, 2023 12:43 pm

But that’s the whole point of all this uncertainty analysis. To put limits and conditions on what you don’t know.”

You just don’t get it at all. Uncertainty puts limits on WHAT YOU KNOW, not on what you don’t know!

A coin flip is not a measurement using a measurement device. It is not even a counting situation to determine how many “something” happens in an interval.

Why do you compare taking a measurement to a coin flip?

Reply to  Tim Gorman
July 5, 2023 1:07 pm

He can never understand because it goes against what he wants to be the truth. Ergo pseudoscience.

Reply to  Tim Gorman
July 5, 2023 5:04 pm

Uncertainty puts limits on WHAT YOU KNOW, not on what you don’t know!

Two sides of the same coin.

A coin flip is not a measurement using a measurement device.

It’s an illustration of probability.

Why do you compare taking a measurement to a coin flip?

We were talking about uncertainty caused by rounding. I pointed out that in most circumstances there’s the same chance (0.5) of rounding up or down, say by 1. With 100 measurements it’s as certain as it could be that not all readings will be rounded up, and a lot more likely that close to 50 will be up and 50 down. This inevitably means the rounding error in the average will be closer to zero than to plus or minus 1.
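That argument can be simulated directly; a sketch under the stated assumption (which is exactly what is disputed downthread) that rounding errors are independent and uniform on ±0.5:

```python
import random

rng = random.Random(1)

def mean_rounding_error(n):
    """Average of n independent rounding errors, each uniform on (-0.5, 0.5)."""
    return sum(rng.uniform(-0.5, 0.5) for _ in range(n)) / n

trials = 10_000
one = sum(abs(mean_rounding_error(1)) for _ in range(trials)) / trials
hundred = sum(abs(mean_rounding_error(100)) for _ in range(trials)) / trials

print(round(one, 2))      # ~0.25: typical error of a single rounded reading
print(round(hundred, 2))  # ~0.02: shrinks roughly as 1/sqrt(N) under these assumptions
```

If the errors were instead correlated or systematic, as the other side of the thread argues, this cancellation would not occur.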

Reply to  Bellman
July 5, 2023 1:49 pm

I am reminded of sophomore year fall semester, EE201: the professor handing out the results of about the 2nd or 3rd exam. He quipped “…and some of you need to go talk to the Forestry Dept…”; at the time I thought he was having a bit of fun at the expense of those who had crashed and burned, but he was right. They were never going to get it, and they needed to go switch majors before he would have to fail them.

This is you — you will never get this stuff, and you need to switch majors.

Reply to  Bellman
July 5, 2023 1:58 pm

But that also means that whatever distribution you think you know won’t matter one iota. What will happen next is not under your control, it is unknown, i.e., uncertain.

Reply to  Bellman
July 5, 2023 12:24 pm

Rounding is *NOT* systematic uncertainty. Can you get *anything* right?

Can you even state what the rule for rounding *is*?

Reply to  Tim Gorman
July 5, 2023 1:11 pm

I am reminded of the Dan Aykroyd fish malt blender.

Reply to  Bellman
July 5, 2023 1:50 pm

If say the physical constraint is that values are rounded up or down…

Rounding is not a physical constraint.

Reply to  Pat Frank
July 5, 2023 4:31 pm

Then what is your argument. This thread started becasue you said

Wrong. Complete misportrayal. Instrumental resolution is a constant lower limit of uncertainty.

and then Tim insisted

Systematic uncertainty can’t be reduced through statistical means.

and I said

But we are not talking about a systematic error here. We are talking about instrumental resolution.

To which Tim replied

Your cognitive dissonance never ceases to amaze me. Uncertainty due to the physical constraints of the measuring device *is* systematic uncertainty.

At which point I returned to rounding as a result of instrument resolution.

So is uncertainty caused by rounding to the limit of a display systematic uncertainty or not?

Reply to  Bellman
July 5, 2023 4:53 pm

Pat already answered you on this. Your short memory is failing again.

Rounding is not uncertainty. Rounding is a function of resolution, i.e. what’s the next value after the 3 1/2 digit display on a voltmeter. Resolution *is* uncertainty – systematic uncertainty. Resolution is instrumental uncertainty. It affects each and every measurement. But none of this is the uncertainty Pat analyzed. What Pat is looking at is minimum detection limits. That too is systematic uncertainty but it is *not* the same as resolution uncertainty.

Why you can’t understand this is beyond me. It’s readily apparent that you have absolutely ZERO experience in metrology yet you feel you have enough expertise in the subject to become the arbiter of what is right and what is wrong with the statements of all the experts in the field. Unfreakingbelievable.

Reply to  Tim Gorman
July 5, 2023 6:26 pm

Rounding is not uncertainty.

Of course rounding is uncertainty. If you round to a particular mark you can’t know where the value lay within the interval. That’s the whole point of all these SF rules you are so fond of.

Pat even quoted a part of the GUM where they talk about the uncertainty from rounding. F.2.2.1 The resolution of a digital indication “One source of uncertainty of a digital instrument is the resolution of its indicating device.”

Resolution *is* uncertainty

Word games. The uncertainty of resolution caused by a fixed interval is caused by rounding to the nearest point.

systematic uncertainty.

And just saying it doesn’t make it true. Look at the GUM passage. It specifically points out that it’s described by a rectangular probability distribution.

It affects each and every measurement.

You keep saying that as if it means something to you.

It’s readily apparent that you have absolutely ZERO experience in metrology…

Never denied that. But it doesn’t mean I can’t read.

yet you feel you have enough expertise in the subject to become the arbiter of what is right and what is wrong

No, I just have enough understanding of maths and statistics to see when you are wrong.

with the statements of all the experts in the field

I’m not arguing with any experts in the field. But if I was, so what? You’re the one who keeps objecting to arguments from authority.

You seem to have no problem telling all the experts in the field of climate science they are wrong, even though you have no expertise – apart from building stud walls.

Reply to  Bellman
July 5, 2023 6:52 pm

You have not progressed one micron past “the error can’t be that big!”

“experts in the field of climate science” — got a list of these?

Reply to  karlomonte
July 6, 2023 5:05 am

You have not progressed one micron past “the error can’t be that big!”

Not sure when I said that, but it’s a useful smell test. If the errors seem impossibly big, it’s a good indication you should look closely at your assumptions and workings. In your case, if you are claiming the average of 10000 thermometers can have an uncertainty of ±50°C when an individual thermometer’s uncertainty is ±0.5°C, then yes, that’s impossibly big.

You seem to have no problem saying things are wrong because the uncertainties can’t be that small.

Reply to  Bellman
July 6, 2023 5:35 am

Why is it impossibly big? Annual temps certainly vary at least that much at one location. Temps between the SH and NH can have that kind of variance on a daily basis.

If you don’t know the variance of your distribution and chase that variance through all the averaging then how do you know the variance can’t be that big?

Once again, you are stating a “belief” not a fact. A belief founded on cult dogma from the CAGW religion – not on actual reality.

Reply to  Tim Gorman
July 6, 2023 3:15 pm

Annual temps certainly vary at least that much at one location.”

I doubt there are many places that vary by 50°C a year. But as always it’s irrelevant, as we are not talking about the uncertainty at one location, or even in this case the uncertainty of the range of temperature. All we are discussing is the uncertainty caused by a ±0.5°C instrument uncertainty. There is no way that magnitude of error could increase 100-fold by averaging thousands of instruments.

If you don’t know the variance of your distribution and chase that variance through all the averaging then how do you know the variance can’t be that big?

I’ll assume you no longer want to talk about how measurement uncertainty propagates, and actually talk about the good old SEM, or ESDOM or what the current woke term is.

So let’s say you do have a random set of thermometers reading a value on one day across the globe. The obvious point is you do know the variance – it’s the variance in the data. And more usefully you have the standard deviation of the data – and you should know by now how to use that standard deviation. With a sample size of 10000, the SEM is SD / 100, so unless your standard deviation around the world is 5,000°C, I still say an uncertainty of 50°C is too big.

But ignoring the maths for a moment – if you say for example that the actual average is 14°C, and you occasionally see temperatures on the earth that are as much as 50°C warmer or colder than that, then you have to consider how likely it is that most of your 10000 random samples all happened to come from that one spot.
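The SD / 100 arithmetic above is just the standard error of the mean; a minimal sketch of that claim (the 15 °C spread is an assumed illustrative figure, not a measured one, and the whole calculation rests on the independence assumption the thread disputes):

```python
import math

n = 10_000          # number of readings
sd = 15.0           # assumed standard deviation of the readings, deg C

sem = sd / math.sqrt(n)        # standard error of the mean = SD / 100 here
sd_needed = 50 * math.sqrt(n)  # SD required before the SEM reaches 50 deg C

print(sem)        # 0.15
print(sd_needed)  # 5000.0
```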

Reply to  Bellman
July 6, 2023 4:16 pm

I doubt there are many places that vary by 50°C a year. “

Have you *ever* been out of a climate controlled environment?

It isn’t unusual here on the central plains of the US to see near 100F (about 40C) temps in the summer and 0F (-17C) temps in the winter. That’s a 57C annual change!

But as always it’s irrelevant as we are not talking about the uncertainty at one location “

Of course we are! The variance of every component has to be considered and propagated when combining them! Variance grows just like uncertainty does.

You are *still* laboring under the misconception that all uncertainty (variance) is irrelevant because it all somehow cancels. Simply unfreakingbelievable.

And you believe we can’t prove why climate science is a joke?

The obvious point is you do know the variance – it’s the variance in the data”

The variance is the combination of the variance of each individual piece! Especially when each component is the mid-range value of two daily temperatures!

Do you *ever* stop to think about what you say?

“And more usefully you have the standard deviation of the data”

That standard deviation is based on mid-range values which have variances – which you just blissfully ignore!

“With a sample size of 10000”

You have a sample made up of mid-range values, i.e. two samples from a daily temperature profile that aren’t even average values. That mid-range value changes based on the variance of the daily temperature.

This is why everything you say is a joke. You want to ignore those statistical descriptors that are inconvenient, like variance and uncertainty! Just like climate science!

“if you say for example that the actual average is 14°C”

That average is useless without a variance associated with it. And you think you do statistics?

Reply to  Bellman
July 6, 2023 6:25 pm

Another waft of idiotic hand waving.

Reply to  Bellman
July 6, 2023 6:26 am

You sit there in front of the computer lecturing experienced professionals about subjects for which you have no ability to grasp even the basics.

Do you see the problem?

Reply to  karlomonte
July 6, 2023 2:47 pm

You sit in front of your device and keep telling the world how every climate scientist, or anyone who’s produced a global temperature analysis, is wrong and doesn’t understand anything about the subject. Yet you won’t actually explain your concerns to them. Do you see the problem?

Reply to  Bellman
July 6, 2023 3:05 pm

I participate here and on Twitter. If climate scientists refuse to read sites that have criticisms, that is their prerogative.

Have you communicated your criticism of UAH to Dr. Christy or to the NOAA STAR satellite team, whose data now agrees with UAH? Show us what you told them!

Reply to  Jim Gorman
July 6, 2023 4:20 pm

I’ve not criticized UAH – maybe you are confusing me with karlo. I use the data as is. I also comment from time to time on Spencer’s blog.

Reply to  Bellman
July 6, 2023 4:00 pm

And it can be proven! Climate science assumes all uncertainty cancels, both systematic and random. Climate science assumes variance doesn’t grow when combining random variables with different distributions. Climate science thinks a daily mid-range temperature value is an “average daily temperature”. Climate science thinks combining NH temps with SH temps, both of which have different variances, is just fine and no regard has to be paid to the different variances. Climate science thinks calculating a trend line from the stated temperature values give a “true trend line” with no regard for the uncertainties associated with those stated values.

You do exactly the same!

Reply to  Bellman
July 6, 2023 8:16 pm

Are you a parrot?

Reply to  Bellman
July 5, 2023 10:35 pm

Rounding is not a result of instrument resolution.

Resolution is something an instrument has, Rounding is something you do.

Reply to  Pat Frank
July 4, 2023 5:25 pm

bellman believes that there is nothing statistics can’t do. That’s probably true in his alternate universe.

Reply to  Tim Gorman
July 4, 2023 7:24 pm

There are lots of things statistics can’t do. Figure out how many times you will lie about me in the next 15 minutes, for one.

Reply to  Bellman
July 4, 2023 9:11 am

My claim is that if measurements vary by more than the resolution, then it is possible to reduce the resolution in the average. I also said that this isn’t the case if the variation is less than the resolution.”

You are just plain wrong. Resolution has to do with MEASUREMENTS. The average is *NOT* a measurement, it is a statistical descriptor. As a statistical descriptor it should not be given a value that misleads anyone as to the resolution of the actual measurements used to develop the average. Doing so is just plain lying.

“I pointed out that in the real world temperatures do indeed vary by a lot more than the instrument resolution (they would serve no purpose if that wasn’t the case.)”

And you didn’t understand the answer. The uncertainty of a measurement is a combination of *all* uncertainty factors. Just because the stated value variation is more than the measurement uncertainty it doesn’t mean the measurement uncertainty can be ignored. Possolo *told* you this in TN1900 which you refuse to read and understand. He assumed there was no systematic error in any measurement and that the random error cancelled because the same thing was being measured by the same device.

I.E. – NO MEASUREMENT UNCERTAINTY TO CONSIDER.

The very same meme you *always* assume even though you say you don’t!

To which PF claimed it didn’t matter because his topic was instrument resolution and not the variation in temperatures across the globe.”

And the implications of that went flying by right over your head. You never even looked up and tried to understand why.

“That seems to be the problem all along. Pat Frank wants to give the impression he’s talking about the uncertainty in the global average, but in reality all he’s ever talking about is the uncertainty of individual measurements.”

See what I mean about you always assuming all uncertainty is random, Gaussian, and cancels? YOU JUST DID IT AGAIN!

The uncertainty of the individual measurements DETERMINES THE UNCERTAINTY OF THE GLOBAL AVERAGE!

The uncertainty of the global average is *NOT* the SEM which is based solely on the stated values and ignores the uncertainty in those stated values. It’s part and parcel with your ASSUMPTION THAT ALL UNCERTAINTY CANCELS!

You just can’t get away from that meme no matter how hard you try. It’s become a true joke and you don’t even know it.

Reply to  Bellman
July 3, 2023 5:55 am

“…that doesn’t mean the data was pulled out of thin air. The information comes from the data.”

Nope! That *IS* creating data out of thin air. You can’t glean more information from the data than it provides. That’s the whole concept of significant figures – which I guess you still don’t believe in.

“Instruments can’t but the average of many instruments can.”

Not if you follow significant digit rules.

Taylor, Rule 2.9: “Rules for Stating Answers – The last significant figure in any answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty.

Taylor, Rule 2.5: “Experimental uncertainties should almost always be rounded to one significant figure”

You have never actually studied Taylor on ANYTHING. You’ve just cherry-picked things with no conceptual understanding of the context in which they exist.

Nearly everything you assert is at odds with Taylor, Bevington, and Possolo even while you say you believe and understand what they say. Your cognitive dissonance is huge.

Reply to  Tim Gorman
July 3, 2023 6:18 am

You can’t glean more information from the data than it provides.

Indeed not. But the information the data does provide is much more than you seem to think. There’s a lot more information in 1000 measurements than there is in one.

That’s the whole concept of significant figures – which I guess you still don’t believe in.

Not this again. I believe in significant figure guidelines as a convenience. A style guide. I do not think they provide a better way of determining uncertainty than actually working out the uncertainty and stating it. And I think in some places the “rules” are inadequate or just wrong. Especially if you think it should be a rule that an average cannot be stated to more decimal places than the individual measurements.

Tim then goes on to quote Taylor, oblivious to the fact that the quotes demonstrate why the so called rules about averaging are wrong. He even gives a couple of exercises to demonstrate that the uncertainty of an average can be written to more decimal places than the measurements.

“Experimental uncertainties should almost always be rounded to one significant figure”

I take it you’ve pointed out to Pat Frank why he’s wrong to quote all the uncertainty figures to 3 significant figures.

Reply to  Bellman
July 3, 2023 7:18 am

Not this again. I believe in significant figure guidelines as convince. A style guide. I do not think they provide a better way of determining uncertainty than actually working out the uncertainty and stating it.”

I *gave* you the reason for significant figures. As usual you just blew it off. It is *NOT* a style guide. It is so you do not mislead others into believing you used higher resolution equipment than you actually had. There is a REAL WORLD reason for the use of significant digit rules.

From Libretexts on chemistry: “Significant Digits – Number of digits in a figure that express the precision of a measurement instead of its magnitude.”

When you include more significant digits in a result than justified by the precision of the measurements you made then you *are* misleading people on the precision with which the base measurements were made.

I know that probably makes no sense to you and that you simply don’t care – because you don’t live in the real world, let alone the real world of physical science. Most climate scientists don’t either.

Reply to  Bellman
July 3, 2023 8:21 am

I take it you’ve pointed out to Pat Frank why he’s wrong to quote all the uncertainty figures to 3 significant figures.

Go read the paper, fool, he explains why. Just tossing more shite at the wall.

Reply to  Tim Gorman
July 3, 2023 7:14 am

Nope! That *IS* creating data out of thin air.

Bingo! Another +7000!

Reply to  Bellman
July 3, 2023 6:14 am

The resolution of an average can be greater than that of the individual members

Read this from NIST.

2.4.5.1. Resolution (nist.gov)

Note the following statement.

The number of digits displayed does not indicate the resolution of the instrument.

from: Accuracy, Precision, and Resolution; They’re not the same! | Phidgets (wordpress.com)

Resolution is easily mistaken for precision, but it’s not always the case that you will have a high precision just because you have a high resolution. Even with many many decimal places in the values you are getting from a sensor, you may still find there is a lot of variability in the data, or in other words that there is low precision despite the high resolution.

Your statement really ignores the basis of resolution: the minimum change in the subject that can cause a change in the reading. You cannot obtain this information from averaging, nor can you eliminate the uncertainty, because the uncertainty remains in the average.

The only time your statement makes sense is in the presence of noise – true noise, not signal variations. Noise in a temperature signal would be variation in the measurement due to external heat sources. I’m not clear where those noise sources would occur as long as the thermometer was sited correctly. Are there incorrectly sited thermometers? Sure there are, but then one must define the frequency of the noise and make measurements frequently enough to recognize the noise and determine its quantity. Only with the new digital thermometers could this be done on a mechanized basis. One example would be where jet exhausts impact a thermometer momentarily. Another would be HVAC exhaust impacting the thermometer. HVAC could be caught with one-minute samples; jet exhaust would probably require 1-second sampling. Certainly not out of the question. The administrative problem would be the worst part, that is, trying to agree on a worldwide algorithm.

Reply to  Bellman
July 3, 2023 7:13 am

An apt analogy for your averaging idiocy is seen often in TV shows: the cops get some grainy low-res CCTV image, pass it off to the IT Guru, who then types away for a bit on his/her keyboard.

Voila! The license plate number is clear as day and the perp is revealed!

You think you can manufacture data that isn’t there — should be expected from the crowd who “adjust” historic data to remove “biases”, though.

Reply to  Bellman
July 2, 2023 8:28 pm

The problem is all you have done these last two years is angrily shout the same slogans.

When all else fails, run to the whitewash brush.

Reply to  karlomonte
July 3, 2023 4:19 am

Wow, you nailed it. Never an actual refutation of anything. Just examples that totally ignore propagation of uncertainty and assume resolution in a measurement can be increased mathematically.

Reply to  Bellman
July 2, 2023 4:12 pm

“Not sure what point that is making. You’re guaranteeing the length of each board, not the average.”

Unfreaking believable! He TOLD you what the average is – 8′! Can’t you read simple English?

“If all your measurements are within half an inch of 8′ all will be rounded to exactly 8′.”

How is that any different than temperatures?

“If on the other hand the boards vary in length by a few inches, your rounding will all differ, therefore some cancellation of errors is likely and the average can now be known to a resolution better than an inch.”

Unfreaking believable! A few inches? What is that? To change the rounding it would have to be a difference greater than or equal to 6″!

“the average can now be to a resolution better than an inch.”

Wow! Just wow! How did the resolution of the average change if the boards have a wider standard deviation? You *still* can’t measure any board to better than +/- 1″. That includes boards that are 6″ longer and 6″ shorter.

Reply to  Tim Gorman
July 2, 2023 4:24 pm

All just hand-waving trying to support untenable and illogical ideas.

And of course, relying on “cancellation of errors” that he denies he relies on.

Reply to  Tim Gorman
July 2, 2023 5:31 pm

Unfreaking believable! He TOLD you what the average is – 8′! Can’t you read simple English?

Calm down. No, he said he had 1000 8′ boards. If that’s correct the average will be 8′, but as you keep saying, it’s unknown. If it’s not unknown, why point to the uncertainty in the measurement, and why ask how many were bigger or smaller than 8′?

Honestly these examples are always so badly thought through. I’m sure they mean something to you when you come up with them, but you never describe them in a meaningful way or explain their relevance. And then get apoplectic when I try to clarify the specifications.

““If all your measurements are within half an inch of 8′ all will be rounded to exactly 8′.”
How is that any different than temperatures?”

Because temperatures are not all between 7.5 and 8.5°C.

“Unfreaking belivable! A few inches? What is that? To change the rounding it would have to be greater than or equal to 6″ difference! “

Perhaps if you didn’t use these antiquated units there wouldn’t be this confusion. I thought it was said the rounding was to the nearest inch, not foot. I didn’t realize American tape measures were so useless.

Let me check.

I’ve got 1000 8′ boards. I measured each one of them to the nearest 1″. I can sell them to you and I’ll guarantee they are all 8′ ±0.005″. IOW 8.0004 feet long.

I’m sure in the dark ages the ” was meant to mean inches not feet. Maybe it’s different now.

How did the resolution of the average change if the boards have a wider standard deviation?

If you haven’t figured this out by now, I doubt you ever will.

Reply to  Bellman
July 2, 2023 8:29 pm

You’re an idiot, this is the only possible logical conclusion.

Reply to  Bellman
July 3, 2023 3:49 am

“If that’s correct the average will be 8′, but as you keep saying it’s unknown.”

You can’t even get this straight! There is no reason you can’t know the average. But you can’t know the average to more decimal places than the measuring device allows! Thus 8′! Nor is the uncertainty unknown – it is +/- 1″!

“Honestly these examples are always so badly thought through.”

There is nothing wrong with the examples. The root of the problem is that you simply refuse to learn about measurement uncertainty so you can’t figure the examples out properly. If you would take Taylor and work out ALL of the examples and compare your answers to the ones in the back of the book, you might gain some actual knowledge. Stop just cherry-picking things that you think might support your religious beliefs.

Taylor says in his intro to Part I: “Chapter 3 describes error propagation, whereby uncertainties in the original measurements propagate through calculations to cause uncertainties in the calculated final answers. Chapters 4 and 5 introduce statistical methods with which the so-called random uncertainties can be calculated” (bolding mine, tpg)

Nowhere in Chapter 3 does he show averaging reducing uncertainty, NOWHERE. Nor does he show in Chapter 3 how systematic bias can cancel. That never seems to faze you in any way, shape, or form.

“Because temperatures are not all between 7.5 and 8.5°C.”

Unfreaking believable. So what? You think that is a refutation of anything? To get to the GAT you *still* average all the temperatures together, and the uncertainty of the measurements propagates to that average, just like with the boards!

“I thought it was said the rounding was to the nearest inch, not foot.”

And here we go with the nit-picking. You aren’t even bright enough to understand when you are seeing a typo!

“If you haven’t figured this out by now, I doubt you ever will.”

That’s not an answer. But it is ALWAYS your fallback so you don’t actually have to explain how your assertions always fail. Pathetic.

Reply to  Tim Gorman
July 3, 2023 4:54 pm

There is no reason you can’t know the average. But you can’t know the average to more decimal places than the measuring device allows! Thus 8′

Can you not see the contradiction here? You are claiming you know the average, then saying you only know it to the nearest inch. As you keep telling me, uncertain means you do not know.

Nor is the uncertainty unknown – it is +/- 1″!

How do you know that? The initial premise was that you were measuring to the nearest inch, with no other mention of uncertainty. If the rounding to the nearest inch is the only uncertainty then it’s ±0.5″.

If there are any other sources of uncertainty, you just don’t know what they are at this point.

And this argument doesn’t seem to be going anywhere as we are just back to the usual pathetic personal insults.

Reply to  Bellman
July 3, 2023 5:21 pm

The uncertainty interval, be it Type A or Type B, is, at its base, an educated guess. It should be wide enough that subsequent observations fall within the interval at least seven times out of ten.

There is no reason why you can’t estimate the uncertainty of a measuring device. Go look at the GUM for Type B uncertainty.

You simply can’t get ANYTHING right about uncertainty and how it is used, can you?

Reply to  Tim Gorman
July 3, 2023 5:59 pm

There is no reason why you can’t estimate the uncertainty of a measuring device.

This is a pretend example using fictitious boards to engage in some thought experiment, which nobody seems to know the purpose of.

If there’s an uncertainty in the measurement device, you can tell me what it is, as it can be anything you want it to be. All I want to know is what the point of this nonsense is and why you keep insisting I give meaningful answers about your fantasy world.

Just specify the parameters of this experiment, explain what you want me to solve and then say what the point is.

And please stop finishing every one of your increasingly deranged comments by saying I can’t get anything right.

Reply to  Bellman
July 3, 2023 6:21 pm

Tim’s right, you can’t.

Reply to  Bellman
July 4, 2023 4:56 am

“This is a pretend example using fictitious boards to engage in some thought experiment, which nobody seems to know the purpose of.”

Now that’s a real fine refutation there!

“If there’s an uncertainty in the measurement device you can tell me as it can be anything you want it to be”

I didn’t say that at all. I said you can estimate what it is! You simply can’t read even simple English. It’s no wonder you can’t make heads or tails out of Pat’s study!

“All I want to know is what is the point of this nonsense and why do you keep insisting I give meaningful answers about your fantasy world.”

I don’t live in a fantasy world, you do! I live in the real world, where the average value of a pile of random boards is of little use to me, since it can’t tell me an accurate total length of the three boards I pick from the pile. I need to know the uncertainty of each so I can ensure that they will span the distance I need them to. The “average uncertainty” only tells me that someone has tried to spread the total uncertainty equally over all the boards – a physical impossibility.

“Just specify the parameters of this experiment, explain what you want me to solve and then say what the point is.”

The parameters include not assuming that all error is random, Gaussian, and cancels.

I’ll stop saying you can’t get anything right when you actually say something right concerning measurement uncertainty and then actually incorporate it in your assertions instead of always falling back on your standard meme – and we all know what that is.

Reply to  Tim Gorman
July 4, 2023 5:26 am

Now that’s a real fine refutation there!

There’s nothing to refute. You still haven’t explained what your point is.

Reply to  Tim Gorman
July 4, 2023 6:06 am

Again and again it has been proven that the word “estimate” is not in bellcurveman’s vocabulary.

Reply to  Tim Gorman
July 1, 2023 5:10 pm

If you can believe it, I was pretty much told exactly that by an MIT physicist in the Q&A after giving a talk on temperature measurement uncertainty.

Reply to  Pat Frank
July 2, 2023 5:35 am

Oh, I can believe it. The education of physical science and its principles is sadly lacking today. I often tell the story of my youngest son who, when starting his college study in microbiology, was told by his advisor not to worry about taking any math or statistics classes. If he needed something done with data he could get a math major or grad student to analyze it for him.

I was stunned. That winds up with the blind leading the blind: a microbiologist with no way to judge whether the statistical analysis is believable, and a math major who has no idea of the underlying realities of biological data, including the uncertainties associated with it – the old “the stated values are 100% correct” bugaboo of statisticians today. Which we continually see in climate science.

Thankfully I convinced my son to take at least two semesters of statistics and probability. It has served him well in his career.

Reply to  Pat Frank
July 1, 2023 2:34 pm

Does your inability to see the sense of it mean it’s not sensible?

No, it’s quite possible I’m misunderstanding something. That’s why I’m trying to get you to explain what you mean.

Why would anyone think the resolution limit of the LiG thermometer should not condition a global mean annual LiG temperature or the annual LiG temperature anomaly?

What do you mean by “condition”? If you mean contribute to, I’d agree. If you mean dominate, I’d disagree.

Reply to  Bellman
July 2, 2023 5:17 am

“What do you mean by ‘condition’? If you mean contribute to, I’d agree. If you mean dominate, I’d disagree.”

How do you know what dominates if you don’t know the magnitude of each factor contributing to the uncertainty?

As usual, you are making an unjustified assumption based on your religious beliefs in CAGW.

bdgwx
Reply to  Pat Frank
July 1, 2023 5:51 pm

PF: Why would anyone think the resolution limit of the LiG thermometer should not condition a global mean annual LiG temperature or the annual LiG temperature anomaly?

Assuming condition means contribute to, then it does condition temporal (daily, monthly, annual, etc.) means; just not via RMS. It also conditions spatial (regional, global, etc.) means; just not via RMS.

Speaking of the spatial domain… I don’t see anywhere in the publication where you propagate the uncertainty in the spatial domain (i.e. grid mesh). That’s another issue and rabbit hole on its own.

Reply to  bdgwx
July 1, 2023 11:01 pm

Condition there means ‘sets the knowledge bounds.’

The paper is restricted to instrumental analysis.

Reply to  Bellman
July 1, 2023 7:16 am

As bdgwx has pointed out, the obvious problem with the insistence on RMS being correct for resolution uncertainties is that just before you do this, you use a calculation for the average of max and min that reduces the uncertainty. You have to explain why that’s correct for averaging two readings each with resolution uncertainty, but not for averaging thousands of instruments on a single day, or the global average over a month, or a year, or 30 years.

Reply to  Bellman
July 1, 2023 9:28 am

Let’s get real here: without tiny uncertainty numbers for your GAT lines, you and the rest of the global warming hoaxers are bankrupt.

So you and bgwxyz whine about minutia like “you are using RMS!”.

Reply to  karlomonte
July 1, 2023 10:25 am

Minutia? It’s the entire reason for the claim the global uncertainties are so large.

I’ve no idea why you think I have to believe all uncertainties are “tiny”. I’d like it if we could be more certain about the actual temperature change but I doubt that’s possible short of a time machine. That’s why I think it’s good there’s a range of different estimates, using different methods. I’ve always said I think the real uncertainty is probably greater than most of the estimates, just by comparing different sets.

What I don’t agree with is coming up with self-serving, implausible uncertainty figures based on dubious statistics. I’m not even sure why you think this is good for your cause. If you want to persuade people there has been no global warming, claiming that we might have had three times as much as previously thought doesn’t seem like a good strategy.

Reply to  Bellman
July 1, 2023 12:13 pm

What I don’t agree with is coming up with self serving, implausible uncertainty figures based on dubious statistics.

Another irony overload!

Reply to  karlomonte
July 1, 2023 1:08 pm

They simply can’t understand that when they use the formula (x1 + x2)/2 they are finding an average. When they find the uncertainty for that formula they are finding the average uncertainty, i.e. the total uncertainty divided by the number of members. The average uncertainty is *not* the uncertainty of the average!
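A sketch of the distinction, with invented numbers (two readings, each with the same assumed uncertainty u = 0.5, treated as uncorrelated for the propagation step; not a claim about which treatment is correct for temperatures):

```python
import math

u = 0.5                  # assumed per-reading uncertainty, invented for illustration
uncertainties = [u, u]
N = len(uncertainties)

# "Average uncertainty": the plain mean of the individual uncertainties.
average_uncertainty = sum(uncertainties) / N  # equals u

# Uncertainty of the average (x1 + x2)/2 under the standard random-error
# propagation rule: each term enters as u_i/N, combined in quadrature.
uncertainty_of_average = math.sqrt(sum((ui / N) ** 2 for ui in uncertainties))  # u/sqrt(2)

print(average_uncertainty)     # 0.5
print(uncertainty_of_average)  # ~0.354
```

Under the random, uncorrelated assumption the two quantities differ by a factor of √2 here; treating the errors as fully correlated (no cancellation) makes them coincide at u, which is exactly where the disagreement in this thread lives.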

Reply to  Tim Gorman
July 1, 2023 2:43 pm

(x1 + x2)/2 they are finding an average.

Says someone who insists (max + min) / 2 is not an average.

the total uncertainty divided by the number of members

How many more times. Total uncertainty is not the uncertainty of the total. The total uncertainty (assuming random errors) is usually found by adding in quadrature – that is, it is less than the sum of the uncertainties. The uncertainty of the average is only the same as the average uncertainty if you just add all the uncertainties and divide by N. There is only one person in this argument who is doing that.

Reply to  Bellman
July 1, 2023 3:19 pm

Says someone who insists (max + min) / 2 is not an average.

Just quit now while you’re behind, its easier.

Reply to  Bellman
July 2, 2023 10:43 am

“add all the uncertainties in quadrature and divide by N”

Eqns. 5 and 6.

Reply to  Pat Frank
July 2, 2023 12:06 pm

5 and 6 are not doing that. They would be doing that if you took the divisor from outside the square root sign. But what you are doing is adding in quadrature and dividing by root N.

Reply to  Bellman
July 2, 2023 12:41 pm

“Yes, it’s true, I really am this clueless” — the hapless Bellman.

Reply to  Bellman
July 2, 2023 1:13 pm

No. Eqns. 5 & 6 divide by N inside the root.

Reply to  Pat Frank
July 2, 2023 2:29 pm

Argh. My mistake. Yes, I meant you need to take the divisor out, from inside the root.

You are saying

√[(N × σ²) / N] = σ

Which is not adding in quadrature and dividing by N.

Adding in quadrature and dividing by N would be

√(N × σ²) / N = σ / √N
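Both readings of the expression can be checked numerically (my own sketch; σ = 0.195 and N = 4 are arbitrary illustration values):

```python
import math

sigma = 0.195  # per-measurement uncertainty, arbitrary illustration value
N = 4

# Divisor inside the root: sqrt((N * sigma^2) / N) collapses back to sigma.
inside = math.sqrt((N * sigma**2) / N)  # ~0.195

# Divisor outside the root: sqrt(N * sigma^2) / N gives sigma / sqrt(N).
outside = math.sqrt(N * sigma**2) / N   # ~0.0975

print(inside, outside)
```

The only difference between the two forms is where the division by N sits relative to the root, and that one placement changes the result by a factor of √N.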

Reply to  Bellman
July 2, 2023 6:06 pm

No, Bellman.

Using your method of removal, it becomes (√N/√N)×√σ² = ±σ.

Exactly eqns. 5 & 6.

Reply to  Pat Frank
July 2, 2023 6:13 pm

Splat!

Reply to  Jim Gorman
July 2, 2023 6:34 pm

Well said.

Reply to  Bellman
July 2, 2023 8:30 pm

Face plants—you have a special talent here.

Reply to  Pat Frank
July 2, 2023 6:17 pm

Using your method of removal, it becomes (√N/√N)×√σ² = ±σ

At least one of us is very bad at maths.

(√N/√N)×√σ²

That is not the same as

√(N × σ²) / N

You see, in “my” method the divisor N is outside the square root symbol, so doesn’t get square rooted. Whilst the multiplier N is inside the square root symbol so will be √N.

Hence

√(N × σ²) / N = (√N × σ) / N = σ / √N

Reply to  Bellman
July 2, 2023 9:32 pm

“in “my” method the divisor N is outside the square root symbol, so doesn’t get square rooted.”

In that case, the rules of algebra require you to have written it as [√(N × σ²)]/N rather than as √(N×σ²)/N.

Hoisted by your own petard, Bellman.

√(N × σ²)/N is, in fact, [adding] all the uncertainties in quadrature and [dividing] by N, after which the square root is taken. Eqns. 5 & 6.

Reply to  Pat Frank
July 3, 2023 3:55 am

What rules of algebra are you talking about?

In BODMAS, O comes before D.

Even if I should have put the brackets I’d have thought the meaning was clear enough, given I’d told you in so many words that I was taking the divisor out of the root.

Maybe I need to go back to using LaTeX with all the problems that entails.

Reply to  Bellman
July 3, 2023 5:49 am

Your notation was wrong.

Reply to  Pat Frank
July 3, 2023 6:21 am

Could you point me to a reference explaining why it was wrong. I’m not that happy with writing inline equations, so maybe it could be clearer.

But just asserting it was wrong, seems to look like a distraction from the actual point. Which is that your equation is not doing what you claimed – adding in quadrature and dividing by N.

Reply to  Bellman
July 3, 2023 11:23 am

Why is a reference necessary? Your abbreviated root-sign implicitly covered the whole equation.

Nothing indicated division by N was excluded from the root.

In the textual formalism, to indicate division by N only after taking the root, the independent parts should have been separated with parentheses or brackets.

Reply to  Pat Frank
July 3, 2023 1:43 pm

Why is a reference necessary?

So you couldn’t find one either.

Your abbreviated root-sign implicitly covered the whole equation.

I don’t think it does. But if you found it confusing I apologize. I was trying to type out the comment on a tiny phone whilst waiting for a train. At least I didn’t make the same mistake in a peer-reviewed publication.

Still well done in keeping up this distraction. Regardless of whether I made a mistake in the equation, do you accept you were wrong to claim equations 5 and 6 were adding in quadrature and dividing by N?

Reply to  Bellman
July 3, 2023 5:48 pm

you couldn’t find one

The answer is obvious. There was no need to find a reference.

“But if you found it confusing I apologize.”

You wrote the equation incorrectly.

At least I didn’t make the same mistake in a peer-reviewed publication.

Misrepresenting a small triumph. Very shallow, Bellman. Have you ever achieved a peer-reviewed publication?

“do you accept you were wrong to claim equations 5 and 6 were adding in quadrature and dividing by N?”

Why would anyone accept that? Eqns. 5 & 6 do in fact add uncertainties in quadrature and divide by N. All within the root. Visual inspection alone is enough to prove that.

Maybe you should restrict comment on the paper to times when you have it before you for reference.

Reply to  Pat Frank
July 3, 2023 6:40 pm

“you couldn’t find one
The answer is obvious. There was no need to find a reference.”

You missed the word “either” at the end of my quote.

Eqns. 5 & 6 do in fact add uncertainties in quadrature and divide by N.

Then you wrote the equations wrong.

All within the root.

You can’t divide by N “within the root”; the root is part of the adding in quadrature.

Maybe you should restrict comment on the paper to times when you have it before you for reference.

I have it here. I’ll post a screen shot of equation 5.


This is not the same as adding 0.195 in quadrature, and dividing by N. It’s equivalent to adding in quadrature and dividing by √N.

And given how much grief I’ve been getting over the perceived missing bracket in my comment, I feel I should alert the author to the fact that the brackets in his equation are all over the place.

Reply to  Bellman
July 3, 2023 10:46 pm

“This is not the same as adding 0.195 in quadrature, and dividing by N. It’s equivalent to adding in quadrature and dividing by √N.”

Wrong.

What’s the equation for root-mean-square, Bellman?

Reply to  Pat Frank
July 4, 2023 4:30 am

What’s the equation for root-mean-square, Bellman?

The clue’s in the name: it’s the root of the mean of the squares. Square each value, add them all up, and divide by N to get the mean of the squares, then take the square root.

\sqrt{\frac{x_1^2 + x_2^2 + \dots + x_n^2}{N}}

If the mean of all the x’s is zero then this is the same as Standard Deviation.

Adding in quadrature, also known as RSS or Pythagorean addition: square all your values, add them, and take the square root of the sum.

\sqrt{x_1^2 + x_2^2 + \dots + x_n^2}

Adding in quadrature and dividing by N. Does what it says on the tin.

\frac{\sqrt{x_1^2 + x_2^2 + \dots + x_n^2}}{N}

If you think I’m wrong, please explain why, rather than just saying “wrong”.
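For identical inputs the three definitions above separate cleanly (my own numerical sketch; x = 0.195 and N = 9 are arbitrary illustration values):

```python
import math

x = 0.195
N = 9
xs = [x] * N  # N identical uncertainty values

rms = math.sqrt(sum(v**2 for v in xs) / N)     # root of the mean of the squares
quadrature = math.sqrt(sum(v**2 for v in xs))  # RSS / Pythagorean addition
quadrature_over_N = quadrature / N             # RSS, then divide by N

print(rms)                # ~0.195  (= x)
print(quadrature)         # ~0.585  (= x * sqrt(N))
print(quadrature_over_N)  # ~0.065  (= x / sqrt(N))
```

For constant inputs the RMS returns the input itself, while quadrature-then-divide-by-N returns x/√N; that factor of √N (here, 3) is the arithmetic core of the disagreement above.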

Reply to  Bellman
July 4, 2023 6:08 am

bellcurveman doubles down, again!

Reply to  Bellman
July 4, 2023 8:43 am

Your first equation — for RMS — correctly defines eqns. 5 & 6. The very equations you claimed are wrong.

Notice that when your xᵢ are constant, the sum within the root is merely N×(x²). And that’s Q.E.D.

Reply to  Pat Frank
July 4, 2023 9:44 am

Your first equation — for RMS — correctly defines eqns. 5 & 6. The very equations you claimed are wrong.

I did not say they were wrong. They are the correct equations for RMS. What I said, and it’s there in the comment you linked to, was that they were not “adding in quadrature and dividing by N”, which is what you claimed they were.

Notice that when your xᵢ are constant, the sum within the root is merely N×(x²). And that’s Q.E.D.

Yes, that’s why I was saying using the equation to calculate the average uncertainty was redundant, given that you are already assuming your uncertainty is the average uncertainty.

Reply to  Bellman
July 4, 2023 12:51 pm

they were not “adding in quadrature and dividing by N”

That’s prima facie exactly what they are.

… you are already assuming…

That’s a demonstration, not an assumption.

Always assertive, always wrong, Bellman.

Reply to  Pat Frank
July 4, 2023 1:04 pm

That’s prima facie exactly what they are.

And what am I doing if you take a second look?

That’s a demonstration, not an assumption.

The point stands. You know what the average uncertainty is, but then want to average it again to find out what the average uncertainty is.

Reply to  Pat Frank
July 4, 2023 5:32 pm

I’m surprised he hasn’t claimed you are using “voodoo math”!

Reply to  Tim Gorman
July 4, 2023 6:32 pm

What? I’m just pointing out where he seems to have misunderstood what I was saying, and still seems to have a problem with where the N is in his equation.

Reply to  Bellman
July 4, 2023 3:41 am

Oh my gosh — bellcurveman grabs the ball, shoots deep and throws … a brick. And then face-plants as the ball is brought back the other direction.

Obsessing over a typo — great job by the trendologist! The GAT hoax is saved! Yay!

/me hands out party kazoos

The real question you should be asking is: why would an average uncertainty be needed? (It’s a really simple answer.)

And you have to extend a huge e-hug to bg-whatever, the NIST uncertainty machine can’t tell him an average U, poor guy.

/me hands a hankie

Reply to  karlomonte
July 4, 2023 4:32 am

Not obsessed, just pointing out the irony.

Reply to  Bellman
July 4, 2023 6:22 am

The real danger is that someone unfamiliar with metrology and uncertainty might read these threads and think you know WTF you yap about.

You don’t.

Reply to  Bellman
July 4, 2023 4:31 am

Wow! You simply can’t figure out finding the mean uncertainty, can you?

You are blinded by your cult dogma!

Reply to  Tim Gorman
July 4, 2023 5:48 am

Finding the mean uncertainty is easy when you are saying all your uncertainties are the same. Just multiply by N and divide by N.

You are blinded by your cult dogma!

Scientologists are always telling me that.

Reply to  Bellman
July 4, 2023 8:45 am

Just multiply by N and divide by N.

Right. And show your work for publication.

Reply to  Pat Frank
July 4, 2023 4:30 am

He can’t help himself. He is caught up in a cult and only knows the cult dogma.

Reply to  Tim Gorman
July 4, 2023 6:23 am

His agenda trumps reality, just like Stokes.

Reply to  Tim Gorman
July 1, 2023 3:01 pm

Another great line:

“I’ve no idea why you think I have to believe all uncertainties are “tiny””

Is he really this dense or is it an act? These guys hang on every little squiggle in the UAH graph (and whine (a lot) when CMoB calculates a pause). And bellman is the dude who went apoplectic when I replotted the graph with some estimated uncertainty limits, which looked very much like Pat’s graph above.

Reply to  karlomonte
July 1, 2023 3:21 pm

These guys hang on every little squiggle in the UAH graph

Only because it’s the only one allowed to be mentioned here. It’s the one accurate graph whilst all others have gigantic uncertainties.

In case it hasn’t registered with you yet, the main reason I look at every twist and turn is to mock Monckton. Seeing how much this pause has extended or contracted by, when we know that the uncertainty around the trend line is humongous. Laughing at the people here who think Monckton is a genius for being able to put some numbers in a spreadsheet and find a mystical start point for a meaningless trend. Made all the sweeter by seeing the same people insist that the average global temperature doesn’t even exist.

And bellman is the dude who when apoplectic when I replotted the graph with some estimated uncertainty limits

I think you have a different definition of apoplectic. I merely suggested that if you thought there was so much uncertainty you should point it out to Dr Spencer, and Monckton.

which looked very much like Pat’s graph above.

And you still don’t see the irony of this. Hint:

The pathetic Bellman, who lacks the courage to publish in its own name, is perhaps unfamiliar with the difference between the surface and space, where the UAH satellite measurements are taken. The microwave sounding units on the satellites do not rely on glass-bulb mercury thermometers but on platinum-resistance thermometers. We use the satellite record, not the defective terrestrial record, for the Pause analyses.

Reply to  Bellman
July 1, 2023 9:44 pm

is to mock Monckton. 

When are you going to start? All you do is face-plant when you whine about his articles.

Reply to  Bellman
July 2, 2023 5:27 am

“In case it hasn’t registered with you yet, the main reason I look at every twist and turn is to mock Monckton.”

So, in other words, you really don’t care what Monckton is doing, you just want to mock him. I think most of us have already figured that out.

Reply to  Tim Gorman
July 2, 2023 6:25 am

Oh yeah.

Reply to  Tim Gorman
July 2, 2023 8:27 am

” you really don’t care what Monckton is doing”

Not really, no. I doubt many take him seriously now. There was a time in the past when he was being offered up as some sort of expert, appearing as a witness in hearings and such like, but now his claims are just an amusing irrelevance. The only people who take him seriously are the true believers.

“you just want to mock him.”

I just want to mock his meaningless arguments. I try not to mock him as a person. I’m sure he’s a very nice man if you get to know him. But it’s a bit difficult not to take all his insults and libels directed at me a little personally.

Reply to  Bellman
July 2, 2023 9:31 am

Poor baby bellboy, doesn’t get the respect he thinks he deserves.

Reply to  Bellman
July 4, 2023 12:50 am

Bellman,
If, like I have, you had the pleasure of time face-to-face with Viscount Monckton, you would quickly realise that you are dealing with an intellect that is unusually impressive. As a trivial example, when we were discussing some Chemistry, he recited the Elements song by Tom Lehrer.
https://www.youtube.com/watch?v=AcS3NOQnsQM

Not many people can do that, off the cuff. I can only recite the periodic table up to the rare earths after a lifetime of exposure to the elements. That’s 56 out of 114, so only the first half.
People with excellent recall are commonly found among the top scientists. I have just finished the book “Surely You’re Joking, Mr. Feynman!”. His ability in math and memory started in childhood. He was a Prof by age 23. I was left with the feeling that his eminence in math was because he remembered more numbers than most people did, so when presented with a challenge he had a huge mental database of recall on which to build. Few of us carry the cube root of 48 or whatever in our minds.
In conversation, Viscount Monckton can go far deeper and wider than his posts on the latest T pause. You need to find a better example if derision is your aim, though I fail to grasp why you are doing the tall poppy thingo.
Geoff S

Reply to  Geoff Sherrington
July 4, 2023 4:49 am

Impressive writing Geoff S!

Reply to  Bellman
July 1, 2023 10:41 am

The reason is obvious to anyone who has read the paper and understands the argument, Bellman. The LiG resolution uncertainty is constant for every T_mean.

You haven’t read the paper.

Like bdgwx, you focus in on something unfamiliar, impose your own mistaken meaning on it, and then criticize from your lack of understanding.

Reply to  Pat Frank
July 1, 2023 12:08 pm

In all my years working in the software industry, RTFM was my least favourite excuse. It’s usually just blaming the customer for your own bad design.

When I’m reading a paper and there’s something I don’t understand or that just seems wrong, and the author is around and taking questions, it seems sensible to ask them for clarification. At worst the author can explain my mistake and point to the part of the paper that resolves the issue. I’ll feel embarrassed but will thank them for their help.

If the author keeps deflecting, insists I have to read the entire paper for enlightenment, and says I shouldn’t be allowed to comment if I don’t understand, then I begin to suspect that the real explanation is that they don’t know the answer.

Reply to  Bellman
July 1, 2023 2:10 pm

‘Read the paper’ is not an excuse when it’s clear you’ve not read the paper. Start at the beginning.

I’ve never deflected. I’m just annoyed at having to repeat the same explanation yet again. The lower limit of LiG resolution is a constant uncertainty conditioning every measurement.

Don’t ask again.

Reply to  Pat Frank
July 1, 2023 3:01 pm

The lower limit of LiG resolution is a constant uncertainty conditioning every measurement.

But uncertainty is not error. Having a constant uncertainty just means the interval remains the same, not that you will get the same error each measurement.

Don’t ask again.

Fine I won’t. I’ll just draw my own conclusions.

Reply to  Bellman
July 1, 2023 10:55 pm

not that you will get the same error each measurement.

You get the same lower detection limit of instrumental uncertainty with each measurement. Uncertainty is not error.

Reply to  Pat Frank
July 2, 2023 4:03 am

You are trying to convert a committed cultist. He will never understand uncertainty because he doesn’t want to understand uncertainty.

I wish you good luck in enlightening him but am pessimistic about the chances of it happening.

Reply to  Tim Gorman
July 2, 2023 10:53 am

Yeah, you’re right, Tim.

Bellman and bdgwx are focused on the same sort of equations here as they focused on from my 2010 paper.

They do so while completely ignoring the analytical context. The insistent ignorance seems to indicate some sort of numerological obsession.

bdgwx
Reply to  Pat Frank
July 2, 2023 11:47 am

PF: Bellman and bdgwx are focused on the same sort of equations here as they focused on from my 2010 paper.

They are similar. In that case you told me Bevington 4.22 was the justification you used for the equations in 2010. And as Bellman and I pointed out 4.22 is nothing more than an intermediate step whose result is used in 4.23 to calculate the variance of the mean which can be square rooted for the uncertainty of the mean as described in example 4.2.

Reply to  bdgwx
July 2, 2023 12:45 pm

Where is the NIST uncertainty machine in this post?

Reply to  bdgwx
July 2, 2023 1:25 pm

Bevington 4.22 is nothing more than the correct equation for the weighted average variance.

B 4.23 is its transformation for random error. Systematic error is not random. Use of B. 4.23 is wrong when the error is not random.

You’re wrong, bdgwx.

You and Bellman are wrong at every turn. You find different ways to be invariably wrong.

Insistent ignorance is pathological.

Reply to  Pat Frank
July 2, 2023 5:00 pm

It’s because of their cultist belief that all uncertainty is random, Gaussian, and cancels. There *is* no such thing as systematic bias in their world view. There’s no room for it!

Reply to  Tim Gorman
July 2, 2023 5:19 pm

Just keep lying. You know full well that I’ve discussed systematic errors plenty of times. I certainly think there are both systematic and correlated uncertainties in the instrumental data.

But that doesn’t justify assuming that there are zero random errors.

Reply to  Bellman
July 3, 2023 3:30 am

You may talk about systematic bias but you never actually post anything that shows you understand the consequences of them. You *always*, every single time, fall back to the meme that all uncertainty is random, Gaussian, and cancels. If that wasn’t an inbuilt worldview that you can’t break out of, it might dawn on you that you can’t increase resolution or decrease uncertainty by averaging!

YOU are the only one that assumes there are zero random errors, that they all cancel to zero in every case. You do it EVERY SINGLE TIME you make a post!

Reply to  Tim Gorman
July 3, 2023 5:22 am

“You may talk about systematic bias but you never actually post anything that shows you understand the consequences of them.”

We probably both have failing memories, and you often seem to blank out anything I say that upsets your assumptions.

Let me try again.

I say that if a measurement uncertainty (U) is caused entirely by random errors, the measurement uncertainty of an average will be U / √N.

I say that if the measurement uncertainty is caused entirely by systematic errors the measurement uncertainty of an average will be U.

More correctly, if the correlation between measurement errors is 0, the uncertainty of the average will be U / √N, and if it’s 1, the uncertainty will be U.

And in between you can use the formula in the GUM.

I also say this is mostly irrelevant to the actual uncertainty of an average, that is, if you are interested not in the actual average of a sample but in what it says about the population average.

I also say that there’s a logical fallacy in applying the concept of a systematic error to an anomaly. Any truly systematic errors will cancel when you subtract one value from another.

I also say there’s a problem using the concept of systematic errors to justify large uncertainties in the global average over a period of time, and then claiming that this uncertainty will affect the uncertainty of the trend.

I also say, that all of this is a distraction from the real problem, which would be any systematic error that changes over time, which will affect the trend.
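The two limiting cases stated above can be checked with a short Monte Carlo sketch (my own illustration, not from any of the sources cited in this thread; all parameter values are invented for the demonstration):

```python
import random
import statistics

random.seed(42)

def spread_of_mean_error(n, u, correlated, trials=20000):
    """Std dev of the error in the mean of n measurements,
    each with measurement uncertainty u (Gaussian errors assumed)."""
    mean_errors = []
    for _ in range(trials):
        if correlated:
            # correlation 1: one shared error hits every measurement
            shared = random.gauss(0, u)
            errors = [shared] * n
        else:
            # correlation 0: each measurement gets an independent error
            errors = [random.gauss(0, u) for _ in range(n)]
        mean_errors.append(sum(errors) / n)
    return statistics.stdev(mean_errors)

u, n = 1.0, 25
s_indep = spread_of_mean_error(n, u, correlated=False)  # ~ u / sqrt(n) = 0.2
s_corr = spread_of_mean_error(n, u, correlated=True)    # ~ u = 1.0
print(s_indep, s_corr)
```

With fully correlated errors, averaging buys nothing; with independent errors it buys the familiar 1/√N factor; intermediate correlations fall between the two, per the covariance term in the GUM.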

Reply to  Bellman
July 3, 2023 5:27 am

YOU are the only one that assumes there are zero random errors, that they all cancel to zero in every case. You do it EVERY SINGLE TIME you make a post!

I’m sure this is what you want to believe I think. But could you for once point to any comment I have ever made where I have said anything cancels to zero?

How can you look at a statement saying the uncertainty is U / √N and think that means it will always be zero?

Reply to  Bellman
July 3, 2023 6:51 am

“How can you look at a statement saying the uncertainty is U / √N and think that means it will always be zero?”

U / √N is the SEM. Theoretically the SEM *can* approach zero. But as Bevington points out, you cannot actually approach this because of statistical fluctuations that can’t be identified in most measurement protocols.

Again, for the umpteenth time, the SEM is *NOT* the accuracy of the population mean. It is the accuracy with which you have calculated the population mean.

They are *NOT* the same thing.

Reply to  Tim Gorman
July 3, 2023 3:50 pm

U / √N is the SEM.

Only for the measurement uncertainty.

Theoretically the SEM *can*approach zero.

As I say, very much only theoretically. That would assume no systematic errors, and it would still only approach zero. It’s never actually going to be zero unless you take an infinite number of measurements. For any finite number of measurements the SEM is strictly greater than zero, unless U is 0. So I’ll ask again, why do you think I believe that all random errors cancel to zero?

because of statistical fluctuations

He says nonstatistical fluctuations.

Again, for the umpteenth time, the SEM is *NOT* the accuracy of the population mean.

And for the same number of times, I agree.

It is the accuracy with which you have calculated the population mean.

And again, you do not calculate the population mean. You estimate it from the sample mean.

Reply to  Bellman
July 3, 2023 5:06 pm

Yes, nonstatistical fluctuations. Things statistics can’t identify and can’t account for. So no, an infinite number of observations won’t get you to zero!

The SEM * sqrt(N) GIVES YOU THE POPULATION MEAN. You can’t even get this one right!

Reply to  Tim Gorman
July 3, 2023 7:15 pm

“The SEM * sqrt(N) GIVES YOU THE POPULATION MEAN. You can’t even get this one right!”

If that’s right I’d prefer to be wrong.

Reply to  Bellman
July 3, 2023 6:21 am

“I say that if a measurement uncertainty (U) is caused entirely by random errors, the measurement uncertainty of an average will be U / √N.”

This is a perfect example of you ALWAYS ignoring systematic bias in measurements. You don’t even realize that you do it!

“I say that if the measurement uncertainty is caused entirely by systematic errors the measurement uncertainty of an average will be U.”

Therein lies your total and complete lack of understanding of the physical world and of uncertainty! No measurement can consist of only systematic bias or of random error. NONE.

“More correctly, if the correlation between measurement errors is 0, the uncertainty of the average will be U / √N, and if it’s 1, the uncertainty will be U.”

You still can’t understand the difference between the average uncertainty and uncertainty of the average. U / √N is the SEM! It is how accurately you have calculated the population average. It is *NOT* the uncertainty of the population average!

Why is that so hard to understand?

I know you won’t do it but you *really* should go here: https://statisticsbyjim.com/hypothesis-testing/standard-error-mean/

Read it over and over till you understand it. “For the standard error of the mean, the value indicates how far sample means are likely to fall from the population mean using the original measurement units. Again, larger values correspond to wider distributions.”

“And in between you can use the formula in the GUM.”

The formula in the GUM is for measurements with only RANDOM ERROR! You’ve been given the quotes from Taylor and Bevington about this often enough that it should have sunk in.

“I also say that there’s a logical fallacy in applying the concept of a systematic error to an anomaly. Any truly systematic errors will cancel when you subtract one value from another.”

That is total and utter crap. If q = u – v then the uncertainty of q is the sum of the uncertainties of u and v. It’s not the difference of the uncertainties in u and v but the SUM!

Taylor addresses this in Section 2.5 and Rule 2.18! Once again, it’s obvious that you have NEVER studied Taylor at all or worked out any examples in his book. If you had you would find that your answers would never match the answers he gives!

Bevington never addresses this because his entire book is based on all uncertainty being from random errors with no systematic bias. The GUM is exactly the same!

“I also say there’s a problem using the concept of systematic errors to justify large uncertainties in the global average over a period of time, and then claiming that this uncertainty will affect the uncertainty of the trend.”

That’s because you ALWAYS use only the stated values of the measurements to calculate your trend line. It’s your meme of “all uncertainty is random, Gaussian, and cancels”. I’ve given you graphs at least twice showing that trend lines inside the uncertainty intervals can be up, down, sideways, or even a zig-zag. YOU DON’T KNOW WHICH AND YOU CAN NEVER KNOW WHICH.

But you stubbornly cling to the belief that all stated values are 100% accurate so the trend lines must be what they show.

“I also say, that all of this is a distraction from the real problem, which would be any systematic error that changes over time, which will affect the trend.”

You can’t even discern the difference in systematic bias that comes from calibration drift and that which comes from instrumental resolution. One is fixed, the other is not. Instrumental resolution doesn’t change over time, calibration drift does. Pat has looked only at the fixed uncertainty from instrumental resolution – *that* part of uncertainty that *can* be known.
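For what it’s worth, the disputed anomaly point lends itself to a direct simulation (a sketch with invented numbers, not either party’s data): a truly constant offset shared by both readings drops out of their difference, while the independent random parts combine in quadrature; a straight sum of the two uncertainties would be the worst-case bound.

```python
import math
import random
import statistics

random.seed(1)

u_true, v_true = 20.0, 15.0
sigma = 0.5    # std dev of the independent random error on each reading
offset = 2.0   # constant systematic offset shared by both readings

diffs = []
for _ in range(20000):
    u_meas = u_true + offset + random.gauss(0, sigma)
    v_meas = v_true + offset + random.gauss(0, sigma)
    diffs.append(u_meas - v_meas)

bias = statistics.mean(diffs) - (u_true - v_true)  # shared offset cancels: ~0
spread = statistics.stdev(diffs)                   # ~ sqrt(2) * sigma, about 0.707
print(bias, spread)
```

A systematic error that is *not* the same in both readings, for example one that drifts between the two epochs, does not cancel, which is the case both sides appear to agree actually matters for trends.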

Reply to  Tim Gorman
July 3, 2023 6:37 am

“I say that if a measurement uncertainty (U) is caused entirely by random errors, the measurement uncertainty of an average will be U / √N.”

This is a perfect example of you ALWAYS ignoring systematic bias in measurements.

Read the rest of the comment.

Reply to  Tim Gorman
July 3, 2023 6:43 am

Therein lies your total and complete lack of understanding of the physical world and of uncertainty! No measurement can consist of only systematic bias or of random error. NONE.

What do you think the word “if” means?

I’m presenting the two extremes. In the real world the truth lies somewhere in between.

Really, stop being so hysterical. Read what I say, try to understand it, and if you still don’t, ask me questions.

I don’t have the time or desire to plough through the rest of your insults and misunderstanding at this point.

Reply to  Bellman
July 3, 2023 7:24 am

“What do you think the word “if” means?”

It means you’ve wandered off, once again, into an alternate universe where all uncertainty is random, Gaussian, and cancels.

“I’m presenting the two extremes. In the real world the truth lies somewhere in between.”

No, you are creating an alternate universe. The truth lies nowhere in that alternate universe.

“Read what I say, try to understand it and if you still don’t ask me questions.”

What you say comes from that alternate universe. I don’t need to understand it, most of us don’t live there.

Reply to  Tim Gorman
July 3, 2023 3:53 pm

You still can’t understand the difference between the average uncertainty and uncertainty of the average.

Sure I can. Uncertainty of the average is what you want, average uncertainty is what Pat Frank gives you.

One is a lot bigger than the other. Guess which one we are talking about in this paper.

Reply to  Bellman
July 3, 2023 5:10 pm

That mean instrumental uncertainty goes into each and every observational value. When the population uncertainty is calculated that mean instrumental uncertainty gets added into the total uncertainty for each and every observation. The average simply can’t have any less uncertainty than the mean instrumental uncertainty but it can definitely have *more* uncertainty than the mean instrumental uncertainty!

Reply to  Tim Gorman
July 3, 2023 4:04 pm

I know you won’t do it but you *really* should go here:

What makes you think I haven’t already seen it? What part of it do you think disagrees with anything I’ve said? This is all basic statistics.

For the standard error of the mean, the value indicates how far sample means are likely to fall from the population mean using the original measurement units. Again, larger values correspond to wider distributions.

Exactly what I’m trying to tell you. Honestly, I have no idea what point you think you are making or arguing against. I’m not even sure you do.

The formula in the GUM is for measurements with only RANDOM ERROR!

Keep up. I’m talking about correlation at that point. You can look at a systematic error as a type of correlation, but the GUM points out it’s better to try to eliminate them and then include an uncertainty factor for the uncertainty caused by the correction. It also points out that the terms systematic and random uncertainty are not always a helpful distinction.

Reply to  Bellman
July 3, 2023 5:12 pm

You just don’t get it. The phrase “sample means” is plural. You have to have more than one sample in order to have a distribution and a standard deviation of the SAMPLE MEANS.

When you say that all you need is one sample you are violating what the standard deviation of the sample means requires!

Reply to  Tim Gorman
July 3, 2023 6:13 pm

The words “sample means” is plural.

Whereas a sample mean is singular. Are we down to remedial English now?

You have to have more than one sample in order to have a distribution and a standard deviation of the SAMPLE MEANS

Firstly, almost nobody apart from you and Jim calls it the SAMPLE MEANS. Secondly, no you don’t. I’ve quoted the very things you insist I read spelling out you only need one sample to estimate the standard error of the mean.

When you say that all you need is one sample you are violating what the standard deviation of the sample means requires!

What you don’t understand is that I’m a super hero, who has god-like powers that you mere mortals lack. This includes the ability to calculate a sampling distribution from a single sample, a gift known only to me, and anyone who’s read a basic statistics text book.
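The contested claim, that one sample suffices to estimate the standard error of the mean, can be tested numerically against a synthetic population (a sketch; the population and all numbers are invented for illustration):

```python
import math
import random
import statistics

random.seed(7)

# Synthetic population: mean 50, standard deviation 10
population = [random.gauss(50, 10) for _ in range(100000)]
n = 40

# Route 1: the textbook estimate from ONE sample: s / sqrt(n)
one_sample = random.sample(population, n)
sem_single = statistics.stdev(one_sample) / math.sqrt(n)

# Route 2: brute force: draw many samples, take the std dev of their means
means = [statistics.mean(random.sample(population, n)) for _ in range(5000)]
sem_repeated = statistics.stdev(means)

# Both target 10 / sqrt(40), about 1.58; the one-sample estimate is noisier
print(sem_single, sem_repeated)
```

How good the one-sample route is depends on n and on the errors being well behaved; it estimates the width of the sampling distribution mathematically rather than observing it from repeated samples.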

Reply to  Bellman
July 4, 2023 6:06 am

“Whereas a sample mean is singular. Are we down to remedial English now?”

“Sample means” is not equivalent to “sample mean” no matter how much you wish it to be.

“Firstly, almost nobody apart from you and Jim calls it the SAMPLE MEANS.”

You have been given at least two references with internet links saying the standard deviation of the sample MEANS.

Here is another: https://www.khanacademy.org/math/ap-statistics/sampling-distribution-ap/sampling-distribution-mean/v/standard-error-of-the-mean

“Take a sample from a population, calculate the mean of that sample, put everything back, and do it over and over. How much do those sample means tend to vary from the “average” sample mean? This is what the standard error of the mean measures. Its longer name is the standard deviation of the sampling distribution of the sample mean.” (bolding mine, tpg)

here is another: https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/06%3A_Sampling_Distributions/6.02%3A_The_Sampling_Distribution_of_the_Sample_Mean

“What we are seeing in these examples does not depend on the particular population distributions involved. In general, one may start with any distribution and the sampling distribution of the sample mean will increasingly resemble the bell-shaped normal curve as the sample size increases. This is the content of the Central Limit Theorem.” (bolding mine, tpg)

In order for the CLT to apply you HAVE TO HAVE a sampling distribution. One sample alone is *NOT* a distribution no matter what *you* think. If you have only one sample you simply don’t know if it is representative of the population mean or not, no matter how large your sample size is. As Bevington points out as your sample size grows so does its uncertainty because of nonstatistical fluctuations.

Now, tell us all again how no one else talks about the use of SAMPLE MEANS.

“I’ve quoted the very things you insist I read spelling out you only need one sample to estimate the standard error of the mean.”

You never provide a reference link for *anything*. And one sample can *ONLY* estimate the population mean, it cannot estimate the standard error of the mean because you don’t have multiple means that can form a distribution! If you don’t know the population mean then you can’t calculate the SEM.

What you are actually saying, even if you don’t realize it, is that *YOU* can *GUESS* at what the population mean is from *YOUR* one sample. Without a distribution of sample means you actually have no way to calculate the standard deviation of the sample means so you have no way to evaluate how close you actually are to the population mean. The CLT *requires* you to have multiple samples in order to approach a Gaussian distribution of sample means. The mean of one sample does *NOT* make a Gaussian distribution.

Nor can you “bootstrap” into the SEM by “assuming” you KNOW the population mean from one sample and can use it to calculate the SEM. That’s a form of circular logic fallacy.

“What you don’t understand is that I’m a super hero, who has god-like powers that you mere mortals lack. This includes the ability to calculate a sampling distribution from a single sample, a gift known only to me, and anyone who’s read a basic statistics text book.”

No, it’s known only to *YOU*, not anyone else. Most people recognize circular logic when they see it – but not you.

Reply to  Tim Gorman
July 4, 2023 6:48 am

[I’m not sure which is worse at the moment. These 1000 line essays from Tim to every comment I make, or the 1000 1 line cries for attention karlo makes every day. At least it’s easy to just bin all of karlo’s]

Reply to  Bellman
July 4, 2023 7:13 am

Translation: “I don’t understand, but I’ll continue making a fool of myself regardless.”

Reply to  Tim Gorman
July 4, 2023 7:07 am

All this is about the fact that the Gormans insist that the correct term is “standard deviation of the sample means”, which they think proves you can only figure it out if you have multiple samples. As an aside I pointed out that almost nobody calls it the standard deviation of the sample means (plural), it’s either the standard error of the mean, or in some circles the standard deviation of the mean (in either case singular). It’s not even that relevant, but as usual this is blown up into some major issue with lots of insults about my inability to read English.

So Tim now gives me a couple of references to prove that it’s really called the standard error of the means. Let’s see how good his English really is.

Reference 1

https://www.khanacademy.org/math/ap-statistics/sampling-distribution-ap/sampling-distribution-mean/v/standard-error-of-the-mean

is headed Standard Error of the Mean, and then calls it the standard deviation of the sampling distribution of the sample mean – singular

Take a sample from a population, calculate the mean of that sample, put everything back, and do it over and over. How much do those sample means tend to vary from the “average” sample mean? This is what the standard error of the mean measures. Its longer name is the standard deviation of the sampling distribution of the sample mean.

The lesson is called “Lesson 6: Sampling distributions for sample means”

Not sampling distribution of the means. Not “of” but “for”. Not “deviation” but “distributions”. As usual Tim wants to pick up on anything plural to make his point, but ignores the obvious point that if you are talking about distributions in general you use the plural, and the same with means.

The part he wants to highlight is the part that says “How much do those sample means tend to vary from the “average” sample mean?”. Once again he’s completely unable to distinguish between an explanation of what a sampling distribution is, in a hypothetical sense, and how you actually use it in practice.

Reference 2
https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/06%3A_Sampling_Distributions/6.02%3A_The_Sampling_Distribution_of_the_Sample_Mean
is headed “The Sampling Distribution of the Sample Mean”. Again in the singular.
Here’s the part that Tim draws attention to

“What we are seeing in these examples does not depend on the particular population distributions involved. In general, one may start with any distribution and the sampling distribution of the sample mean will increasingly resemble the bell-shaped normal curve as the sample size increases. This is the content of the Central Limit Theorem.” (bolding mine, tpg)

The bold says sampling distribution, it’s followed by the word “mean” singular.

Reply to  Bellman
July 4, 2023 7:14 am

Oh my, hypocrisy now: “1000 line essays from Tim”

Reply to  Bellman
July 4, 2023 4:36 pm

Once again, you can’t even read!

“Take a sample from a population, calculate the mean of that sample, put everything back, and do it over and over.”

What in Pete’s name do you think this is describing if not multiple samples?

“Once again he’s completely unable to distinguish between an explanation of what a sampling distribution is, in a hypothetical sense, and how you actually use it in practice.”

A distribution has multiple values, not just one! A sampling distribution is developed from multiple samples.

“sampling distribution”

I don’t know what you think you are asserting! *YOU* said you only need one sample to have a sampling distribution. Now you are changing that to saying you need only one SAMPLING DISTRIBUTION! The issue is *not* how many sampling distributions you need but how many samples you need to form a distribution.

Once again you have face planted. You think you can fool everyone with your argumentative fallacy of Equivocation? I.e. changing the definition of what is being discussed?

You are as big of a fool as everyone believes!

Reply to  Tim Gorman
July 4, 2023 5:01 pm

Once again, you can’t even read!

Once again, why do you think I have any interest in reading another hate filled missive from you, when your first words are to insult me once again.

I’ve pointed to the sections in your own sources that say in so many words you do not need multiple samples to estimate the sampling distribution. That the maths allows you to do that from a single sample, or the population. Believe it or not, that’s up to you – but it’s pathetic to just keep ignoring it and then pointing to every mention of a plural as proof that you can’t do what you have just been shown.

Reply to  Bellman
July 5, 2023 3:09 am

You’ve pointed to *NOTHING* that says you don’t need multiple samples to create a sampling DISTRIBUTION. Every reference you can find on the internet plus Taylor, Bevington, and Possolo says you do. Even the CLT requires you to have multiple samples in order to form a Gaussian distribution around the estimated population mean.

“That the maths allows you to do that from a single sample, or the population.”

You simply cannot do that. Defining a Gaussian distribution requires you to have both an average and a standard deviation. The only way you can find the SEM is by dividing the population standard deviation by the square root of the sample size. And if you already know the population standard deviation then you also already know the population average because they go together. If you already know both the population average and standard deviation then the SEM is meaningless!

Reply to  Tim Gorman
July 5, 2023 5:55 am

You’ve pointed to *NOTHING* that says you don’t need multiple samples to create a sampling DISTRIBUTION.

I’ve no interest in prolonging this discussion at this time, and I worry it’s not good for our mental health.

But for the record here’s a quote again:

Fortunately, you don’t need to repeat your study an insane number of times to obtain the standard error of the mean. Statisticians know how to estimate the properties of sampling distributions mathematically, as you’ll see later in this post. Consequently, you can assess the precision of your sample estimates without performing the repeated sampling.

https://statisticsbyjim.com/hypothesis-testing/standard-error-mean/

I suspect Tim is now fixating on the idea of creating a sampling distribution rather than knowing what it is. As always he seems to be incapable of understanding that a distribution is an abstract concept.

Reply to  Bellman
July 5, 2023 7:03 am

“sampling distributions”

Only in bellman’s alternate universe is “distributions” singular.

Reply to  Tim Gorman
July 5, 2023 8:24 am

This is just sad, and I worry about you.

Statisticians know how to estimate distributions, meaning they know how to estimate the properties of more than one distribution. That does not mean there are multiple distributions for any one sample.

You look at the heights of trees in a forest and estimate the sampling distribution for that. Then you look at the IQ of participants in a discussion forum, and estimate that distribution. Hey, now you know how to estimate two different distributions, plural.

Reply to  Bellman
July 5, 2023 12:27 pm

Those are things you can see. How about the widths of hair strands from a discussion forum or hundredths of a degree change. Those are things you need measurements for. Can they estimate the uncertainty by just examining the measurements?

Reply to  Jim Gorman
July 5, 2023 6:50 pm

Talk about missing the point.

Reply to  Bellman
July 6, 2023 4:08 am

If you only take samples of the trees then you need MULTIPLE samples in order to find the average height, accompanied by a measure of how accurately that height represents the population of trees, i.e. the standard deviation of the sample means, incorrectly known as the Standard Error.

Estimating distributions is GUESSING! That’s not physical science – it’s voodoo statistics!

You can’t look at one sample of the IQ of the participants and KNOW the distribution with any certainty at all. You simply don’t know if your single sample actually represents the population.

What if your life depended on knowing the average height of the trees in the forest? Would you settle for ONE sample and guess at the average height?

Here’s another one for you. I am a Type 2 diabetic. My endocrinologist recently changed my medication and we both worried about what that would do to my glucose levels at night when my metabolism bottoms out. I risk going into diabetic shock and dying at 55 or lower.

So at 3:30AM I get up and measure my glucose levels with my Freestyle Libre 3 sensor and with my finger-stick meter. Libre says my glucose is 65 +/- 10. The finger-stick meter says 80 +/- 10.

Now what do I tell the endocrinologist my blood sugar is?

73 +/- 5
73 +/- 10
73 +/- 15
73 +/- 20
Or something else entirely?

Remember the 55 danger level for diabetic shock and death.
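Not medical advice, but for the arithmetic: the standard textbook combination of two independent, unbiased readings is the inverse-variance weighted mean (a sketch; whether those assumptions hold for two different instruments on two different measurement sites is exactly what is in dispute here):

```python
import math

def combine(readings):
    """Inverse-variance weighted mean of (value, uncertainty) pairs.
    Assumes independent, unbiased Gaussian errors."""
    weights = [1.0 / u ** 2 for _, u in readings]
    value = sum(w * x for (x, _), w in zip(readings, weights)) / sum(weights)
    return value, math.sqrt(1.0 / sum(weights))

# The two readings from the comment above: 65 +/- 10 and 80 +/- 10
value, u_comb = combine([(65.0, 10.0), (80.0, 10.0)])
print(value, u_comb)  # 72.5 and about 7.07 (= 10 / sqrt(2))
```

Note the readings disagree by 15 while the stated uncertainties predict a spread of about sqrt(10² + 10²) ≈ 14; disagreement at or beyond that level is a hint that at least one instrument is biased, in which case no simple combination of the two is trustworthy.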

Reply to  Tim Gorman
July 7, 2023 3:12 am

I see Bellman has decided to not answer my post. Could it be that he knows his assertions concerning uncertainty are not only wrong but dangerous when applied to reality?

Reply to  Tim Gorman
July 7, 2023 4:00 am

Good grief man. You post countless aggressive posts every hour, each more demented than the last. I reply to far more than is healthy but each time it just results in more deranged insults. And then if I don’t answer one of them you claim it as a victory.

I’ll look at your post when I have time, but the short answer is you’re wrong. It’s the safest assumption.

Reply to  Bellman
July 7, 2023 4:56 am

I anxiously await your advice on what I should tell my endocrinologist. But I’m not holding my breath.

Reply to  Tim Gorman
July 7, 2023 5:47 am

Remember this is the guy who knows all my inner thoughts!

Reply to  Tim Gorman
July 7, 2023 6:39 am

If you are waiting on me for medical advice, you are as good as dead. I’m certainly not going to ask you to deal with any of my pressing medical conditions.

Reply to  Tim Gorman
July 7, 2023 6:37 am

OK, let’s start.

Part 1:

Forests.

Wrong on most points. Pointless going through all this again. I’ve explained why you don’t need more than one sample. I’ve explained why taking multiple samples would be pointless and a waste of time and money, and in some cases unethical.

I’ve shown how your own sources point this out. Even the infallible GUM tells you to take a sample of measurements and divide the SD by √N to get the “experimental standard deviation of the mean”.

You would be right to call any of these figures a guess, but it’s a highly educated guess, based on solid mathematics and centuries of practice. But it’s also a guess if you use your sample of samples method. You are literally relying on the same concept of looking at the standard deviation of a sample to estimate the standard deviation of the sampling distribution. You could always try taking a sample of samples of samples.

If my life for some reason depended on knowing the average height of the trees in a forest I would far sooner it was based on one large sample than one smaller sample. If you can repeat a sample of size 30, 30 times just to estimate the uncertainty of a sample of size 30, then why not just take a sample of size 900, which will be more certain?
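The trade-off in that last paragraph can be simulated (a sketch against a synthetic population; all numbers are invented): 30 samples of 30 let you *observe* the spread of sample means, while one sample of 900, the same total measurement budget, yields a smaller standard error.

```python
import math
import random
import statistics

random.seed(3)

# Synthetic "forest": tree heights with mean 20, standard deviation 5
population = [random.gauss(20, 5) for _ in range(200000)]

# Option A: 30 samples of size 30; std dev of the 30 sample means
means_30 = [statistics.mean(random.sample(population, 30)) for _ in range(30)]
sem_a = statistics.stdev(means_30)                     # ~ 5 / sqrt(30), about 0.91

# Option B: one sample of 900 (same 900 total measurements)
sample_900 = random.sample(population, 900)
sem_b = statistics.stdev(sample_900) / math.sqrt(900)  # ~ 5 / sqrt(900), about 0.17

print(sem_a, sem_b)
```

Both quantities are standard errors, but for different effective sample sizes: the first is the spread of means of 30 measurements, the second the estimated spread of a mean of 900.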

Reply to  Bellman
July 7, 2023 6:55 am

Part 2

“I am a Type 2 diabetic”

I’m really sorry to hear that. You have my condolences. But I’m definitely not the person to ask for medical advice. Please speak to your doctor, not some anonymous internet source such as me.

I don’t know what would be expected from two different instruments on two different parts of the body, but it’s possible that one of the readings is wrong. Please don’t look at the average of the two. This is a tolerance issue, not a question of the mean, and if it’s critical that the value doesn’t get too low, I would say the safer option is to assume the lower figure is correct.

It’s a similar problem with uncertainty in temperature rises. If you are worried about too much warming, but you are told there are huge uncertainties in the record, you will have to assume the worst case is correct until you have better data.

Reply to  Bellman
July 7, 2023 8:51 am

Fortunately real engineering normally includes a cost-benefit analysis that would prevent spending hundreds of trillions of dollars, planting tens of thousands of wind turbines that don’t work, to fix a problem that can’t be detected (see Fig. 1), based on a number that exists nowhere in the real world.

This is the essence of the GAT hoax.

Reply to  Bellman
July 7, 2023 3:53 pm

“I don’t know what would be expected from two different instruments on two different parts of the body, but it’s possible that one of the readings is wrong”

ROFL!! But not for temperatures? Or LIG thermometers? The difference in readings from taking measurements on the fingertip vs the back of the arm IS EXACTLY THE SAME AS TAKING SINGLE TEMP READINGS AT DIFFERENT LOCATIONS!

Which one is correct? Which one is least correct? Which one is most correct?

“It’s a similar problem with uncertainty in temperature rises. If you are worried about top much warming, but you are told there are huge uncertainties in the record, you will have to assume the worst case is correct until you have better data.”

Except climate science assumes all the uncertainties cancel! So they have no “worst case”!

Reply to  Bellman
July 7, 2023 3:50 pm

“Wrong on most points. Pointless going through all this again. I’ve explained why you don’t need more than one sample. I’ve explained why taking multiple samples would be pointless and a waste of time and money, and in some cases unethical.”

Unless your sample consists of the entire population, you can only ASSUME the sample distribution is the same as the population distribution. With only one sample you have no way to judge how close you are to the population mean. You can’t calculate the SEM unless you know the population standard deviation, and if you already know the population standard deviation then you likely also know the population mean, so the SEM is meaningless.

You fail all the way around.

“You would be right to call any of these figures a guess, but it’s a highly educated guess”

It’s not a guess at all let alone an educated guess. It’s an assumption to make things easier – something statisticians do but not people who have to live with the results of the measurements.

It’s like saying I can measure the diameter of all the pistons in my car at 100,000 miles and assume the distribution of wear in that sample is the distribution of wear for all cars.

The CLT allows you to *measure* how close you are. That’s hardly a guess. With just one sample the CLT is not in play!

“based on one large sample then one smaller sample.”

You keep forgetting Bevington. By the time you make the single sample large enough, nonstatistical fluctuations will have already set in, making your “guess” meaningless.
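The CLT behaviour being argued over here is easy to demonstrate numerically. A minimal sketch (the population, sample size, and trial count are made up for illustration, and the population is deliberately non-Gaussian): repeated samples of size n produce means whose spread matches σ/√n, which is what a single sample can only estimate via s/√n.

```python
import random
import statistics

random.seed(42)

# Hypothetical skewed population -- illustration only, not data from the thread.
population = [random.expovariate(1.0) for _ in range(100_000)]
pop_sd = statistics.pstdev(population)

n = 50           # sample size
trials = 2000    # number of independent samples

# Spread of the sample means across many repeated samples
means = [statistics.fmean(random.sample(population, n)) for _ in range(trials)]
observed_sem = statistics.stdev(means)   # observed spread of sample means
predicted_sem = pop_sd / n ** 0.5        # CLT prediction: sigma / sqrt(n)

print(observed_sem, predicted_sem)       # the two agree to within a few percent
```

Whether one sample of size n suffices to trust s/√n as that spread is, of course, exactly the point in dispute above.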



Reply to  Bellman
July 6, 2023 4:13 am

A distribution is *NOT* an abstract concept. You are trying to get out from under your idiotic statements.

The distribution can be extremely important in physical science. For example, the insulating rings in the booster on the Challenger or the infection factor for COVID in healthy youth, or the shear strength of the steel beams used in a bridge span.

As usual you just totally ignore reality and remain in your alternate universe where guessing at things or ignoring things have no consequence for you.

Reply to  Bellman
July 3, 2023 7:30 am

Stuffing the average formula into the GUM (or the vaunted NIST uncertainty machine) is an abuse of these tools.

You don’t know what you are doing, as well evidenced by this:

I also say that there’s a logical fallacy in applying the concept of a systematic error to an anomaly. Any truly systematic errors will cancel when you subtract one value from another.

Just another indication that you don’t know what you don’t know.

affect the trend.

Ah yes, the only thing that really matters. Damn the torpedoes, defend the trend!

Reply to  karlomonte
July 3, 2023 12:35 pm

“Stuffing the average formula into the GUM (or the vaunted NIST uncertainty machine) is an abuse of these tools.”

Strange, it seems like only a little while ago karlo was the one insisting we had to use those equations to understand why uncertainties grew with sample size. It was his answer to all objections, just use equation 10 of the GUM.

Then when we pointed out that it produced the same result, a decrease in uncertainty with sample size, he made a valiant effort to demonstrate how he could make it show increasing uncertainties.

Only when that failed did he go around saying it’s the wrong equation, and that it’s an abuse to use it.

Reply to  Bellman
July 3, 2023 2:05 pm

Strange, the trendology lot no longer points to the NIST TN after it was pointed out that it doesn’t say what they claimed.

Reply to  karlomonte
July 3, 2023 3:17 pm

What NIST TN? Do you mean TN1900 example 2 that Jim keeps banging on about, whilst Tim insists it doesn’t work in the real world?

I don’t know about the “trendology lot”, but I have mentioned it lots of times. I think it’s correct with a few minor caveats. I’ve really no idea why you think I don’t. After all it’s just saying take the SEM of a collection of maximum temperatures as the uncertainty of the average.
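The TN 1900 Example 2 procedure being described — take the SEM of a collection of daily maximum temperatures as the uncertainty of the average — can be sketched in a few lines. The readings below are invented for illustration; they are not the NIST data.

```python
import statistics
from math import sqrt

# Hypothetical daily maximum temperatures (degrees C) read to the nearest 0.25.
# Made-up values, NOT the actual TN 1900 Example 2 data set.
t_max = [24.25, 26.0, 25.5, 23.75, 27.25, 25.0, 24.5, 26.75,
         25.25, 24.0, 26.5, 25.75, 23.5, 27.0, 25.0, 24.75]

m = len(t_max)
mean = statistics.fmean(t_max)
s = statistics.stdev(t_max)   # sample standard deviation of the readings
u = s / sqrt(m)               # standard uncertainty of the average (the SEM)

# TN 1900-style expanded uncertainty uses a Student-t factor with m-1
# degrees of freedom; 2.13 is roughly the 95 % factor for 15 d.o.f.
U = 2.13 * u
print(f"mean = {mean:.2f} C, u = {u:.2f} C, U95 ~ {U:.2f} C")
```

This is the statistical-sample reading of the example; whether it captures instrumental uncertainty is the argument carried on below.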

Reply to  Bellman
July 3, 2023 4:08 pm

You do realize that TN 1900 has nothing to do with measurement uncertainty, don’t you? Both random and systematic uncertainty are assumed to be negligible.

The purpose is to evaluate the effects of experimental uncertainty caused by obtaining varying values for the measurand under the same conditions of measurement. IOW, experiments.

The implication is that measurement uncertainty may be large enough to affect the results and needs to be added into the total uncertainty.

Every time another daily datum is added, the results need to be recalculated to find the new mean and experimental uncertainty.

This is exactly how GUM Section 4 deals with experimental uncertainty.

Reply to  Jim Gorman
July 3, 2023 5:48 pm

You do realize that TN 1900 has nothing to do with measurement uncertainty don’t you?

I think I pointed that out to you when you first brought it to my attention. But it is called “Simple Guide for Evaluating and Expressing the Uncertainty of NIST Measurement Results”. Really it depends on how you are defining the measurand and the measurement model. They define it as an observational model, which looks just like a statistical sample to me.

Which one makes more sense would depend on what question you were asking.

Both random and systematic uncertainty is assumed to be negligible.

They assume the calibration uncertainty is negligible. In terms of resolution, the readings are given to the nearest 0.25°C, but the temperatures vary by over 10°C, so it seems unlikely that would have any noticeable effect.

This exactly how GUM Section 4 deals with experimental uncertainty.

And it is the same method you would apply to a random sample of different things. But for some reason you seem to think that needs different maths. The problem is none of these sources talk about random sampling as such, because they are focused on measurements, and you refuse to use statistics books because they don’t deal with measurement.

Reply to  Bellman
July 4, 2023 4:28 am

“They assume the calibration uncertainty is negligible.”

Possolo also assumed no random error. Did you see *any* statement of uncertainty in the document? He assumed *no* uncertainty which means both systematic and random uncertainty.

It is just like climate science – and you – where it is assumed that all uncertainty is random, Gaussian, and cancels and can therefore be ignored in any statistical analysis of the temperature data sets. Trend lines based on stated values only are 100% accurate because the stated values are 100% accurate!

You just can’t help yourself, can you? You need to abandon your meme of all uncertainty being random, Gaussian, and cancels.

Reply to  Tim Gorman
July 4, 2023 5:45 am

Possolo also assumed no random error.

He does not. It would be stupid if he did, as the measurements are only to the nearest 1/4 degree. What he assumes is that any random uncertainty is negligible compared to what you are actually measuring, the daily variance in temperatures.

I’ll ignore the rest of the credo. There’s only so many times I can hear the same lies repeated as a mantra.

Reply to  Bellman
July 4, 2023 9:22 am

“He does not. It would be stupid if he did, as the measurements are only to the nearest 1/4 degree. What he assumes is that any random uncertainty is negligible compared to what you are actually measuring, the daily variance in temperatures.”

As usual you haven’t even bothered to read TN1900 but here you are making comments about it!

For example, Possolo says: “This so-called measurement error model (Freedman et al., 2007) may be specialized further by assuming that E1, . . . , Em are modeled independent random variables with the same Gaussian distribution with mean 0 and standard deviation σ. In these circumstances, the {ti} will be like a sample from a Gaussian distribution with mean τ and standard deviation σ (both unknown).” (bolding mine, tpg)

This meets all requirements for assuming random error with a Gaussian distribution that cancels. Thus allowing the variation in the stated values to be used in defining the uncertainty of the average.

This has been pointed out to you on three separate occasions that I know of, one of them being by me. And yet you INSIST on ignoring the implications of what Possolo laid out in the example. I can only assume that is deliberate. No one can be that unintentionally ignorant or forgetful.

Reply to  Tim Gorman
July 4, 2023 10:42 am

Why don’t you try reading what I said rather than answering points I never made?

Your quote says nothing about the size of the measurement uncertainties, just that it’s assumed that the actual temperatures follow a Gaussian distribution.

Reply to  Bellman
July 3, 2023 4:56 pm

Go look in the mirror.

bdgwx
Reply to  Pat Frank
July 2, 2023 6:57 pm

That’s patently false Pat. It is unequivocal that Bevington intends his readers to use either 4.10, 4.14, or 4.23 to compute the uncertainty of the mean. 4.13 is but an intermediate step used in 4.14 just like 4.22 is an intermediate step used in 4.23.

Your criticism that 4.23 is for random error only cannot be solved with equation 4.22. To handle both random and systematic error you have to use general law of propagation of uncertainty 3.13 which has the covariance term. Of course, JCGM 100:2008 equation 16 does the same thing but uses the concept of correlation via r instead of the covariances.

Reply to  bdgwx
July 2, 2023 8:34 pm

bee’s wax doubles-down on his stupidity.

Reply to  bdgwx
July 2, 2023 9:17 pm

It’s exactly right, bdgwx, both by inspection and according to Bevington’s own text.

Propagation of error is an irrelevant non sequitur. Shifting ground is dishonest.

bdgwx
Reply to  Pat Frank
July 3, 2023 7:11 am

PF: It’s exactly right, bdgwx, both by inspection and according to Bevington’s own text.

Where does Bevington say 4.22 is for systematic error and 4.23 is for random error?

Reply to  bdgwx
July 3, 2023 11:03 am

Bevington is all about random error.

A given researcher must bring knowledge of the sort of error to the calculation.

Doing science is making decisions based on training, knowledge, and standards of practice, bdgwx.

You’re second guessing all of those without possessing any of them.

bdgwx
Reply to  Pat Frank
July 3, 2023 1:01 pm

PF: Bevington is all about random error.

Let me get this straight. Bevington is all about random error except for this one isolated equation on page 58 in a section on relative uncertainties. Is that what you are saying? Where does Bevington say that?

And why would you use an equation that is only for systematic error anyway? Surely you don’t believe that all measurements from all instruments have the EXACT same error?

Reply to  bdgwx
July 3, 2023 1:37 pm

What in Pete’s name makes you think anything on Page 58 has to do with systematic uncertainty?

YOU ARE CHERRY PICKING AGAIN!

Relative uncertainty doesn’t mean “systematic uncertainty”!

Bevington says right in the start of the book that systematic uncertainty is not amenable to statistical analysis! How many times have you been given the exact quote? Do you want it again? Will you remember it if I give it to you ONE MORE TIME?

You are totally lost in the weeds of uncertainty. You have been so for a long time!

Reply to  Tim Gorman
July 3, 2023 2:06 pm

There is no way out for them.

bdgwx
Reply to  Tim Gorman
July 3, 2023 4:41 pm

TG: What in Pete’s name makes you think anything on Page 58 has to do with systematic uncertainty?

Nothing. That’s my point. It sounds like you’re just as incredulous about it as I am. Can you help me convince Pat that Bevington does not mention anything about systematic uncertainty on page 58 or in relation to equation 4.22?

Reply to  bdgwx
July 3, 2023 4:59 pm

So what if he doesn’t mention it on Page 58. He says on Page 3, right at the start of the book, “The accuracy of an experiment, as we have defined it, is generally dependent on how well we can control or compensate for systematic errors, errors that will make our results different from the true values with reproducible discrepancies. Errors of this type are not easy to detect and not easily studied by statistical analysis.”

On Page 6 he says: “As we make more and more measurements, a pattern will emerge from the data. Some of the measurements will be too large, some will be too small. On the average, however, we expect them to be distributed around the correct value, assuming we can neglect or correct for systematic errors.” (bolding mine, tpg)

In the book, he assumes distributions that are random, with *NO* systematic uncertainty. As he says, systematic errors are not easy to detect and not easily studied by statistical analysis.

You and climate science want to ignore systematic bias in measurements so you can assume everything is random, Gaussian, and that it all cancels. Then you can play with your statistical analysis with no problem.

Pat Frank has shown that you simply 1. can’t ignore systematic uncertainty in the form of instrumental uncertainty and 2. that systematic uncertainty does not cancel.

And that just chaps your backside no end and you will make any claim, no matter how stupid it is, to try and refute those simple facts. Wake up and smell the roses the rest of us enjoy in the real world!

Reply to  bdgwx
July 3, 2023 5:00 pm

Time to sharpen the point on your head.

Reply to  Tim Gorman
July 3, 2023 4:59 pm

Incredible, he’s the self-proclaimed “expert” who doesn’t even grasp the basic concepts let alone the terminology. And now he has the temerity to lecture Pat Frank.

Reply to  karlomonte
July 3, 2023 5:24 pm

These are people that have never, NOT ONCE, put their knowledge of measurement and measurement uncertainty on the line for their reputation, financial well-being, or for personal/criminal liability if their estimates of uncertainty should prove to be wrong.

And yet, here they are lecturing us on how all uncertainty cancels. What a freaking joke!

Reply to  bdgwx
July 3, 2023 5:25 pm

Is that what you are saying?

No.

Reply to  bdgwx
July 3, 2023 4:51 am

“To handle both random and systematic error you have to use general law of propagation of uncertainty 3.13”

You are back to cherry-picking with no actual understanding of the context of what you are cherry-picking.

Bevington says in Section 3.1: “However, we may have some control over these uncertainties and can often organize our experiment so that the statistical errors are dominant.”

Both Bevington and the GUM develop Eq 3.13 with the understanding that systematic bias does not exist, only random error (Bevington’s “statistical errors”).

This carries over into Chapter 4 of Bevington as well. In the lead-in to Chapter 4 he states: “In Chapter 2 we defined the mean u of the parent distribution and noted that the most probable estimate of the mean u of a random set of observations is the average x_bar of the observations.”

This implies multiple measurements of the same thing with no systematic bias in the measuring device.

Yet you continue trying to apply these methods of statistical analysis to a situation where there is both systematic bias and likely no Gaussian distribution of random error – i.e. the temperature record.

It just confirms that you have the very same belief as bellman and Stokes: all uncertainty is random, Gaussian, and cancels.

You folks simply can’t get out the box you have built for yourself using that meme!

Reply to  Bellman
July 2, 2023 4:01 am

Of course you don’t understand what uncertainty is so your statement that the interval remains the same is meaningless.

Pat has never said you get the same error, only that you get the same uncertainty! The actual magnitude of “error” is unknown and forever unknowable. It will exist somewhere in the interval and you can’t know it – ever.

Reply to  Bellman
July 2, 2023 10:46 am

I’ll just draw my own conclusions.

You did that right from the start and have promulgated them ever since.

Reply to  Bellman
July 1, 2023 3:02 pm

“Unskilled and Unaware”

Look it up.

Reply to  karlomonte
July 1, 2023 3:09 pm

“Unskilled and Unaware”

At the risk of using a worn-out cliché:

IRONY OVERLOAD.

Reply to  Bellman
July 1, 2023 9:45 pm

Heh, more amusement from the pseudoscience GAT hoax crowd.

Reply to  Bellman
July 2, 2023 8:32 pm

In all my years working in the software industry, rtfm was my least favourite excuse. It’s usually just blaming the customer for your own bad design.

Logical fallacy of the Inappropriate Analogy—good job.

Reply to  Bellman
July 2, 2023 10:41 am

Because they’re not, “averaging thousands of instruments on a single day, or the global average over a month, or a year or 30 years.”

Eqns. 5 and 6 obviously calculate the root-mean-square of the uncertainty over different time scales.

Obvious on its face, and obvious to everyone except you and bdgwx, apparently.

bdgwx
Reply to  Pat Frank
July 2, 2023 11:35 am

PF: Eqns. 5 and 6 obviously calculate the root-mean-square of the uncertainty over different time scales.

Obvious on its face, and obvious to everyone except you and bdgwx, apparently.

Yeah. We know. It’s obvious to Bellman and me too.

The question has always been…why? Why did you choose to use RMS?

Reply to  bdgwx
July 2, 2023 1:29 pm

Why did you choose to use RMS?

Because the analysis at that point required the rms of the uncertainty.

Reply to  Pat Frank
July 2, 2023 12:12 pm

Yes that’s what they do. The question is why?

RMS only tells you the individual measurement uncertainty over the different time periods, and as you already knew that, your equations just leave you with the number you first thought of.

But you then try to pass these off as the uncertainty of the mean, and I think that’s misleading and at the least needs to be justified, rather than just being taken for granted.

Reply to  Bellman
July 2, 2023 12:47 pm

Hah! How can two humans possibly be this neutronium dense?

Reply to  Bellman
July 2, 2023 1:33 pm

as the uncertainty of the mean

No. The mean of the uncertainty.

You and bdgwx are caught in an infinite mistake loop. The prime failing of the ideologically committed.

Reply to  Pat Frank
July 2, 2023 2:55 pm

What you actually say in the paper is

The uncertainty in Tmean for an average month (30.417 days) is the RMS of the daily means

Likewise, for an annual land-surface air-temperature mean

If you meant you are calculating the average uncertainty then that seems an odd way of phrasing it. Why talk about an annual mean if you are not calculating the uncertainty of that mean?

And all you are doing here is saying the average uncertainty is the average uncertainty you already calculated in (4).

But if you do intend it to be the average uncertainty, how can you then claim in the next paragraph

Noteworthy is that the measurement uncertainty conditioning a temperature anomaly based upon the uncertainty in Tmean alone is (T_M,mean − T_30-year normal) = T_M,anomaly, and 2σ_M,anomaly = 1.96 × ±√(0.195² + 0.195²) = ±0.540 °C, where M is month.

and then go on to use this average uncertainty figure in 4.4 to compute “the lowest limit of uncertainty in any global annual LiG-derived air-temperature anomaly prior to 1981”.

Whatever you may claim, somehow the average uncertainty is becoming the uncertainty of the average.

Reply to  Bellman
July 2, 2023 6:25 pm

Resolution provides the constant lower limit of uncertainty in each LiG measurement.

The resolution uncertainty in a daily mean is identical to the resolution uncertainty in a monthly mean because the uncertainty in every daily mean enters e.g., 30 times per month and is divided by 30 for the mean of uncertainty, i.e., eqn. 5.

Likewise, it’s identical to the resolution uncertainty in an annual mean of uncertainty, by eqn. 6, and necessarily the same in a 30-year normal.

When the anomaly is calculated, the 30 year mean of resolution uncertainty in the normal must be added in quadrature with the mean of uncertainty in, e.g., the annual mean temperature.
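The arithmetic behind these statements can be checked directly. A minimal sketch, using the ±0.195 °C resolution figure quoted in this thread: the RMS of a constant daily uncertainty over a month is that same constant, and the anomaly combines two such terms in quadrature.

```python
from math import sqrt

u_daily = 0.195  # constant lower-limit resolution uncertainty (deg C), as quoted

# Eqn-5-style RMS over a month: the RMS of a constant is the constant,
# so the monthly value equals the daily value.
days = 30
u_month = sqrt(sum(u_daily**2 for _ in range(days)) / days)

# Anomaly: the 30-year normal's uncertainty combines in quadrature
# with the monthly mean's, then is expanded by 1.96 for ~95 % coverage.
u_anom_95 = 1.96 * sqrt(u_month**2 + u_month**2)

print(u_month, u_anom_95)   # 0.195 and ~0.54 deg C
```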

Reply to  Pat Frank
July 2, 2023 6:34 pm

Why is this hard to understand, especially about anomalies? It is an article of faith that anomalies can exist in the thousandths.

Reply to  Jim Gorman
July 2, 2023 7:09 pm

I’ve really no idea why you keep repeating this nonsense. Nobody claims an anomaly for an individual station is more precise than the absolute value. That isn’t the point of using anomalies.

Reply to  Bellman
July 2, 2023 8:58 pm

Rohde, et al, 2013, p. 4: “For temperature differences, the C(x) term cancels (it doesn’t depend on time) and that leads to much smaller uncertainties for anomaly estimates than for the absolute temperatures. (my underline)”

Nick Stokes
Reply to  Pat Frank
July 3, 2023 2:53 am

Bellman is correct. Rohde is saying that an anomaly estimate of global average temperature has much smaller uncertainty than the global average of absolute temperatures. Which it does. The reason is that the main component of GAT uncertainty is coverage error – what if you had sampled in different places. Anomalies are more spatially homogeneous, so coverage error is smaller.

Reply to  Nick Stokes
July 3, 2023 4:32 am

“Anomalies are more spatially homogeneous, so coverage error is smaller.”

Wintertime temperatures have a different variance than summertime temperatures. Thus it follows that the anomalies calculated for each will also have different variances. That difference in variance is a measure of the uncertainty of the result. If the anomalies have the same variance as the absolute temperatures then they will have the same uncertainty.

Meaning the GAT has the same uncertainty as the absolute temperatures.

In climate science you can’t tell because no one ever calculates the variances of anything! Not of the base data, not of the daily averages, not of the monthly averages, or of the annual averages.

Now, come back and tell us that variances cancel when you take an average or that they cancel when you do a subtraction to obtain an anomaly.

Reply to  Nick Stokes
July 3, 2023 5:59 am

Bellman is prima facie wrong. Rohde et al., produced exactly what Bellman supposed does not happen.

Reply to  Nick Stokes
July 3, 2023 7:40 am

Which it does. 

Oh look, Nitpick shows up in a vain attempt to rescue his flailing acolytes.

the main component of GAT uncertainty is coverage error

Bullshite — as usual, the trendology clowns throw away the variances from their myriad of averages.

bdgwx
Reply to  Pat Frank
July 3, 2023 7:09 am

And the law of propagation of uncertainty gives us the answer why anomalies have a lower uncertainty than absolute values. If you look at the correlation term for the measurement model y = a – b you’ll see 2*c_a*c_b*u(a)*u(b)*r(a, b) where c is the partial derivative of y. Notice that c_a = 1 and c_b = -1. So when r(a, b) > 0 then the whole term is negative thus reducing the uncertainty. And when r(a, b) = 1 then there is full cancellation of error thus u(y) = 0.
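The expression stated here can be written out as a small function — a sketch of the y = a − b case of the propagation law, not code from any of the cited texts:

```python
from math import sqrt

def u_difference(u_a, u_b, r):
    """Combined standard uncertainty of y = a - b per the law of
    propagation of uncertainty with correlation coefficient r.
    Sensitivity coefficients: c_a = +1, c_b = -1, so the cross term
    enters with a minus sign."""
    # max() guards against a tiny negative rounding residue inside sqrt
    return sqrt(max(0.0, u_a**2 + u_b**2 - 2 * r * u_a * u_b))

# r = 0: independent errors, quadrature sum
# r = 1 with equal u: full cancellation
# r = -1: straight addition (the worst case)
print(u_difference(0.2, 0.2, 0.0))   # ~0.283
print(u_difference(0.2, 0.2, 1.0))   # 0.0
print(u_difference(0.2, 0.2, -1.0))  # 0.4
```

Whether r > 0 is a justified assumption for the quantities in question is precisely what the replies below dispute.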

Reply to  bdgwx
July 3, 2023 7:34 am

And the law of propagation of uncertainty gives us the answer why anomalies have a lower uncertainty than absolute values. “

Malarky. Look at Taylor, Section 2.5. It doesn’t matter if you add or subtract, the uncertainties ADD!

It’s easily explained by variances. It doesn’t matter whether you add or subtract random variables, when you combine them their variances add. Variance is just a measure of uncertainty. The wider the variance the higher the uncertainty.
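The variance-addition claim for independent variables is easy to verify by simulation; a sketch with made-up error distributions (one deliberately non-Gaussian):

```python
import random
import statistics

random.seed(0)
N = 200_000

# Two independent "error" populations with different spreads.
a = [random.gauss(0.0, 0.3) for _ in range(N)]
b = [random.uniform(-0.5, 0.5) for _ in range(N)]  # non-Gaussian on purpose

var_a = statistics.pvariance(a)
var_b = statistics.pvariance(b)
var_diff = statistics.pvariance([x - y for x, y in zip(a, b)])

# For independent variables Var(a - b) = Var(a) + Var(b):
# subtraction does not shrink the spread.
print(var_a + var_b, var_diff)
```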

How many statistics textbook quotations do you need before this sinks in? I can give you quotes from five textbooks.

Why do you assume *any* correlation between temperatures measured in different locations by different measuring devices. Do you understand what a “confounding variable” is? Temperatures generally correlate to the seasons, not to themselves. Temperatures at Station A have many contributing factors, including pressure, elevation, humidity, geography, and terrain. Each of these can be different for each location leading to highly uncorrelated temperatures. The temperature on Pikes Peak is, generally, not correlated to the temperature in Colorado Springs, except for seasonal variations at both. The temperatures in Boston are not generally correlated to those in Kansas City. KC temps may go up and down with no relationship at all to what is happening in Boston (other than seasonal variation) because of Boston’s closeness to the ocean.

You keep trying to bring in things that make no physical sense. So do the climate scientists.

Reply to  Tim Gorman
July 3, 2023 12:17 pm

“It doesn’t matter whether you add or subtract random variables, when you combine them their variances add. ”

Independent random variables.

It’s always the same with you. You always come back to the meme that all uncertainties are random and independent. [/sarc]

Reply to  Bellman
July 3, 2023 4:16 pm

When you are measuring different things using different things they *are* random, independent, and uncorrelated.

We’ve been down the road before. Go look up confounding variables again.

Reply to  Tim Gorman
July 3, 2023 5:19 pm

When you are measuring different things using different things they *are* random, independent, and uncorrelated.

You keep forgetting which side you are supposed to be arguing. It must be difficult to keep all your stories straight when you are trolling.

You are the one who keeps insisting that all measurements have some systematic error. You want to argue that averaging different things does not result in any cancellation. This only makes sense if you think there is perfect correlation between all values, or measurements, and they are not independent or random.

Reply to  Bellman
July 4, 2023 4:05 am

The *stated values +/- uncertainty interval* are what are random. The uncertainty interval *does* include systematic uncertainty.

You can’t even get this simple fact straight. It’s no wonder you can’t get *any* of it right!

You want to argue that averaging different things does not result in any cancellation. This only makes sense if you think there is perfect correlation between all values, or measurements, and they are not independent or random.”

Unfreakingbelievable. If I pick up two boards out of the ditch as I travel, exactly how are their lengths correlated? How are they not independent and random? How do I expect any cancellation of uncertainty if I use them as part of a roof truss for a shed I am building?

If I use two different measuring tapes when I get home, how are the measuring devices correlated?

You really *are* lost in your cultists dogma!

Reply to  bdgwx
July 3, 2023 8:26 am

More tool abuse.

Reply to  bdgwx
July 3, 2023 11:00 am

30-year normals are independent of any monthly or annual temperature mean.

bdgwx
Reply to  Pat Frank
July 3, 2023 12:57 pm

PF: 30-year normals are independent of any monthly or annual temperature mean.

That’s not true at all. If there is a time invariant systematic effect on measurements (and there will be) then both the 30 yr average and monthly/annual anomaly based off it will contain that systematic effect. That portion cancels out when doing the subtraction.

The real issue is in how we quantify the components of uncertainty that do not cancel on the anomaly subtraction.

Reply to  bdgwx
July 3, 2023 2:22 pm

That portion cancels out when doing the subtraction.

Pardon me while I laugh.

bdgwx
Reply to  Pat Frank
July 4, 2023 9:01 am

PF: Pardon me while I laugh.

You can laugh all you want. The math is indisputable. Prove me wrong. Set y = a – b and r > 0 and show that the uncertainty is not reduced via the law of propagation of uncertainty, JCGM 100:2008 equation 16.

Reply to  bdgwx
July 4, 2023 1:20 pm

Your math is irrelevant, your understanding of the systematic error due to environmental variables is wrong.

bdgwx
Reply to  Pat Frank
July 4, 2023 3:00 pm

It’s not my math. It’s the law of propagation of uncertainty. It appears in many texts on the topic of uncertainty analysis like Bevington, Vasquez, and JCGM which you’ve cited in your publications. And because you cited them I presume it is the math you wanted your readers to use.

Reply to  bdgwx
July 4, 2023 5:37 pm

The GUM equations are based on multiple measurements of the same thing and assumes no systematic bias in the measurements.

You simply can’t accept that it doesn’t apply to multiple measurements of different things using different devices that have different systematic uncertainties.

It would mean that you have to abandon the claims of climate science as being physically and mathematically unsound – you would, in essence, become a heretic to that which you are defending. And you simply can’t make that leap, no matter what proof you are given.

bdgwx: All uncertainty is random, Gaussian, and cancels. The first commandment of climate science. I’m sure its written on a tablet somewhere!

Reply to  bdgwx
July 3, 2023 2:39 pm

Dude, you are subtracting two random variables. There is NO cancellation. They add, simply add. You can’t even prove that they are orthogonal so you can use RSS.

bdgwx
Reply to  Jim Gorman
July 4, 2023 9:02 am

JG: Dude, you are subtracting two random variables. There is NO cancelation.

Prove me wrong. Set y = a – b and r > 0 and show that the uncertainty is not reduced via the law of propagation of uncertainty via JCGM 100:2008 equation 16.

A simple request…please…use a computer algebra system to verify your work before posting.

Reply to  bdgwx
July 3, 2023 4:34 pm

You are dead wrong on this. Go look at Taylor, Section 2.5. He covers this in detail. Uncertainties add, they *always* add.

bdgwx
Reply to  Tim Gorman
July 4, 2023 9:04 am

TG: You are dead wrong on this.

Prove me wrong. Set y = a – b and r > 0 and show that the uncertainty is not reduced via the law of propagation of uncertainty via JCGM 100:2008 equation 16.

A simple request…please…use a computer algebra system to verify your work before posting.

Reply to  bdgwx
July 4, 2023 11:54 am

Someone needs to hit the big red switch on bgwxyz, he’s stuck in a loop again.

Reply to  karlomonte
July 4, 2023 5:45 pm

He’s never out of the loop!

Reply to  bdgwx
July 4, 2023 1:46 pm

That is an ill posed question. Start over.

If the variables have the same effect on “y”, then the uncertainties will add. r > 0 could mean anything from little correlation to perfect correlation.

Have you never written a hypothesis and worked out the math from the data you have, and then designed an experiment to verify it? Then had to say, shite, something is wrong.

You don’t start with a half-baked equation with no data, assumptions, or planned result. All you want to do is troll. You have plenty of equations in the essay.

Why don’t you write an essay explaining what you think uncertainty should be from Tmax and Tmin all the way through to GAT as shown in the database of your choice. You obviously think there are errors in everything done here. Put your money where your mouth is and show how you would do it.

Reply to  bdgwx
July 4, 2023 5:43 pm

I’ve given you the proof. I’ve pointed you to where it’s laid out in Taylor, Chapter 2. And you absolutely refuse to accept it for some reason.

If you have a +/- u1 and b +/- u2, then the range of the uncertainty interval for a is from a − u1 to a + u1. Similarly for b, from b − u2 to b + u2.

When you add the two, the interval for a + b runs from (a − u1) + (b − u2) to (a + u1) + (b + u2), i.e., (a + b) ± (u1 + u2). The interval increases.

When you subtract the two, the same thing happens. The interval for a − b runs from (a − u1) − (b + u2) to (a + u1) − (b − u2), i.e., (a − b) ± (u1 + u2). The interval increases.

It doesn’t matter if you subtract or add two anomalies, the resultant uncertainty interval is the sum of the two component uncertainty intervals. You can add them directly if you think there is no cancellation between the two or you can add them in quadrature if you think there is *some* cancellation between the two. What you can’t do is assume they cancel!
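A worked version of this interval arithmetic (the values are invented for illustration), showing both the direct-addition and quadrature options:

```python
# Worst-case (direct-addition) interval for a difference, Taylor-style.
# The numbers are made up for illustration.
a, u1 = 10.0, 0.5
b, u2 = 4.0, 0.3

lo = (a - u1) - (b + u2)    # smallest possible value of a - b
hi = (a + u1) - (b - u2)    # largest possible value of a - b
half_width = (hi - lo) / 2  # equals u1 + u2: the uncertainties add

# Quadrature (RSS) alternative, for *some* assumed cancellation:
rss = (u1**2 + u2**2) ** 0.5

print(lo, hi, half_width, rss)   # 5.2 6.8 0.8 ~0.583
```

Either way the combined uncertainty is larger than each input's; neither option makes the uncertainties vanish.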

Reply to  bdgwx
July 3, 2023 5:02 pm

Who are “we”? The voices inside your head.

You have no idea what you are talking about.

Reply to  Bellman
July 3, 2023 4:37 am

Nobody claims an anomaly for an individual station is more precise than the absolute value. That isn’t the point of using anomalies.”

Nick Stokes claims EXACTLY this!

If the precision of the anomaly is the same as the absolute temps then why use anomalies? Stokes claims it’s because it increases homogeneity somehow. Yet the variance of temps in a single month is different for the NH than for the SH. That carries through to the anomalies as well. So how does averaging those together increase homogeneity?

Reply to  Tim Gorman
July 3, 2023 12:01 pm

“Nick Stokes claims EXACTLY this!”

When? Please quote the exact words and the entire context.

You have demonstrated too imperfect an understanding of most of what you are told for me to trust your assertion.

Reply to  Bellman
July 3, 2023 4:14 pm

Stokes: “Because some deviate high and some deviate low, for whatever reason. And when you add them, the deviations cancel.”

“Errors cancel even if not normally distributed (which is not the same as random).”

If you would bother to follow the entire thread you would already know this.

Reply to  Tim Gorman
July 3, 2023 5:24 pm

Because some deviate high and some deviate low, for whatever reason. And when you add them, the deviations cancel

Talking about averaging not anomalies.

Errors cancel even if not normally distributed (which is not the same as random).

Talking about averaging not anomalies.

I’ll take this as an admission you were lying about Stokes.

Reply to  Bellman
July 4, 2023 4:15 am

“Talking about averaging not anomalies.”

OMG! When you add them they cancel but they don’t cancel when you subtract them? Or do they always cancel when you add them or subtract them?

Which is it?

“Talking about averaging not anomalies.”

Averages of skewed distributions don’t sit at the left/right equality point. The median is a far better descriptor. What is the median of the climate databases? And again, averages are calculated from sums while anomalies are calculated from differences. You are apparently saying that in one case the uncertainties cancel and in the other they don’t. In which one do the uncertainties cancel?

Nope, no lying about Stokes. He believes even systematic errors cancel!

Reply to  Tim Gorman
July 4, 2023 5:18 am

Do you actually talk like this in real life?

OMG! When you add them they cancel but they don’t cancel when you subtract them?

No. When you add them they increase, when you subtract them they increase. It’s only when you take an average they decrease.

Reply to  Bellman
July 4, 2023 6:32 am

These subjects are completely beyond your ken, just admit it.

Your GAT hoax political agenda is completely transparent.

Reply to  Bellman
July 4, 2023 8:38 am

Sad. Truly, truly sad.

The average uncertainty is *NOT* the uncertainty of the average. You’ll never understand this, will you?

Reply to  Tim Gorman
July 4, 2023 9:35 am

The really sad thing is that you still can’t see I’m agreeing with you. The average uncertainty is *NOT* the uncertainty of the average. It’s what I’ve been trying to tell Pat Frank all this time.

Reply to  Bellman
July 4, 2023 1:43 pm

It’s what I’ve been trying to tell Pat Frank all this time.

Pardon me (again) while I laugh.

Delusional or extremely short attention span, one can’t decide which.

Reply to  Pat Frank
July 4, 2023 2:04 pm

Bellman: “The average uncertainty is *NOT* the uncertainty of the average. It’s what I’ve been trying to tell Pat Frank all this time.”

Pat Frank: “Delusional or extremely short attention span, one can’t decide which.”
(pointing to a comment that says):

In that case I would have no objection. The mean of the uncertainty is obviously the uncertainty of the instrument.

But then you go on to use these mean of the uncertainties to calculate the uncertainty of an anomaly. Specifically using them as the monthly and 30 year uncertainties. I don’t see how this makes sense if the values are the average uncertainty.

Then in section 4.4 you are using the same values to calculate the uncertainty of global annual temperatures and anomalies.

Could someone point to where the contradiction is?

bdgwx
Reply to  Pat Frank
July 4, 2023 2:43 pm

PF: Pardon me (again) while I laugh.

You can laugh all you want. It does not change the fact that u(Σ[x]/N) is different than Σ[u(x)^2/N] or Σ[u(x)]/N.
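The three quantities in dispute can be compared numerically (a sketch; u = 0.25 and N = 100 are hypothetical):

```python
import math

u = 0.25      # hypothetical identical uncertainty of each measurement
N = 100       # number of measurements averaged

# Average of the uncertainties: the same as each individual one
avg_of_u = sum([u] * N) / N              # 0.25

# Uncertainty of the average *if* errors are independent (r = 0):
# u(mean) = sqrt(sum(u_i^2)) / N = u / sqrt(N)
u_of_avg_r0 = math.sqrt(N * u**2) / N    # 0.025

# If errors are perfectly correlated (r = 1), nothing cancels:
u_of_avg_r1 = sum([u] * N) / N           # 0.25 again

print(avg_of_u, u_of_avg_r0, u_of_avg_r1)
```

Which of these is the relevant quantity is precisely what the thread is arguing about; the arithmetic itself is not in dispute.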

Reply to  Bellman
July 4, 2023 6:28 am

bellcurveman tries to rescue Nitpick Nick!

I love it.

Reply to  Tim Gorman
July 3, 2023 5:24 pm

It was shorthand. It would be incorrect to impute that he meant they totally cancel with less than an infinite sample size. And he explained it completely and correctly elsewhere in the thread. See his comment that ended his description of the few distributions that do not so tend towards the mean with increasing sample size.

Reply to  bigoilbob
July 4, 2023 4:23 am

You are confused. Either uncertainties cancel or they don’t. He did *NOT* explain it completely or correctly ANYWHERE. The standard deviation of the sample means gets smaller with larger sample sizes, no one disputes that. What is being disputed is whether the standard deviation of the sample means describes the accuracy of the population mean or if it just tells you how precisely you have calculated the population mean from your sampling.

It appears that in climate science it is usually assumed that the standard deviation of the sample means *is* the uncertainty associated with the population mean. That can only make sense if all the uncertainties are random, Gaussian, and cancel – leaving the population mean as the “true value”. That simply doesn’t apply when you have skewed distributions or systematic error in the dataset members. Temperature datasets *are* skewed and they all contain systematic error – as Pat has shown, the instrumental systematic uncertainty is large enough to invalidate the assumption that there is either no systematic error or that it cancels. Uncertainty is not error; it doesn’t have a scalar value and simply can’t cancel out.

Reply to  Tim Gorman
July 4, 2023 6:35 am

And to top it all off, the climastrologers throw away all the standard deviations from their myriad of averages.

Voila! milli-Kelvin “uncertainties”.

Reply to  karlomonte
July 4, 2023 9:45 am

If nothing else, every baseline has a variance/standard deviation. When subtracted from a monthly/annual average to calculate an anomaly, there should at least be a variance value from the baseline. I’ll guarantee this far exceeds an anomaly value quoted to the 1/100ths decimal place. It will be something like 0.002 ± 0.7.
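The arithmetic behind an example like 0.002 ± 0.7 can be sketched (hypothetical values: standard uncertainties of ±0.5 on both the monthly average and the baseline, combined in quadrature):

```python
import math

monthly_avg, u_monthly  = 14.502, 0.5   # hypothetical monthly mean +/- u
baseline,    u_baseline = 14.500, 0.5   # hypothetical 30-year baseline +/- u

# The anomaly is a difference, so the component uncertainties combine;
# quadrature is the most generous (partial-cancellation) assumption
anomaly = monthly_avg - baseline                     # 0.002
u_anomaly = math.sqrt(u_monthly**2 + u_baseline**2)  # ~0.707

print(f"{anomaly:.3f} +/- {u_anomaly:.1f}")   # 0.002 +/- 0.7
```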

Reply to  Jim Gorman
July 4, 2023 11:56 am

Ever see any of them try to calculate an appropriate degrees of freedom?

Reply to  Pat Frank
July 2, 2023 6:43 pm

The resolution uncertainty in a daily mean is identical to the resolution uncertainty in a monthly mean because the uncertainty in every daily mean enters e.g., 30 times per month and is divided by 30 for the mean of uncertainty, i.e., eqn. 5.

Are you now saying equation 5 is the uncertainty of the monthly mean, and not the average uncertainty? Or are you saying it’s both?

Adding an uncertainty 30 times and then dividing by 30 is still only giving you the mean uncertainty, not the uncertainty of the mean.
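The two quantities at issue can be put side by side (a sketch; the 0.25 daily resolution uncertainty is a hypothetical stand-in, and the √N reduction holds only under an independence assumption):

```python
import math

u_daily = 0.25                 # hypothetical daily resolution uncertainty
N = 30                         # days in the month

# "Mean of the uncertainty": summing the same uncertainty 30 times and
# dividing by 30 returns the daily value unchanged
mean_u = sum([u_daily] * N) / N          # 0.25

# "Uncertainty of the mean" under an independence (r = 0) assumption
u_of_mean = u_daily / math.sqrt(N)       # ~0.0456

print(mean_u, round(u_of_mean, 4))
```

The disagreement in the thread is over which of these two numbers belongs on a monthly mean, not over how either is computed.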

Reply to  Bellman
July 3, 2023 6:02 am

How many times have you been told that the mean uncertainty is the desired quantity.

Reply to  Pat Frank
July 3, 2023 7:03 am

He simply doesn’t grok what you are saying at all.

Reply to  Tim Gorman
July 3, 2023 10:57 am

Agreed, Tim. An idée fixe in action.

Reply to  Tim Gorman
July 3, 2023 11:56 am

Indeed I don’t. It makes no sense to me and I think it’s an evasive answer. I have suspicions about why he thinks the mean uncertainty is the desired quantity, but I would like to give him the chance to explain it first.

Reply to  Bellman
July 3, 2023 2:19 pm

It makes no sense to me …

because you’re ignorant of the subject, Bellman, and are evidently unable to parse a clarifying explanation.

It’s been explained over and over, but to no evident avail.

Were someone like you in any class I’ve taught, I’d advise transferring to another major because scientific waters are far too deep for you.

…and I think …

A vast overstatement.

Reply to  Pat Frank
July 3, 2023 3:08 pm

“Were someone like you in any class I’ve taught, I’d advise transferring to another major because scientific waters are far too deep for you.”

And if you were my teacher I’d drink it.

Reply to  Bellman
July 3, 2023 5:05 pm

What a comeback—does nothing to heal your abject ignorance.

But someone gave you an upvote!

Success is yours!

Reply to  Bellman
July 3, 2023 5:21 pm

Good. Let’s leave it there.

bdgwx
Reply to  Pat Frank
July 3, 2023 8:18 am

PF: How many times have you been told that the mean uncertainty is the desired quantity.

That’s absurd. The mean uncertainty is nearly useless. It tells you very little about the uncertainty of the mean.

Reply to  bdgwx
July 3, 2023 10:55 am

The analysis required the mean uncertainty, bdgwx. It’s far from useless. See Figure 17 and Figure 19.

Realize you don’t get it, and turn the page.

bdgwx
Reply to  Pat Frank
July 3, 2023 12:46 pm

PF: The analysis required the mean uncertainty, bdgwx. it’s far from useless. See Figure 17 and Figure 19.

I’m not saying the mean uncertainty is useless as a building block for one or more steps of the analysis.

I’m saying it is useless as a proxy for the uncertainty of the global average temperature anomaly because it doesn’t tell you the dispersion of errors typical of a global average temperature (GAT). Yet you are using it as a proxy for the uncertainty of the GAT in figures 17 and 19. That is misleading at best.

I will repeat again and again. The mean uncertainty is not the same thing as the uncertainty of the mean. The mean uncertainty does not tell us the dispersion of errors typical of a mean. Only the uncertainty of the mean does that.

Reply to  bdgwx
July 3, 2023 2:13 pm

It tells you the minimum of uncertainty in every single measurement.

Uncertainty that does not average away. Uncertainty that increases when combined with the uncertainty in a normal to calculate an anomaly.

The uncertainty due to instrumental resolution and calibration should have been the first inventory of those compiling the air temperature record.

But it wasn’t. They ignored it. Perhaps they lacked the understanding of instrumental methods. Whatever.

You give no evidence of understanding instrumental methods either — of how to evaluate the resolution of instruments or the reliability of measurements.

You criticize from ignorance, bdgwx, and your partisanship evidently causes you to reject a clarifying explanation.

Reply to  bdgwx
July 3, 2023 6:29 pm

Another face-plant by the “expert”.

Reply to  Pat Frank
July 3, 2023 8:56 am

How closely you have calculated the mean, the SEM, is *NOT* the desired quantity. The accuracy of the population mean is the desired quantity. That has to be propagated from the individual members of the data set (i.e. the temperatures) onto the average. How precisely you calculate the population average (i.e. the average of the stated values without considering the measurement uncertainty of those stated values) has nothing to do with the accuracy of the average.
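The distinction between the SEM and the accuracy of the mean can be sketched (all numbers hypothetical: a fixed 0.3-unit systematic offset shared by every reading, which no amount of sampling removes):

```python
import math
import random
import statistics

random.seed(42)

TRUE_VALUE = 20.0
BIAS = 0.3          # hypothetical systematic error shared by every reading

# 10,000 readings: random noise plus the fixed systematic offset
readings = [TRUE_VALUE + BIAS + random.gauss(0, 0.5) for _ in range(10_000)]

mean = statistics.fmean(readings)
sem = statistics.stdev(readings) / math.sqrt(len(readings))

print(f"SEM: {sem:.4f}")                              # tiny, ~0.005
print(f"error of the mean: {mean - TRUE_VALUE:.3f}")  # stays near 0.3
```

The SEM shrinks as 1/√N while the mean remains about 0.3 units from the true value: precision of the calculated mean, not accuracy of the population mean.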

Reply to  Pat Frank
July 3, 2023 11:52 am

“How many times have you been told that the mean uncertainty is the desired quantity.”

You keep saying it, but I’m trying to figure out why it would be the desired quantity. What makes it desirable to you? What question are you trying to answer?

Saying you are only interested in the mean uncertainty doesn’t explain how you are calculating the uncertainty of an anomaly, or why you publish graphs showing mean temperatures along side your average uncertainties. Nor does it explain why you are questioning the fact that estimates of the uncertainty of the mean are smaller than your mean uncertainty.

Reply to  Bellman
July 3, 2023 2:05 pm

but I’m trying to figure out why

Here’s why

Reply to  Pat Frank
July 3, 2023 5:09 pm

Thank-you for leaving indelible proof that you understand none of it.

Says it all, and it still boils down to Unskilled and Unaware.

Reply to  Bellman
July 2, 2023 8:35 pm

Who did you pay to give you two upvotes?

Reply to  Bellman
July 1, 2023 10:47 am

Peculiar. Your comment evidences no understanding of resolution.

There’s no point explaining bits of a paper to someone who hasn’t the grace to study it first.

bdgwx
Reply to  Pat Frank
July 1, 2023 12:18 pm

Our issue isn’t with the LiG resolution uncertainty. We understand that it exists and is similar if not the same for each measurement.

The issue is in how you combined that uncertainty into daily, monthly, and annual means.

What you have effectively done, whether you realized it or not, is to assume r = 0 for the daily average (after the typo in equation 4 is corrected anyway) and assume r = 1 for the monthly and annual averages. Why would r = 0 for daily average and r = 1 for a monthly and annual average? That makes no sense.

Note that the law of propagation of uncertainty reduces to RMS when r = 1. See JCGM 100:2008 section 5.2.2 pg. 21 note #1 directly underneath equation (16).

Reply to  bdgwx
July 1, 2023 1:22 pm

Why would you think that section of JCGM applies?

The resolution limit is a constant of every measurement.

bdgwx
Reply to  Pat Frank
July 1, 2023 2:12 pm

PF: Why would you think that section of JCGM applies?

It’s the law of propagation of uncertainty. It is the backbone of all uncertainty analysis and is the basis from which all other uncertainty formulas are derived. It all starts there.

PF: The resolution limit is a constant of every measurement.

I know. But that actually doesn’t matter. Even if the resolution limit were different the law of propagation simplifies to RMS when r = 1 when the measurement model is y = Σ[x_i, 1, N] / N.
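The limiting cases being argued here can be checked against the general propagation formula for a mean of N equally uncertain, equally correlated inputs (a sketch; u = 0.25 and N = 30 are hypothetical):

```python
import math

def u_mean(u, N, r):
    """Propagated uncertainty of y = (x1 + ... + xN)/N when every input
    has uncertainty u and every pair has correlation coefficient r."""
    c = 1.0 / N                                  # sensitivity coefficient
    var = N * c**2 * u**2                        # uncorrelated terms
    var += 2 * (N * (N - 1) / 2) * c * c * r * u**2   # cross terms
    return math.sqrt(var)

u, N = 0.25, 30
print(u_mean(u, N, 0.0))   # r = 0: u / sqrt(N), full reduction
print(u_mean(u, N, 1.0))   # r = 1: u, no reduction at all
```

For equal input uncertainties, r = 1 returns the input uncertainty itself (which here equals its RMS), and r = 0 gives the familiar u/√N; intermediate r falls between the two.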

Reply to  bdgwx
July 1, 2023 3:03 pm

No plug for the NIST machine?

What gives?

Reply to  bdgwx
July 1, 2023 5:06 pm

of measurement uncertainty.

The resolution limit is a property of the instrument, not of the measurement.

Reply to  Pat Frank
July 1, 2023 5:24 pm

How do you convince someone who has never had to live by their measurements that resolution is part of the instrument, and that you can’t statistically increase resolution? A measurement just is what it is.

Reply to  Jim Gorman
July 1, 2023 5:58 pm

How do you convince someone who has never had to live by their measurements that resolution is part of the instrument and you can’t statistically increase resolution

You think that’s hard? Try convincing someone with only a limited understanding of statistics that the resolution of a mean does not have to be the same as that of the instruments.

Reply to  Bellman
July 1, 2023 6:17 pm

If you only knew how insane your comment is you would be embarrassed.

Here is a story. It is a parable about resolution limits. It will also explain to you the importance of Significant Figures.

http://www.ruf.rice.edu/~kekule/SignificantFigureRules1.pdf

A student once needed a cube of metal that had to have a mass of 83 grams. He knew the density of this metal was 8.67 g/mL, which told him the cube’s volume. Believing significant figures were invented just to make life difficult for chemistry students and had no practical use in the real world, he calculated the volume of the cube as 9.573 mL. He thus determined that the edge of the cube had to be 2.097 cm. He took his plans to the machine shop where his friend had the same type of work done the previous year. The shop foreman said, “Yes, we can make this according to your specifications – but it will be expensive.” “That’s OK,” replied the student. “It’s important.” He knew his friend had paid $35, and he had been given $50 out of the school’s research budget to get the job done. He returned the next day, expecting the job to be done. “Sorry,” said the foreman. “We’re still working on it. Try next week.” Finally the day came, and our friend got his cube. It looked very, very smooth and shiny and beautiful in its velvet case. Seeing it, our hero had a premonition of disaster and became a bit nervous. But he summoned up enough courage to ask for the bill. “$500, and cheap at the price. We had a terrific job getting it right — had to make three before we got one right.” “But–but–my friend paid only $35 for the same thing!” “No. He wanted a cube 2.1 cm on an edge, and your specifications called for 2.097. We had yours roughed out to 2.1 that very afternoon, but it was the precision grinding and lapping to get it down to 2.097 which took so long and cost the big money. The first one we made was 2.089 on one edge when we got finished, so we had to scrap it. The second was closer, but still not what you specified. That’s why the three tries.”

Reply to  Jim Gorman
July 1, 2023 6:38 pm

If you only knew how insane your comment is you would be embarrassed.

If you only knew how little I cared about your opinion on my sanity.

Here is a story.

Is it about calculating the uncertainty of the mean using instruments with limited resolution?

“It is a parable about resolution limits.”

It isn’t.

The moral of the story seems to be it’s better to use a rough estimate than a more precise one if it saves money.

Reply to  Bellman
July 1, 2023 7:08 pm

He thus determined that the edge of the cube had to be 2.097 cm.

Another moral is to check your workings. He actually wanted an edge of 2.123cm.

With an edge of 2.097cm he would only get 79.95 grams whilst his cheaper colleague got 80.29 grams. Both somewhat short of the required 83 grams.
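Bellman’s correction of the parable’s arithmetic can be verified directly (density and target mass taken from the story):

```python
# Check the parable's numbers: an 83 g cube of metal with density 8.67 g/mL
density = 8.67          # g/mL
target_mass = 83.0      # g

volume = target_mass / density       # ~9.573 mL, as the story says
edge = volume ** (1 / 3)             # ~2.123 cm, not the story's 2.097 cm

# Masses actually obtained with the story's two edge lengths:
mass_2097 = density * 2.097 ** 3     # ~79.95 g
mass_21   = density * 2.1 ** 3       # ~80.29 g

print(round(edge, 3), round(mass_2097, 2), round(mass_21, 2))
```

The student confused the volume (9.573 mL) with the cube of the edge; taking the cube root gives 2.123 cm, and both machined cubes fall short of 83 g.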

Reply to  Bellman
July 1, 2023 9:48 pm

Oh my, please stop! I can’t handle it!

Hehehehe

bdgwx
Reply to  Pat Frank
July 1, 2023 5:56 pm

PF: The resolution limit is a property of the instrument, not of the measurement.

I’m not sure what point you’re making here. The resolution limit just means that each measurement has uncertainty. That’s not the issue.

The issue is how that uncertainty propagates into daily, monthly, annual, etc. means.

Reply to  bdgwx
July 1, 2023 6:14 pm

Incredible.

Reply to  Pat Frank
July 1, 2023 9:50 pm

All I can do is blink in astonishment, and then think about whether to laugh or cry.

Reply to  bdgwx
July 2, 2023 11:23 am

There’s no propagation of uncertainty in eqns. 5 and 6.

They are not meant to propagate uncertainty.

They are meant to establish the root-mean-square of the resolution limit over the different time-scales.

They do exactly that,

Your view is wrong bdgwx. You’re insistently wrong.

Correlation r is meaningless when there is nothing to correlate.

bdgwx
Reply to  Pat Frank
July 2, 2023 1:55 pm

PF: They are not meant to propagate uncertainty.

That’s a problem.

PF: They are meant to establish the root-mean-square of the resolution limit over the different time-scales.

And then used as if they were the uncertainty of the mean. That’s a problem.

PF: Correlation r is meaningless when there is nothing to correlate.

Unfortunately errors are correlated, so r has to be considered at least in some capacity for an uncertainty analysis to be rigorous. For a single instrument the systematic effect is a significant factor, so r is going to be relatively high. However, since different instruments have different systematic effects, these appear at least in part as random effects when the measurements of these different instruments are aggregated. Refer to JCGM 100:2008 E3.6 regarding how systematic effects can present as random effects when there is a context switch. The debate is not whether temperature instruments have systematic or random effects. They have both. The debate is over their proportions and thus what r actually is. In the real world r is neither 0 nor 1. Fortunately the law of propagation handles both effects and any value -1 <= r <= 1.

Reply to  bdgwx
July 2, 2023 2:22 pm

Total gibberish, carefully crafted to seem like you know what you’re flapping gums about (pun intended).

Reply to  bdgwx
July 2, 2023 2:43 pm

“And then used as if they were the uncertainty of the mean. That’s a problem.”

No, they’re not. They’re used as the mean of the uncertainty. The problem is yours: idée fixe.

Unfortunately errors are correlated so r has to be considered at least in some capacity for an uncertainty analysis to be rigorous.

The context is the uncertainty arising from detection limits, not measurement errors.

You’re invariably wrong, bdgwx. And that never stops you.

bdgwx
Reply to  Pat Frank
July 2, 2023 6:37 pm

bdgwx: And then used as if they were the uncertainty of the mean. That’s a problem.

PF: No, they’re not. They’re used as the mean of the uncertainty. 

Figure 17 depicts the uncertainty of the global average temperature anomaly both as published and as you calculate.

Figure 19 depicts the uncertainty of the global average temperature anomaly both as published and as you calculate.

If your 2σ grey whiskers are not the uncertainty of the global average temperature anomaly, then why present them as such? It’s pretty obvious that everyone is considering your 2σ = 0.432 in figure 17 and 2σ = 1.94 in figure 19 as the uncertainty of the global average temperature anomalies.

Reply to  bdgwx
July 2, 2023 8:37 pm

It’s pretty obvious that everyone 

There is no “everyone” (except maybe inside your head).

Reply to  bdgwx
July 2, 2023 8:53 pm

Figure 17 presents the mean of resolution uncertainty, which applies to every single anomaly temperature.

Legend: “Grey whiskers:… the laboratory lower limit of instrumental resolution…”

Figure 19 presents the resolution mean plus the mean of systematic uncertainty, which applies to every single temperature anomaly.

Legend: “Grey whiskers: … the lower limit of laboratory resolution and the calibration mean of systematic error …”

Can it be any clearer?

Of course the uncertainties apply to the annual temperature anomalies. But they’re not the uncertainty of the mean. They’re the mean of the uncertainties.

Reply to  Pat Frank
July 3, 2023 5:31 am

“But they’re not the uncertainty of the mean. They’re the mean of the uncertainties.”

It’s hard to reprogram cultists. My guess is that they will *never* figure out the difference.

Reply to  Tim Gorman
July 3, 2023 4:22 pm

Yes, I keep forgetting how many times you keep insisting that the uncertainty of the mean is the same as mean of the uncertainty.

Reply to  Bellman
July 3, 2023 5:18 pm

You just plain can’t read. The instrumental mean uncertainty exists as an uncertainty factor in each and every observation. The uncertainty of the mean is *NOT* the same as the instrumental mean uncertainty. There are a host of other factors in the observational uncertainty. Calibration drift is a major one that is ignored by using the meme “all uncertainty is random, Gaussian, and cancels.” Microclimate differences (green grass vs brown grass – see Hubbard and Lin, 2002) play a major role. It’s why Hubbard and Lin determined that regional adjustments to temperature values are scientifically wrong. They don’t allow for microclimate variation at each station.

bdgwx
Reply to  Pat Frank
July 3, 2023 8:15 am

PF: Of course the uncertainties apply to the annual temperature anomalies. But they’re not the uncertainty of the mean. They’re the mean of the uncertainties.

Then your publication is misleading and useless at best.

Reply to  bdgwx
July 3, 2023 10:49 am

You’re not equipped to judge, bdgwx.

bdgwx
Reply to  Pat Frank
July 3, 2023 12:35 pm

PF: You’re not equipped to judge, bdgwx.

Anybody can render a judgement. Here is the justification for mine.

You said the uncertainties in your publication are not the uncertainties of the mean.

The global average temperature is a mean.

What we want to know is the uncertainty of that mean (aka uncertainty of the global average temperature).

If your publication is not providing that metric then you cannot compare it to the published uncertainties from the various datasets, because those are uncertainties of the mean (aka uncertainties of the global average temperature).

That makes your publication misleading because you present your uncertainties as being equivalent to those published by the various datasets.

That makes your publication useless because the central question is of the uncertainty of the mean (aka global average temperature).

I’ll repeat…the average of the individual measurement uncertainties is not the same thing as the uncertainty of the average of the individual measurement values. They are two completely different concepts.

Reply to  bdgwx
July 3, 2023 2:01 pm

The paper derives the mean resolution and calibration uncertainty of meteorological thermometers and sensors, bdgwx.

These are the instrumentally-relevant minimum average uncertainty in measurements.

The uncertainty from the limit of detection cannot average away because it’s a property of the instrument itself.

The uncertainty due to systematic measurement error proved to be non-random, and cannot be assumed to average away.

The total minimum of uncertainty proved to be much larger than the published official uncertainties that are purported to provide a full accounting. Oops.

Thank-you for leaving indelible proof that you understand none of it.

Reply to  bdgwx
July 3, 2023 2:15 pm

Repeat: Unskilled and Unaware you are. The characters publishing those “data sets” also lack an understanding of uncertainty.

Pat is correct.

And the GAT does not exist anywhere in the real world.

Reply to  bdgwx
July 3, 2023 4:32 pm

Are you dyslexic?

Reply to  Tim Gorman
July 3, 2023 5:10 pm

Good question.

Reply to  bdgwx
July 2, 2023 3:44 pm

You need to take some physical science classes at junior or senior level. You will deal with the resolution of devices and ultimately what you can measure with that resolution. You will learn to recognize what you can’t measure with the devices you have available. You will be taught that more and more resolution costs money. You will come to realize that averages and standard deviations cannot overrule the physical restrictions of your measurements. Believing you can is rooted in faith, not in actually doing.

Reply to  bdgwx
July 2, 2023 5:07 pm

“Unfortunately errors are correlated so r has to be considered at least in some capacity for an uncertainty analysis to be rigorous.”

Error is not uncertainty. Uncertainty is not error. Resolution is not error. Resolution is uncertainty. There is nothing to correlate when speaking of uncertainty. What would you correlate it against? Does a constant correlate with anything? A constant has no variance; how would you then calculate correlation? It would come out undefined – a division by zero!

Once again you are making an assertion that makes no sense in the real world!
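The division-by-zero point can be demonstrated (a minimal sketch; the data values are hypothetical):

```python
import math

def pearson_r(x, y):
    """Pearson correlation; returns nan when either series has zero variance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    if sx == 0 or sy == 0:
        return float("nan")   # a constant has no variance: r is undefined
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

constant = [0.25] * 10        # e.g., a fixed resolution limit
varying = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]

print(pearson_r(constant, varying))   # nan
```

The denominator contains the standard deviation of each series; a constant makes it zero, so r against a constant is formally undefined.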

Reply to  Tim Gorman
July 2, 2023 8:38 pm

He still can’t make it past the Go square, incredible.

billhaag
June 30, 2023 3:27 pm

This summation, and the subsequent comments, indicates to me, once again, that Upton Sinclair was correct when he said (in The Jungle, as I recall) “it is difficult to get a man to understand, when his salary depends upon his not understanding.”

billhaag
June 30, 2023 3:39 pm

A thought experiment that shows the absurd claims made about averages.

Consider a bathroom scale onto which one steps, records the weight, and steps off, 500,000 times. Add the weights and divide by 500,000. One will get a number with a boat-load (a quantified scientific term, no doubt) of digits to the right of the decimal point. Then, if one removes a US quarter from one’s pocket and repeats the process 500,000 times, again one will get a number with many digits to the right of the decimal point. The difference between these two weights IS NOT a fair estimate of the weight of the quarter, since the weight of the quarter is far smaller than the precision of the measurement device.
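The thought experiment can be simulated for the noise-free case (hypothetical weights and a hypothetical 0.1 kg scale resolution; note this assumes a perfectly repeatable scale with no dither noise, in which case averaging recovers nothing below the resolution):

```python
RESOLUTION = 0.1        # kg, hypothetical scale resolution
PERSON = 70.03          # kg, hypothetical true weight
QUARTER = 0.00567       # kg, mass of a US quarter (5.67 g)

def read_scale(true_weight):
    """A noiseless scale: every reading is quantized to the resolution."""
    return round(true_weight / RESOLUTION) * RESOLUTION

N = 500_000
avg_with    = sum(read_scale(PERSON + QUARTER) for _ in range(N)) / N
avg_without = sum(read_scale(PERSON) for _ in range(N)) / N

diff = avg_with - avg_without
print(diff)   # 0.0 -- not 0.00567; the quarter is invisible to the average
```

Every reading is identical, so the averages carry no information below the 0.1 kg step and their difference is exactly zero, whatever the digits after the decimal point might suggest.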

Reply to  billhaag
June 30, 2023 4:29 pm

One quarter short of a boat-load full of non-significant digits.

Reply to  billhaag
July 1, 2023 7:08 am

Bevington covers this in his book. In his section 4, under “A Warning About Statistics”, he warns about assuming that the larger the sample size the more closely you can calculate the average. An infinite number of observations would reduce the error *in* the mean (most people read that as “of the mean” when it actually has a totally different connotation) to zero. But he lists three limitations that prevent actually getting to that point: (1) those of available time and resources, (2) those imposed by statistical errors, and (3) those imposed by nonstatistical fluctuations.

He says about nonstatistical fluctuations: “It is a rare experiment that follows the Gaussian distribution beyond 3 or 4 standard deviations. More likely, some unexplained data points, or outliers, may appear in our data sample, far from the mean. Such points may imply the existence of other contaminating points within the central probability region, masked by the large body of good points.” (bolding mine, tpg)

Somehow this never gets considered in climate science. Just how many contaminating points actually lie within the central probability region of the temperature data? That is an additional source of uncertainty in any calculation that is made using the data. It all gets masked in climate science by the meme “all error is random and Gaussian and cancels leaving the stated values 100% accurate”.
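Bevington’s warning about contaminating points can be illustrated with a deterministic sketch (the data values are hypothetical):

```python
import statistics

# A well-behaved cluster of "good" measurements around 20.0
good = [19.8, 19.9, 20.0, 20.0, 20.1, 20.2, 19.9, 20.1, 20.0, 20.0]

# A few contaminating points far from the mean
contaminated = good + [23.5, 24.0, 22.8]

print(statistics.fmean(good))          # ~20.0
print(statistics.fmean(contaminated))  # pulled upward by the outliers
print(statistics.median(contaminated)) # barely moves
```

A handful of outliers drags the mean well away from the bulk of the data while the median stays put, which is one reason skewed or contaminated distributions undercut the “it all averages out” assumption.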

June 30, 2023 7:26 pm

1 July 2023
 
Dear Pat,
 
Australia occupies about the same land area as mainland USA, and for the southern hemisphere, Australian temperature data is widely represented in global surface temperature databases.   
 
Yes, I have read your paper and I found it informative from a global perspective, but not necessarily accurate from an Australian perspective. I also have a problem with the perennial dark-horse issue of instrument uncertainty/error vs accuracy, of which everybody seems to have an opinion. A single (manual) observation is a combination of both. Neither error source can be separated without a carefully conducted experiment involving multiple independent observers, observing calibrated thermometers instantaneously, within the same screen, which of course presents logistical problems. Nevertheless, I wonder why such an experiment has not been done?
 
Australian Fahrenheit meteorological thermometers had a single 1-degF index. Although encouraged to be observed to an implied precision of 0.1 degF, they were also often observed to the nearest ½ or full degF. For any daily timeseries, precision metrics can be calculated. However, the Bureau of Meteorology (BoM) database converts degF to one-decimal-place degC. Consequently, degF values do not directly back-transform (which requires degC to two decimal places). Evaluating precision metrics requires this to be taken into account.
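The back-transformation point is easy to demonstrate (a sketch; the 64 °F reading is hypothetical):

```python
def f_to_c(f, decimals):
    """Convert Fahrenheit to Celsius, rounded to a given number of decimals."""
    return round((f - 32) * 5 / 9, decimals)

def c_to_f(c):
    """Convert Celsius back to Fahrenheit."""
    return c * 9 / 5 + 32

reading_f = 64.0              # hypothetical Fahrenheit observation
c1 = f_to_c(reading_f, 1)     # 17.8  -- one decimal place, as stored
c2 = f_to_c(reading_f, 2)     # 17.78 -- two decimal places

print(round(c_to_f(c1), 3))   # 64.04  -- does not recover the original 64.0
print(round(c_to_f(c2), 3))   # 64.004 -- much closer
```

One-decimal degC storage introduces a round-trip discrepancy of up to several hundredths of a degF, which is why two decimal places are needed for a faithful back-transformation.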
 
Australian Celsius meteorological thermometers have a ½ degC index, whereas you seem to indicate US thermometers only have full-degC indices. Subdivisions affect instrument uncertainty. I note that you have not differentiated between long-stem laboratory-grade instruments, some of which may be true to 0.1 degF/C, and the less precise meteorological thermometers. While one may be used to calibrate the other, I don’t believe the two are comparable/interchangeable.
 
Early thermometers in Australia were calibrated using Kew standards imported for the purpose, usually by staff located at central observatories. While I would have to check my records and photographs, I believe a dry-bulb (non-recording) thermometer used by Henry Chamberlain Russell at Sydney Observatory in the 1880s is on display there. (It is a fabulous museum and free to enter).
 
In Australia, Tmax and Tmin are measured using recording thermometers (Hg and alcohol respectively) that lie at about 5 degrees off-horizontal on a frame within the Stevenson screen. Many people don’t realise this. Dry and wet-bulb thermometers are held vertically. Also, there were a variety of Stevenson screens in use up to about 1946. The standard screen was 230 litres in volume; these have been largely replaced by 90-litre screens and, more recently, the BoM appears to be rolling out PVC screens.
 
In my experience, including with processing and analysing data from the BoM database, while there are cascading sources of error in measurements (of which observer (eyeball) error is arguably the most important), most bias in timeseries arises from site and instrument changes. Bias falls into three categories: (i) metadata bias, which in many cases is flagrantly misused by the BoM to imply the climate has changed; (ii) systematic bias resulting from time-correlated changes across sites, metrication being an example, but there are others; and (iii) the use of correlated sites (comparators) to both detect and adjust homogenisation changepoints.
 
At the root of the problem is that BoM scientists often adjust for documented changes that made no difference to the timeseries of data, while omitting to mention (and therefore not adjusting) changes that did. This allows the BoM to create any trend they want. The second issue here is that 90-litre screens can cause the frequency of extreme temperatures to increase, independently of mean-Tmax. You use histograms in your paper, but I prefer probability density functions that can be overlaid to show where across the distribution contrasting datasets differ. Also, I like normality tests to be supported by Q-Q plots.     
     
Our BoM can cheat in subtle ways, including only holding comparator data for 90-litre screens; moving sites or changing instruments without the change(s) being reported in metadata; using parallel data to smooth transitions without publishing said data; and more. I found on a self-funded field trip that the interior of PVC screens is matt-black; the number of louvres has also increased, thus slowing ambient air exchange.

Tracking such changes requires a more subtle statistical approach than brute-force linear regression, which is a point I have made over and over.    
 
Australian T-data were never collected in the first place to track changes in the climate, but rather to describe and understand the weather. Climate change/warming became a bolt-on experiment that started in the late 1980s as they started digitising data, firstly onto cards, then punch-tape, then mag-tape, then directly via keyboard, and now directly by hard-wired instruments (not necessarily in that order). Thus, a lot of patching and adjusting has happened to both raw data and homogenised data, which is pretty tricky to identify/unravel. As most groups that produce global timeseries use similar homogenisation methods (SST included), I am surprised you did not mention homogenisation as a source of bias, although it was not a focus of the paper.
  
I did a preliminary study on SST in 2021, which I published at http://www.bomwatch.com.au and also at WUWT (https://wattsupwiththat.com/2021/08/26/great-barrier-reef-sea-surface-temperature-no-change-in-150-years/).

As part of that project, to gain an understanding of the issues of measuring intake temperatures, I visited a WW-II corvette (HMAS Castlemaine), which is moored at Williamstown near Melbourne. As it was a Friday, they kindly gave me a ‘private’ tour (I made a donation). I clambered down into the engine-room and searched around in vain for signs of them having measured ‘intake-temperature’. As the guide explained (he was an ex-submariner), with everything banging around (two 3-cylinder triple-expansion steam engines, one each side) in such a highly dangerous tight space, and all gauges being analogue, it was highly unlikely that it would be possible to measure anything ‘accurately’. Same on an Oberon sub, except they did have to have some “rough” idea of the temperature outside in order to negotiate the thermocline (rough being the operative word). Standing on the deck-plates about 30 cm higher than the inside of the hull, a mark on a stanchion indicated sea level outside was about eye level – about 2 m in total depth from inside the hull.
 
Having drawn a blank, I then went to Brisbane, to the Maritime Museum, and managed to smile and guile my way into the engine room of the former late-WWII frigate HMAS Diamantina. Two levels below decks (narrower ladders than the corvette), roomier but also more moving parts and more complete in terms of bits and pieces. Still no gauge. Same story. With three larger pistons each side of the narrow walkway thumping up and down, the whole machine was pretty lively and noisy when underway. The water intake of both vessels was actually near the keel, so air would not be sucked in in a rough sea.
 
So how they managed to derive “accurate” numbers from analogue gauges under such conditions I have no idea. I also had another dataset from 1876 for a paddlewheel cable ship between Sydney and Darwin. However, compared with contemporary data there was a constant positive offset. I suspect (but cannot confirm) they used bucket samples collected mid-ship (not at the bows), and that data were biased by proximity of the boiler etc; otherwise the thermometer itself may have been biased. (The data I used in the GBR study were calibrated to the Kew standard at Sydney Observatory and were collected by Henry Chamberlain Russell).
 
I have access to much more data which is not digitised. Some from navy ships, and some from Burns Philp ships servicing the islands and Papua. The RAN undertook research in the 1960s into the effect of the thermocline on sonar. They chased submarines around and so on, but little of what they found is in published or digitised form. CSIRO also operated with the Navy out of Townsville, I think in the 1950s, but I would have to check my records to be sure. While they could draw off a sample, I don’t think they could safely place a LiG thermometer in an engine intake and expect it not to break, or for the mercury column to remain intact for very long. Consequently, I take all the SST data with a shovel of salt.
 
These are just thoughts about the paper, not criticisms, neither is it a review.
 
From having observed the weather for almost a decade from 1971; having analysed some 300 medium- and long-term weather station datasets from across Australia, including developing protocols for same; having used other sources of information such as aerial photographs, maps and plans, once-secret Royal Australian Air Force plans and documents, and material from museums and archives; and having travelled around looking at (and photographing) sites in NSW, Vic, WA and Qld, and clambered around in ships, I am confident that no evidence exists that suggests the world is warming. I am also convinced that SST data, and satellite data in particular, have been fiddled to show warming that does not exist. I am therefore not surprised at the outcome of Pat Frank’s paper.
 
Yours sincerely,
 
Dr Bill Johnston
 
http://www.bomwatch.com.au       
 

Reply to  Bill Johnston
June 30, 2023 8:38 pm

“The standard screen was 230 litres in volume, these have been largely replaced by 90-litre screens” should read 60-litre screens!

June 30, 2023 8:06 pm

Pat,

You have hit a 450 ft home run. It is great to finally see a useful paper for analyzing measurements.

Any field so dependent on measuring devices should have done this work many decades ago.

CONGRATULATIONS👏🤠🌻

Reply to  Jim Gorman
June 30, 2023 11:33 pm

Thanks, Jim. You and Tim have been real stalwarts.

June 30, 2023 8:38 pm

The complete change in air temperature between 1900-2010 is then 0.94±1.92 C.

If put in the more conventional format, where the precision of the uncertainty term matches the last significant figure of the nominal value, that would round to “1±2 deg. C.” In other words, one can only justifiably state the change to the nearest order of magnitude, with the possibility that the change could be positive or negative. That is hardly the 2 or 3 significant figures to the right of the decimal point commonly cited for claims about the “hottest year evah!”
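[The rounding convention described here can be sketched in a few lines. This is a minimal illustration, not anyone's published code, assuming the common rule of rounding the uncertainty to one significant figure and then rounding the nominal value to the same decimal place.]

```python
import math

def round_to_uncertainty(value, uncertainty):
    """Round the uncertainty to 1 significant figure, then round the
    nominal value to the same decimal place."""
    exponent = math.floor(math.log10(abs(uncertainty)))
    factor = 10 ** exponent
    u = round(uncertainty / factor) * factor
    v = round(value / factor) * factor
    return v, u

# The comment's example: 0.94 +/- 1.92 C rounds to 1 +/- 2 C
print(round_to_uncertainty(0.94, 1.92))  # (1, 2)
```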

Reply to  Clyde Spencer
July 1, 2023 2:56 am

So what just happened?

I spent half a day that I could have been doing something else, considering issues raised by Pat Frank. Then I provided multiple perspectives related to Australian T-data that did not dispute his findings.

Aside from insult-trading and finger-pointing, why did the whole thing (including my time) suddenly evaporate down the toilet like a raw-prawn Bali-sandwich? Any perspective will do.

Yours sincerely,

Dr Bill Johnston

http://www.bomwatch.com.au

Reply to  Clyde Spencer
July 1, 2023 7:37 am

It’s like significant figure rules weren’t invented as far as climate science goes.

Reply to  Tim Gorman
July 1, 2023 9:29 am

And they don’t care, at all. Anything to keep the global warming hoax alive.

Reply to  Clyde Spencer
July 1, 2023 10:33 am

You’re right, Clyde. The reported numbers are pretty much numerical precision. The accuracy expression is as you stated, 1±2 C. Zero information about rate or magnitude.

For others, just to add that the “±2” is an ignorance width, where no information is available. It doesn’t mean the change in air temperature since 1900 could have been +3 C or -1 C. It just says no one knows the physically real magnitude.

By all independent information (growing season, etc.), the warming since 1900 has been real and mild. See We Live in Cold Times.

Reply to  Pat Frank
July 1, 2023 1:40 pm

The word “unknown” doesn’t seem to be understood in climate science. Unknown means “you don’t know and can never know”. It can be anywhere inside the uncertainty interval and it can also be GREATER THAN or LESS THAN the interval as well!

Reply to  Tim Gorman
July 1, 2023 2:14 pm

Unknown means “you don’t know and can never know”.

Odd definition. So if I don’t know tomorrow’s football results, I can never know them?

It can be anywhere inside the uncertainty interval and it can also be GREATER THAN or LESS THAN the interval as well!

Gosh, almost as if that 95% in a 95% confidence interval has some meaning.

Reply to  Bellman
July 1, 2023 3:05 pm

Odd definition. So if I don’t know tomorrow’s football results, I can never know them?

Just paste a sign on your forehead that says: Clueless Am I

Reply to  karlomonte
July 1, 2023 3:28 pm

Or maybe one saying “It was a joke. Lighten up.”

Reply to  Bellman
July 2, 2023 5:13 am

Odd definition. So if I don’t know tomorrow’s football results, I can never know them?”

Does the football score have an uncertainty interval? Only a fool would make this kind of comparison.

“Gosh, almost as if that 95% in a 95% confidence interval has some meaning.”

It’s why statistics can NEVER give an exact answer, cannot increase resolution, and cannot decrease uncertainty if systematic uncertainty exists. Things you simply can’t bring yourself to admit!

Reply to  Bellman
July 2, 2023 5:30 am

Football scores are COUNTING NUMBERS, not measurements. In essence they are constants without uncertainty.

You are displaying your ignorance of scientific physical measurements.

Reply to  Jim Gorman
July 2, 2023 7:59 am

Here we go again. I make a simple example to refute what seems like a minor incorrect claim. And now the Gorman bros are going to spend all their time spectacularly missing the point, and arguing the minutiae of a point that was never made.

Tim says unknown means you can never know something. I simply point out that isn’t always correct, using what seemed like an amusing counterexample of football scores. Rather than admit the original statement may have been badly worded, we are now arguing that football scores are not the same thing as temperatures.

I know. That wasn’t the point. The point was to give an example of something that might be unknown at one point of time but could be known at a different point. At no point have I ever claimed it’s possible to know in an absolute sense what the temperature is or how big your stud wall is.

Reply to  Bellman
July 2, 2023 4:20 pm

Tim says unknown means you can never know something.”

You are a total aZZ. A measurement is not a football score yet you are trying to conflate the two!

Is that the best you have to offer in rebuttal? It’s perfect proof that you have no clue as to what is being discussed!

Reply to  Tim Gorman
July 2, 2023 4:51 pm

A measurement is not a football score yet you are trying to conflate the two!”

And so the distraction continues.

  1. I think you could say it is a measurement.
  2. I didn’t say it was a measurement. My point was about the word “unknown”, not about measurements.

Reply to  Bellman
July 2, 2023 5:16 pm

What measurement device do you use to measure a football score? I’m really interested in what you think that device might be.

You compared the football score to a measurement with uncertainty. *YOU* made the comparison, no one else.

Now you are trying to back out of something that you *KNOW* was wrong. Be a man and admit it.

Reply to  Tim Gorman
July 2, 2023 5:46 pm

Counting. Or if that’s beyond you, try using your fingers or a scoreboard.

The actual measurement is going to depend on the rules of the game and the on field equipment.

You compared the football score to a measurement with uncertainty.

No I did not. Maybe you’re not so hot on this English malarkey yourself. I’ve explained this enough times that you should have got it by now. I said it was an example of something that might be uncertain, but would not always be uncertain. Nothing about it being a measurement.

I see you still haven’t taken the hint about maybe not wasting your time obsessing over what was a very light incidental remark.

Be a man and admit it.

Is lying the mark of a man? No I’m not going to admit something that is patently untrue. There are things that can be unknown at some points of time and known at a later date. Your claim that

Unknown means “you don’t know and can never know”.

is wrong. This has also got nothing to do with the topic, so unless you say something even more stupid I will try to ignore any more discussion on the subject.

Reply to  Bellman
July 2, 2023 5:54 pm

Horse hockey. Football scores have a defined value, a constant. There is no uncertainty.

The fact that you don’t know the future is more applicable to linear regressions.

Reply to  Jim Gorman
July 2, 2023 6:23 pm

That’s at least 6 angry comments objecting to my off-the-cuff claim that it was possible to know a football score.

Football scores have a defined value, a constant.”

OK. I’m not that much of a sports fan. I didn’t realize that football scores never changed.

There is no uncertainty.

Yes, that’s why I said it wasn’t unknown.

The fact that you don’t know the future is more applicable to linear regressions.

Not this again. It’s almost as if you want to detract from Pat Frank’s masterpiece.

Reply to  Bellman
July 2, 2023 8:40 pm

Poor baby, your pseudoscience trendology GAT hoax agenda is showing again.

Reply to  Bellman
July 3, 2023 3:57 am

Counting. Or if that’s beyond you, try using your fingers or a scoreboard.”

You don’t count points. You count scores! And those scores are weighted with different point values depending on how they are made!

“The actual measurement is going to depend on the rules of the game and the on field equipment.”

You aren’t MEASURING anything! You don’t get any points for being close to the end zone or for bouncing the puck off the goal post support or for just missing the net in soccer.

No I did not.”

Then why did you bring up football as a comparison to actually measuring something? You had to resort to the crutch that you weren’t talking about actual measuring but to the metaphysical concept of “unknown”.

Uncertainty is an interval of “unknown and unknowable”. The fact that you simply can’t admit that is just more of your meme that all uncertainty is random, Gaussian, and cancels. You can’t escape that false belief no matter how hard you try.

July 1, 2023 5:37 am

Thank you, Pat Frank! I read and downloaded the paper and the supplemental information. This work is so very much appreciated for its thorough penetration of the core issues arising from the records of instrument readings. Congratulations on its publication.

I have two questions from the paper:
1) In 4.6: “From  Figure 19 , the mean global air-temperature-record anomaly over the 20th century (1900–1999) is 0.74 ± 1.94 °C.” 
It appears that 0.74C here is 100 times the slope (which would be deg C per year) of the linear regression of the “Mean Published Anomaly” column of values from 1900 through 1999 in the file named “Uncertainty in Global Air Temperature.txt”.
Is this correct?

2) In 5.1: “The compilation of land- and sea-surface LiG uncertainty yield a 1900–2010 global air-temperature record anomaly of 0.86 ± 1.92 °C (2σ), which renders impossible any conclusion regarding the rate or magnitude of climate warming since 1850 or earlier.”
It appears that 0.86C here is 100 times the slope (which would be deg C per year) of the linear regression of the “Mean Published Anomaly” column of annual values from 1900 through 2010 in the file named “Uncertainty in Global Air Temperature.txt”.
Is this also correct?

One more thing from the paper: “This research received no external funding.” A gift of gem quality, in my view.

Reply to  David Dibbell
July 1, 2023 10:26 am

Hi David — thanks for your kind words. And both kudos and thanks for your interest especially in downloading the data and working with it.

In answer to your questions. 1), yes, and 2) yes. The numbers are each the 100-year trend, which seemed like the most readily grasped metric.
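[The "100 times the slope" metric confirmed here can be sketched as follows. This is a hypothetical illustration with a synthetic anomaly series, not the paper's data or code: fit an ordinary-least-squares line to annual anomalies and multiply the per-year slope by 100.]

```python
# Sketch of the 100-year trend metric: OLS slope (deg C per year) of the
# annual anomaly series, multiplied by 100 years. The anomaly values
# below are synthetic, constructed to warm at exactly 0.0074 C/yr.

def century_trend(years, anomalies):
    """Return the least-squares slope (deg C / year) times 100 years."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
    den = sum((x - mean_x) ** 2 for x in years)
    slope = num / den
    return slope * 100.0

years = list(range(1900, 2000))
anoms = [0.0074 * (y - 1900) for y in years]  # synthetic, not real data
print(round(century_trend(years, anoms), 2))  # 0.74
```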

July 1, 2023 12:03 pm

To the pseudoscience GAT hoaxers currently making fools of themselves in this thread (you know who you are):

None of you could offer anything remotely close to a refutation of the LIG thermometer problems and issues painstakingly researched and documented in Pat Frank’s paper as listed in 5.1 Major Findings, such as Joule-drift (I won’t list all 15 bullet points). This includes the magnitudes of the uncertainties — zip, zero, nada. The paper represents a huge amount of work and effort, including 284 references! If any of you lot had any inkling of what it takes to just assemble a collection of references for a paper, you might have thought twice before jumping into the deep end, but no. Y’all have to keep those teeny teensy leetle GAT uncertainty numbers alive, at any cost.

To recap a bit:

Nitpick Nick — “Buh, buh, buh errors all cancel!!”

Keep chanting this to yourself, Stokes, maybe someday you might even believe it yourself.

bgxyz-whatever —
“You put it in a ‘predatory journal’!” (WTH this means).
“You used RMS!”

Beyond pathetic.

bellboy — “You think UAH uncertainty is way way way too big!”, along with more inanities about “RMS”.

big ol’ oil blob — high-fived the journal idiocy.

mosh — I won’t repeat his racist garbage.

And you wonder why no one takes you bankrupt yappers seriously.

July 2, 2023 7:16 am

Pat, one question: does Joule-drift bulb contraction cover or include what I remember (from a long time ago now) being called glass creep? Equivalent to old window panes becoming thicker at the bottom over time?

Reply to  karlomonte
July 2, 2023 10:27 am

“Equivalent to old window panes becoming thicker at the bottom over time?”

They don’t. It’s a myth.

Reply to  karlomonte
July 2, 2023 12:07 pm

KM, the glass-creep of windows (if that really happens) would result from gravity acting on an extremely viscous fluid (glass).

Joule-drift results from the relaxation of residual strain left in thermometer glass after hot manufacture and cooling.

Scientific glassblowers typically anneal their glass apparatus at just below the softening temperature, for a day or more, to let most of that strain work its way out.

Both processes follow from glass as an amorphous solid that has not achieved internal thermodynamic equilibrium.

Reply to  Pat Frank
July 2, 2023 12:50 pm

Thanks, Pat.

Reply to  karlomonte
July 2, 2023 8:39 pm

Happy to oblige, KM. You’re a good guy.

Reply to  karlomonte
July 3, 2023 1:11 am

Noooo no, no. I learnt that old people looking through windows were thicker in the bottom!

Cheers,

b.

Reply to  Bill Johnston
July 3, 2023 1:12 am

Maybe it was old creeps … looking through windows …

July 5, 2023 3:43 pm

“The lengthened growing season, the revegetation of the far North, and the poleward migration of the northern tree line provide evidence of a warming climate. However, the rate or magnitude of warming since 1850 is not knowable.”

“Provide evidence”, but not proof of warming.

It is more likely that the change reflects plants thriving on greater access to CO₂, a response identified in studies as an increased ability of plants to weather temperature extremes and drought.

Reply to  ATheoK
July 5, 2023 3:59 pm

It’s quite likely a combination of both, which makes it difficult, if not impossible, to identify the magnitude of each factor without more information than temperature alone can give us.