The Verdict of Instrumental Methods

Pat Frank

LiG Metrology, Correlated Error, and the Integrity of the Global Surface Air Temperature Record has passed peer-review and is now published in the MDPI journal, Sensors (pdf).

The paper complements Anthony’s revolutionary Surface Stations project, in that the forensic analysis focuses on ideally located and maintained meteorological sensors.

The experience at Sensors was wonderfully normal. Submission was matter-of-fact. The manuscript editor did not flee the submission. The reviewers offered constructive criticisms. There was no defense of a favored narrative. There was no dismissive language.

MDPI also has an admirable approach to controversy. The editors “ignore the blogosphere.” The contest of ideas occurs in the journal, in full public view, and critical comment must pass peer-review. Three Huzzahs for MDPI.

LiG Metrology… (hereinafter LiG Met.) returns instrumental methods to the global air temperature record. It is a start-at-rock-bottom forensic examination of the liquid-in-glass (LiG) thermometer, 40 years overdue.

The essay is a bit long and involved. But the take-home message is simple:

  1. The people compiling the global air temperature record do not understand thermometers.
  2. The rate or magnitude of climate warming since 1900 is unknowable.

Global-scale surface air temperature came into focus with the 1973 Science paper of Starr and Oort, Five-Year Climatic Trend for the Northern Hemisphere. By 1983, the Charney Report, Carbon Dioxide and Climate, was 4 years past, Stephen Schneider had already weighed in on CO2 and climate danger, Jim Hansen was publishing on his climate models, ice core CO2 was being assessed, and the trend in surface air temperature came to focused attention.

Air temperature had become central. What was its message?

To find out, the reliability of the surface air temperature record should have been brought to the forefront. But it wasn’t. Air temperature measurements were accepted at face value.

Errors and uncertainty were viewed as external to the instrument, a view that persists today.

LiG Met. makes up the shortfall, 40 years late, starting with the detection limits of meteorological LiG thermometers.

The paper is long and covers much ground. This short summary starts with an absolutely critical concept in measurement science and engineering, namely:

I. Instrumental detection limits: The detection limit registers the magnitude of physical change (e.g., a change in temperature, ΔT) to which a given instrument (e.g., a thermometer) is able to reliably respond.

Any read-out below the detection limit has no evident physical meaning because the instrument is not reliably sensitive to that scale of perturbation. (The subject is complicated; see here and here.)

The following Table provides the lower limit of resolution — the detection limits — of mercury LiG 1C/division thermometers, as determined at the National Institute of Standards and Technology (NIST).

[Table: NIST 1C/division Mercury LiG Thermometer Calibration Resolution Limits (2σ, ±C); the values are tabulated in LiG Met. Table 1.]

Notes: a. root-sum-square of resolution and visual repeatability. b. Uncertainty in an anomaly is the root-sum-square of the uncertainties in the differenced magnitudes.

These are the laboratory ideal lower limits of uncertainty one should expect in measurements taken by a careful researcher using a good-quality LiG 1C/division thermometer. Measurement uncertainty cannot be less than the lower limit of instrumental response.

The NASA/GISS air temperature anomaly record begins at 1880. However, the largest uncertainties in the modern global air temperature anomaly record are found in the decades 1850-1879 published by HadCRU/UKMet and Berkeley BEST. The 2σ root-mean-square (RMS) uncertainty of their global anomalies over 1850-1880 is: HadCRU/UKMet = ±0.16 C and Berkeley BEST = ±0.13 C. Graphically:

Figure 1: The LiG detection limit and the mean of the uncertainty in the 1850-1880 global air temperature anomalies published by the Hadley Climate Research Unit of the University of East Anglia in collaboration with the UK Meteorological Office (HadCRU/UKMet) and by the Berkeley Earth Surface Temperature project (Berkeley BEST).

That is, the published uncertainties are about half the instrumental lower limit of detection — a physical impossibility.

The impossibility only increases with the decrease of later uncertainties (Figure 6, below). This strangeness shows the problem that ramifies through the entire field: neglect of basics.

Summarizing (full details and graphical demonstrations in LiG Met.):

Non-linearity: Both mercury and especially ethanol (spirit) expand non-linearly with temperature. The resulting error is small for mercury LiG thermometers, but significant for the alcohol variety. In the standard surface station prior to 1980, an alcohol thermometer provided Tmin, which puts 2σ = ±0.37 C of uncertainty into every daily land-surface Tmean. Temperature error due to non-linearity of response is uncorrected in the historical record.

Joule-drift: Significant bulb contraction occurs with aging of thermometers manufactured before 1885, and is most rapid in those made with lead-glass. Joule-drift puts a spurious 0.3-0.7 C/century warming trend into a temperature record. Figure 4 in LiG Met. presents the Pb X-ray fluorescence spectrum of a 1900-vintage spirit meteorological thermometer purchased by the US Weather Bureau. Impossible-to-correct error from Joule drift makes the entire air temperature record prior to 1900 unreliable.

The Resolution message: All of these sources of error and uncertainty — detection limits, non-linearity, and Joule drift — are inherent to the LiG thermometer and should have been evaluated right at the start, well before any serious attempt to construct a record of historical global surface air temperature. However, they were not. They were roundly neglected. Perhaps most shocking is the professional neglect of the instrumental detection limit.

Figure 2 shows the impact of the detection limit alone on the 1900-1980 global air temperature anomaly record.

Land surface temperature means include the uncorrected error from non-linearity of spirit thermometers. Sea surface temperatures (SSTs) were measured with mercury LiG thermometers only (no spirit LiG error). The resolution uncertainty for the global air temperature record prior to 1981 was calculated as,

2σ_T = 1.96 × sqrt[0.7 × (SST resolution)² + 0.3 × (LST resolution)²]

= 1.96 × sqrt[0.7 × (0.136)² + 0.3 × (0.195)²] = ±0.306 C, where LST is Land-Surface Temperature.

But global air temperature change is reported as an anomaly series relative to a 30-year normal. Differencing two values requires adding their uncertainties in quadrature. The resolution of a LiG-based 30-year global temperature normal is also 2σ = ±0.306 C. The resolution uncertainty in a LiG-based global temperature anomaly series is then

2σ = 1.96 × sqrt[(0.156)² + (0.156)²] = ±0.432 C, where 0.156 C is the 1σ equivalent of ±0.306 C (0.306/1.96).
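
For readers who want to check the arithmetic, here is a minimal sketch (plain Python; the variable names are mine, not the paper's) that reproduces the two numbers above from the quoted 1σ resolution components.

    import math

    # 1-sigma LiG resolution components quoted above (degrees C)
    sst_res = 0.136   # mercury LiG, sea surface
    lst_res = 0.195   # land surface, spirit-thermometer non-linearity included

    # 2-sigma resolution limit of a global (70% ocean / 30% land) annual mean
    two_sigma_T = 1.96 * math.sqrt(0.7 * sst_res**2 + 0.3 * lst_res**2)
    print(round(two_sigma_T, 3))        # 0.306

    # An anomaly differences the annual mean against a 30-year normal, so the
    # two (equal) 1-sigma resolution uncertainties add in quadrature.
    one_sigma_T = round(two_sigma_T / 1.96, 3)           # 0.156
    two_sigma_anom = 1.96 * math.sqrt(2 * one_sigma_T**2)
    print(round(two_sigma_anom, 3))     # 0.432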

Figure 2: (Points), 1900 – 1980 global air temperature anomalies for: panel a, HadCRUT5.0.1.0 (published through 2022); panel b, GISTEMP v4 (published through 2018); and panel c, Berkeley Earth (published through 2022). Red whiskers: the published 2σ uncertainties. Grey whiskers: the uniform 2σ = ±0.432 C uncertainty representing the laboratory lower limit of instrumental resolution for a global average annual anomaly series prior to 1981.

In Figure 2, the mean of the published anomaly uncertainties ranges from 3.9× smaller than the LiG resolution limit at 1900, to 5× smaller at 1950, and nearly 12× smaller at 1980.

II. Systematic error enters into global uncertainty. Is temperature measurement error random?

Much of the paper tests the assumption of random measurement error, an assumption that is absolutely universal in global warming studies.

LiG Met. Section 3.4.3.2 shows that differencing two normally distributed data sets produces another normal distribution. This is an important realization. If measurement error is random, then differencing two sets of simultaneous measurements should produce a normally distributed error difference set.
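
A toy illustration of that test logic, with made-up numbers rather than data from the paper: if two sets of simultaneous measurements carry strictly random (normal) error, a Shapiro-Wilk test applied to their difference should not reject normality.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Simultaneous "measurements" of the same quantities, each carrying
    # strictly random (normal) error of the same size
    true_temps = rng.uniform(10.0, 25.0, size=2000)
    set_a = true_temps + rng.normal(0.0, 0.2, size=2000)
    set_b = true_temps + rng.normal(0.0, 0.2, size=2000)

    diff = set_a - set_b           # the error difference set
    w, p = stats.shapiro(diff)     # Shapiro-Wilk normality test
    print(w, p)                    # with random error, normality is typically not rejected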

II.1 Land surface systematic air temperature measurement error is correlated: The systematic calibration error of proximately located temperature sensors turns out to be pair-wise correlated.

Matthias Mauder and his colleagues published a study of the errors produced within 25 naturally ventilated HOBO sensors (gill-type shield, thermistor sensor), relative to an aspirated Met-One precision thermistor standard. Figure 3 shows one pair-wise correlation of the 25 in that experimental set, with correlation r = 0.98.

Figure 3: Histogram of error in HOBO number 14 (of 25). The StatsKingdom online Shapiro-Wilk normality test (2160 error data points) yielded: 0.979, p < 0.001, statistically non-normal. Inset: correlation plot of measurement error — HOBO #14 versus HOBO #15; correlation r = 0.98.

High pair-wise correlations were found between all 25 HOBO sensor measurement error data sets. The Shapiro-Wilk test has the greatest statistical power to indicate or reject the normality of a data distribution, and it showed that every single measurement error set was non-normal.

LiG Met. and the Supporting Information provide multiple examples of independent field calibration experiments that produced pair-wise correlated systematic sensor measurement errors. Shapiro-Wilk tests of the calibration error data sets invariably indicated non-normality.

Inter-sensor correlation in land-surface systematic measurement field calibration error, along with non-normal distributions of difference error data sets, together falsify the general assumption of strictly random error. No basis in evidence remains to permit diminishing uncertainty as 1/√N.
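
A made-up numerical illustration (not data from the paper) of why the correlation matters: when part of the measurement error is shared across sensors, averaging N of them barely reduces it.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors, n_trials = 25, 5000

    # Hypothetical sensor errors: a component common to all sensors (correlated,
    # systematic) plus a small independent part. The magnitudes are illustrative.
    shared = rng.normal(0.0, 0.3, size=(n_trials, 1))            # common error
    private = rng.normal(0.0, 0.1, size=(n_trials, n_sensors))   # independent error
    errors = shared + private

    error_of_mean = errors.mean(axis=1)   # error remaining after averaging 25 sensors
    print(errors[:, 0].std())             # single sensor: ~0.32
    print(error_of_mean.std())            # 25-sensor mean: ~0.30, not 0.32/sqrt(25) = 0.06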

II.2.1 Sea-Surface Temperature measurement error is not random: Differencing simultaneous bucket-bucket and bucket-engine-intake measurements again yields the measurement error difference, Δe₂,₁. If measurement error is random, a large SST difference data set, Δe₂,₁, should have a normal distribution.

Figure 4 shows the result of a World Meteorological Organization project, published in 1972, which reported differences of 13511 simultaneously acquired bucket and engine-intake SSTs from all manner of ships, at low and high N,S latitudes and under a wide range of wind and weather conditions. The required normal distribution is nowhere in evidence.

Figure 4: Histogram of differences of 13511 simultaneous engine-intake and bucket SST measurements during a large-scale experiment carried out under the auspices of the World Meteorological Organization. The red line is a fit using two Lorentzians and a Gaussian. The dashed line marks the measurement mean.

LiG Met. presents multiple independent large-scale bucket/engine-intake difference data sets of simultaneously measured SSTs. The distributions were invariably non-normal, demonstrating that SST measurement errors are not random.

II.2.2 The SST measurement error mean is unknown: The semivariogram method, taken from geostatistics, has been used to derive the shipboard SST error mean, ±eₘₑₐₙ. The assumption again is that SST measurement error is strictly random, but with a mean offset.

Subtract eₘₑₐₙ, and one gets a normal distribution with a mean of zero and an uncertainty diminishing as 1/√N.

However, LiG Met. Section 3.4.2 shows that the semivariogram analysis doesn’t produce ±eₘₑₐₙ, but instead ±0.5Δeₘₑₐₙ, half the mean of the error difference. Subtraction does not leave a mean of zero.

Conclusion about SST: II.2.1 shows measurement error is not strictly random. II.2.2 shows ignorance of the error mean. No grounds remain to diminish SST uncertainty as 1/√N.

II.2.3 The SST is unknown: In 1964 (LiG Met. Section 3.4.4) Robert Stevenson carried out an extended SST calibration experiment aboard the VELERO IV oceanographic research vessel. Simultaneous high-accuracy SST measurements were taken from the VELERO IV and from a small launch put out from the ship.

Stevenson found that the ship so disturbed the surrounding waters that the SSTs measured from the ship were not representative of the physically true water temperature (or air temperature). No matter how accurate, the bucket, engine-intake, or hull-mounted probe temperature measurement did not reveal the true SST.

The only exception was an SST obtained using a prow-mounted probe, and only when the measurement was made with the ship heading into the wind “or cruising downwind at a speed greater than the wind velocity.”

Stevenson concluded, “One may then question the value of temperatures taken aboard a ship, or from any large structure at sea. Because the measurements vary with the wind velocity and the orientation of the ship with respect to the wind direction no factor can be applied to correct the data. It is likely that the temperatures are, therefore, useless for any but gross analyses of climatic factors, excepting, perhaps, those taken with a carefully-oriented probe.”

Stevenson’s experiment may be the most important investigation ever carried out of the veracity of ship-derived SSTs. However, the experiment drew scant notice. It was never repeated or extended, and the reliability question the VELERO IV experiment raised about SSTs has generally been by-passed. The paper shows only 5 citations since 1964.

Nevertheless, ship SSTs have been used to calibrate satellite SSTs, probably through 2006, which means that earlier satellite SSTs are not independent of the large uncertainty in ship SSTs.

III. Uncertainty in the global air temperature anomaly trend: We now know that the assumption of strictly random measurement error in LSTs or SSTs is unjustified. Uncertainty cannot be presumed to diminish as 1/√N.

III.1 For land-surface temperature, uncertainty was calculated from:

  • LiG resolution (detection limits, visual repeatability, and non-linearity).
  • systematic error from unventilated CRS screens (pre-1981).
  • interpolation from CRS to MMTS (1981-1989).
  • unventilated Min-Max Temperature System (MMTS) sensors (1990-2004).
  • Climate Reference Network (CRN) sensor self-heating error (2005-2010).

Over 1900-1980, resolution uncertainty was combined in quadrature with the uncertainty from systematic field measurement error, yielding a total RMS uncertainty 2σ = ±0.57 C in LST.

III.2 For sea-surface temperature, uncertainty was calculated from Hg LiG resolution combined with the systematic uncertainty means of bucket, engine-intake, and bathythermograph measurements, scaled by their annual fractional contributions since 1900.

SST uncertainty varied with the annual change in the fractions of bucket, engine-intake, and bathythermograph measurements. Engine-intake errors dominated.

Over 1900-2010 uncertainty in SST was RMS 2σ = ±1.38 C.

III.3 Global: Annual uncertainties in land surface and sea surface again were combined as:

2σ_T = 1.96 × sqrt[0.7 × (SST uncertainty)² + 0.3 × (LST uncertainty)²]

Over 1900-2010 the RMS uncertainty in global air temperature was found to be 2σ = ±1.22 C.
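
As a rough cross-check (a sketch only; the paper combines the uncertainties year by year before taking the RMS), the quoted LST and SST figures can be combined with the same 70/30 weighting:

    import math

    sst_2s = 1.38   # RMS 2-sigma SST uncertainty, 1900-2010 (quoted above)
    lst_2s = 0.57   # RMS 2-sigma LST uncertainty (quoted above)

    global_2s = math.sqrt(0.7 * sst_2s**2 + 0.3 * lst_2s**2)
    print(round(global_2s, 2))   # ~1.20; close to, but not exactly, the quoted
                                 # ±1.22 C, which is the RMS of the annual values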

The uncertainty in an anomaly series is the uncertainty in the air temperature annual (or monthly) mean combined in quadrature with the uncertainty in the selected 30-year normal period.

The RMS 2σ uncertainty in the NASA/GISS 1951-1980 normal is ±1.48 C, and is ±1.49 C in the HadCRU/UEA and Berkeley BEST 1961-1990 normal.

The 1900-2010 mean global air temperature anomaly is 0.94 C. Using the NASA/GISS normal, the overall uncertainty in the 1900-2010 anomaly is,

2σ = 1.96 × sqrt[(0.755)² + (0.622)²] = ±1.92 C, where 0.755 C and 0.622 C are the 1σ equivalents of the ±1.48 C normal and the ±1.22 C annual uncertainties.

The complete change in air temperature between 1900 and 2010 is then 0.94 ± 1.92 C.
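
The ±1.92 C figure follows directly from the two quoted 2σ values; a minimal check (variable names mine):

    import math

    # 1-sigma equivalents of the 2-sigma values quoted above
    sigma_annual = 1.22 / 1.96   # ~0.622 C, RMS 1900-2010 global annual mean
    sigma_normal = 1.48 / 1.96   # ~0.755 C, NASA/GISS 1951-1980 normal

    two_sigma_anomaly = 1.96 * math.sqrt(sigma_annual**2 + sigma_normal**2)
    print(round(two_sigma_anomaly, 2))   # 1.92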

Figure 5 shows the result applied to the annual anomaly series. The red whiskers are the 2σ quadratic annual combined RMS of the three major published uncertainties (HadCRU/UEA, NASA/GISS and Berkeley Earth). The grey whiskers include the combined LST and SST systematic measurement uncertainties. LiG resolution is included only through 1980.

The lengthened growing season, the revegetation of the far North, and the poleward migration of the northern tree line provide evidence of a warming climate. However, the rate or magnitude of warming since 1850 is not knowable.

Figure 5: (Points), mean of the three sets of air temperature anomalies published by the UK Met Office Hadley Centre/Climatic Research Unit, the Goddard Institute for Space Studies, or Berkeley Earth. Each anomaly series was adjusted to a uniform 1951-1980 normal prior to averaging. (Red whiskers), the 2σ RMS of the published uncertainties of the three anomaly records. (Grey whiskers), the 2σ uncertainty calculated as the lower limit of LiG resolution (through 1980) and the mean systematic error, combined in quadrature. In the anomaly series, the annual uncertainty in air temperature was combined in quadrature with the uncertainty in the 1951-1980 normal. The increased uncertainty after 1945 marks the wholesale incorporation of ship engine-intake thermometer SST measurements (2σ = ±2 C). The air temperature anomaly series is completely obscured by the uncovered uncertainty bounds.

IV. The 60-fold Delusion: Figure 6 displays the ratio of uncovered and published uncertainties, illustrating the extreme of false precision in the official global air temperature anomaly record.

Panel a is (LiG ideal laboratory resolution) ÷ Published. Panel b is total (resolution plus systematic) ÷ Published.

Panel a covers 1850-1980, when the record is dominated by LiG thermometers alone. The LiG lower limit of detection is a hard physical bound.

Nevertheless, the published uncertainty is immediately (1850) about half the lower limit of detection. As the published uncertainties get ever smaller traversing the 20th century, they get ever more unphysical, ending in 1980 at nearly 12× smaller than the LiG physical lower limit of detection.

Panel b covers the 1900-2010 modern period. Joule-drift is mostly absent, and the record transitions into MMTS thermistors (1981) and CRN aspirated PRTs (post-2004). The comparison for this period includes contributions from both instrumental resolution and systematic error.

The uncertainty ratio now maxes out in 1990, with the published version about 60× smaller than the combined instrumental resolution plus field measurement error. By 2010, the ratio declines to about 40×, because ship engine-intake measurements make an increasingly small contribution after 1990 (Kent et al., 2010).

Figure 6: Panel a. (points), the ratio of the annual LiG resolution uncertainties divided by the RMS mean of the published uncertainties (2σ, 1850-1980). Panel b. (points), the ratio of the annual total measurement uncertainties divided by the RMS mean of the published uncertainties (2σ, 1900-2010). Inset: the fraction of SSTs obtained from engine-intake thermometers and hull-mounted probes (a minority). The drop-off of engine-intake temperatures in the historical record after 1990 accounts for the declining uncertainty ratio.

V. The verdict of instrumental methods:

Inventory of error and uncertainty in the published air temperature record:

NASA/GISS: incomplete spatial coverage, urban heat islands, station moves.

Hadley Centre/UEA Climate Research Unit: random measurement error, instrumental or station moves, changes in instrument type or time-of-reading, sparse station data, urban heat island, bias due to changes in sensor exposure (screen type), bias due to changes in methodology of SST measurements.

Berkeley Earth: non-climate related noise, incomplete spatial coverage, and limited efficacy of their statistical model.

No mention by anyone of anything concerning instrumental methods of analysis, in a field completely dominated by instruments and measurement.

Instead, one encounters analyses that convey no attention to instrumental limits of accuracy, to the consequences attending their technical development, or to their operational behavior. This, in a field where knowledge of such things is a pre-requisite.

Those composing the air temperature record display no knowledge of thermometers. Perhaps the ultimate irony.

No appraisals of LiG thermometers as instruments of measurement despite their preponderance in the historical temperature record. Nothing of the very relevant history of their technical evolution, of their reliability or their resolution or detection limits.

Nothing of the known systematic field measurement errors that affect both LiG thermometers and their successor temperature sensors.

One might expect those lacunae from mathematically adept science dilettantes, who cruise shallow numerical surface waters while blithely unaware of the instrumental depths below; never coming to grips with the fundamentals of study. But not from professionals.

We already knew that climate models cannot support any notion of a torrid future. Also, here and here. We also know that climate modelers do not understand physical error analysis. Predictive reliability: a mere bagatelle of modern modeling?

Now we know that the air temperature record cannot support any message of unprecedented warming. Indeed, almost no message of warming at all.

And we also now know that compilers of the air temperature record evidence no understanding of thermometers, incredible as that may seem. Instrumental methods: a mere bagatelle of modern temperature measurement?

The climate effects of our CO2 emissions, if any, are invisible. The rate or magnitude of the 20th century change in air temperature is unknowable.

With this study, nothing remains of the IPCC paradigm. Nothing. It is empty of content. It always was so, but this truth was hidden under the collaborative efforts of administrative embrace, partisan shouters, character assassins, media propagandists, and professional abeyance.

All those psychologists and sociologists who published their profoundly learned insights into the delusional minds, psychological barriers, and inadequate personalities plaguing their notion of climate/science deniers are left with egg on their faces or in their academic beards. In their professional acuity, they inverted both the order and the perceivers of delusion and reality.

We’re faced once again with the enormity of contemplating a science that has collapsed into a partisan narrative; partisans hostile to ethical practice.

And the professional societies charged with embodying physical science, with upholding ethics and method — the National Academies, the American Physical Society, the American Institute of Physics, the American Chemical Society — collude in the offense. Their negligence is beyond shame.

Comments
bdgwx
June 30, 2023 7:34 am

Here is another discrepancy I noticed from your publication.

Your equation (4) is as follows.

2σ(Tmean) = 1.96 * sqrt[ (0.366^2 + 0.135^2) / 2 ] = 0.382 C.

When I plug that into a calculator I get 0.541 C.

The right answer when you plug this into the NIST uncertainty machine using u(x0) = 0.366, u(x1) = 0.135, and y = (x0+x1)/2 is 2u(y) = 0.390.
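
For anyone following along, the three numbers above correspond to three different placements of the factors; a quick check using only the values already quoted in this thread:

    import math

    a, b = 0.366, 0.135   # the two components in equation (4)

    print(1.96 * math.sqrt((a**2 + b**2) / 2))   # ~0.541, division by 2 inside the sqrt
    print(1.96 * math.sqrt(a**2 + b**2) / 2)     # ~0.382, division by 2 outside the sqrt
    print(2 * math.sqrt(a**2 + b**2) / 2)        # ~0.390, coverage factor 2, division outside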

Reply to  bdgwx
June 30, 2023 9:17 am

Nitpick Nick has taught you well, young padowan.

“Correlated and non-normal systematic errors violate the assumptions of the central limit theorem, and disallow the statistical reduction of systematic measurement error as 1 / √N.” — Pat Frank.

Reply to  bdgwx
June 30, 2023 10:56 am

There is no “another discrepancy.”

In eqn. 4, you’ve caught a misprint, bdgwx, for which I thank you.

Given your fixation on the uncertainty in a mean, you should have caught eqn. 4 as a misprint, because it presents the uncertainty in a mean.

In particular, the division by 2 should have been outside the sqrt.

1.96 * sqrt[ (0.366^2 + 0.135^2)]/2 = ±0.382 C.

You were so insistent that the uncertainty diminished in a mean, that you should have been immediately concerned when your calculation increased the uncertainty in the mean.

How did you miss that?

bdgwx
Reply to  Pat Frank
June 30, 2023 12:27 pm

Technically it should be 2σ(Tmean) = 2 * sqrt[ (0.366^2 + 0.135^2) / 2 ] = 0.390 C, but whatever.

Anyway, so why isn’t the division by 30.417 and 12 outside the sqrt in (5) and (6)?

Reply to  bdgwx
June 30, 2023 12:49 pm

Because eqns. 5 & 6 are RMS.

Also your “/2” should be outside the sqrt. The way you’ve written it equates to ±0.552. I.e., you reiterated the misprint you found.

bdgwx
Reply to  Pat Frank
June 30, 2023 1:17 pm

PF: Because eqns. 5 & 6 are RMS.

Let me word it this way…

You seem to agree that for u(Tday) you use sqrt[ n * u(x) ] / n with the division outside the sqrt.

But you say for u(Tmonth) and u(Tannual) you use sqrt[ n * u(x) / n ] with the division inside the sqrt.

Or asked another way, why use RMS for (5) and (6) but not (4)? And what justification is there for using RMS in (5) and (6) anyway especially since it is inconsistent with what NIST, JCGM, and Bevington all say you should be doing?

PF: Also your “/2” should be outside the sqrt. 

Indeed. Copy/paste error. Let’s try that again.

2σ(Tmean) = 2 * sqrt[ (0.366^2 + 0.135^2) ] / 2 = 0.390 C

Reply to  bdgwx
June 30, 2023 2:42 pm

Because they’re the RMS of the uncertainty, not the uncertainty of the mean.

bdgwx
Reply to  Pat Frank
June 30, 2023 5:43 pm

So you did not intend to compute the uncertainty of the monthly and annual means? No?

If no then why did you call it “The uncertainty in Tmean for an average month (30.417 days)” and use it like the uncertainty of the monthly mean further down in the publication?

Reply to  bdgwx
June 30, 2023 7:14 pm

Because it’s the RMS of the resolution uncertainty over a month or over a year.

bdgwx
Reply to  Pat Frank
July 1, 2023 6:39 am

Yes. I know you used RMS in equations (5) and (6).

My questions are

1) Why did you do that?

2) Who told you to do that?

3) Why didn’t you do that in equation (4)?

Reply to  bdgwx
July 1, 2023 7:08 am

In light of how Pat’s work was published in a “predatory journal” (your words), why do you even care?

Reply to  bdgwx
July 1, 2023 10:49 am

It’s all right there in the paper, bdgwx. Suppose you try studying it for what it actually does say, instead of imposing your mistaken idea of what it doesn’t say.

bdgwx
Reply to  Pat Frank
July 1, 2023 12:07 pm

I don’t see the answers to those questions in your publication. Can you point me to where I can find them?

Reply to  bdgwx
July 1, 2023 1:36 pm

Go to Results Section 3.1 LiG Thermometers: Resolution, Linearity, and Joule-Drift, and start reading there.

bdgwx
Reply to  Pat Frank
July 1, 2023 5:40 pm

I did read that section. I don’t see anything in there that justifies using RMS to propagate the LiG resolution uncertainty through a monthly or annual mean.

Reply to  bdgwx
July 1, 2023 6:16 pm

You read but didn’t understand, bdgwx. I’ve explained here over and over again. Let’s just agree that you don’t get it.

Reply to  Pat Frank
July 1, 2023 6:20 pm

None are so blind as those who refuse to see.

bdgwx
Reply to  Pat Frank
July 1, 2023 6:45 pm

We can definitely agree that I don’t get the use of RMS for the propagation of the uncertainty through a monthly or annual mean in this case especially since it wasn’t done for the daily mean. I’ve not shied away from that. It’s the main impetus behind my challenge of equations (5) and (6). I’ve asked you to defend its use and the best responses I’ve gotten so far is “read the section” and “It’s all right there in the publication”. And now I get the sense that I’m being told to buzz off.

Reply to  bdgwx
July 1, 2023 9:40 pm

I’m quite certain you will continue to embarrass yourself.

Reply to  bdgwx
July 1, 2023 11:14 pm

I’ve explained it over and over and over, bdgwx.

The instrumental lower limit of detection (resolution) is the smallest reliable response increment of which the instrument is capable.

The lower limit of detection is a characteristic of the instrument itself. It puts a constant minimum of uncertainty in every single measurement.

Any movement of a 1 C/division LiG thermometer smaller than that increment is meaningless. See paper Table 1 and the related discussion.

I don’t want to have to keep repeating this. I won’t be explaining it further. Please figure it out.

bdgwx
Reply to  Pat Frank
July 2, 2023 5:40 am

I’ve explained it over and over and over, bdgwx.

I’ve not yet seen an explanation for the use of RMS for propagating uncertainty through a monthly or annual mean.

The instrumental lower limit of detection (resolution) is the smallest reliable response increment of which the instrument is capable.

I have no issue with that.

The lower limit of detection is a characteristic of the instrument itself. It puts a constant minimum of uncertainty in every single measurement.

I have no issue with that.

I don’t want to have to keep repeating this. I won’t be explaining it further. Please figure it out.

That’s too bad. Let it be known that I gave you the opportunity to defend the use of RMS when propagating the instrumental uncertainty through a monthly or annual mean.

Reply to  bdgwx
July 2, 2023 10:31 am

Eqns. 5 and 6 are not propagations of uncertainty, bdgwx. They’re root-mean-squares of uncertainty.

You’re wrong from the start. I’ve told you repeatedly that you’re wrong and why you’re wrong.

Then you go and repeat your original mistake.

Let it be known that you’re insistent in your ignorance, and that you push your opinion when you don’t know what you’re talking about.

Reply to  Pat Frank
July 2, 2023 8:20 am

These folks don’t believe that resolution applies to each and every measurement. To them it is a variable amenable to statistical manipulation so that it can be reduced through division by “n”, thereby allowing one to proclaim fantastic precision and accuracy.

Reply to  Jim Gorman
July 2, 2023 9:00 am

Also note they all fail to recognize what the term “lower limit” implies. The ridiculous yapping about RMS calculations is nothing but a lame attempt at Stokesian nitpicking. It is obvious they didn’t read anything, instead just skimmed without understanding, looking for anything to whine about.

Reply to  karlomonte
July 2, 2023 4:27 pm

If a measurement device has a resolution of .001 units but can only recognize a change of .01 units then what good is the resolution?

That’s what they are trying to shoot holes in. If they had *ever* lived in the real world of measurement they would understand – or maybe they wouldn’t. Cultists don’t have to believe in reality.

Reply to  Tim Gorman
July 2, 2023 8:18 pm

Its the Bellmanian yardstick-micrometer, a huge breakthrough in technology.

Reply to  karlomonte
July 3, 2023 4:55 am

Yep!

Reply to  Jim Gorman
July 2, 2023 3:42 pm

Let me try a little experiment. I have the CET monthly values for June. They are given to 0.1 °C, so by your logic it isn’t possible to know the average to less than 0.1°C.

If I look at the average of all Junes since 1900 I get an average of 14.32°C. But you would insist that we can only know the average as 14.3°C. (I’m only interested in the exact average here, I’m not treating it as a sample.)

So now let’s reduce the lower limit of detection of the instrument by rounding all the monthly values to the nearest degree. Obviously that resolution applies to each and every monthly value and cannot be reduced by averaging. So when I take the average of the rounded values it comes to 14°C, to the nearest integer. But what if I cheated and looked at the next digit of the average? It comes to 14.3°C. That’s a coincidence: exactly what our higher resolution data said.

What about going to the next digit.

Hi-res = 14.32, lo-res = 14.31. Not identical, but only a difference of 0.01°C.

Let’s try a larger sample size. Now I’ll use all the months and go back to the start of the 19th century, for about 2,600 monthly values.

Now to three decimal places I get

hi-res = 9.406°C, lo-res = 9.407°C. A difference of just 1/1000 of a degree, based on data that was only recorded to the nearest integer.

Let’s really push our luck and round to the nearest 10 degrees. Average monthly values are now only 0, 10 or 20°C, so I’m assuming this isn’t going to be very accurate.

Really lo res average = 9.434°C.

Even I was surprised by that. 2,600 monthly values where each had an uncertainty of ±5°C, yet we could still get the same average to within about three hundredths of a degree.
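
A sketch of the same exercise with synthetic stand-in values (the CET series itself is not reproduced here), for anyone who wants to try it:

    import numpy as np

    rng = np.random.default_rng(2)

    # ~2,600 synthetic "monthly values" standing in for the real series; only the
    # effect of rounding before averaging is of interest here
    monthly = rng.normal(9.4, 5.5, size=2600)

    for step in (0.1, 1.0, 10.0):
        rounded = np.round(monthly / step) * step   # reduce the resolution
        print(step, round(rounded.mean(), 3), round(monthly.mean(), 3))
    # Even at 10-degree steps the rounded mean lands within a small fraction of
    # the step size of the full-resolution mean.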

Reply to  Bellman
July 2, 2023 3:53 pm

You are joking right?

Here is a simple question, how do you know the average 14.32 is correct? Could it be 14.39 or 14.29? I need a physical answer, not just, that is what the math gives. Can you ever know what the measurements were to the hundredths digit so that you can say with 100% assurance you know the correct average?

Reply to  Jim Gorman
July 2, 2023 4:03 pm

You are joking right?

Of course. I know no amount of evidence will change your position; it’s an article of faith. I did it more for myself, because I was beginning to doubt my understanding. It’s always helpful to test your own beliefs.

Here is a simple question, how do you know the average 14.32 is correct?

I don’t, and that’s not the point of the exercise. The point is to see how much reducing the resolution results in a reduction of the resolution of the mean.

I could have done this with randomly generated numbers and then we can just say the average of those numbers is the true mean, but I know you’d then object on the grounds they weren’t real values.

Reply to  Bellman
July 2, 2023 4:31 pm

I don;t and that’s not the point of the exercise”

Of course it’s the point!

You can’t just ignore uncertainty! It doesn’t go away just because you want to try and work out a hypothetical that doesn’t actually exist in the real world.

“If I look at the average of all Junes”

The average of the STATED VALUES! Where is your uncertainty?

As usual, you have assumed that all uncertainty is random, Gaussian, and cancels! YOU DO IT EVERY SINGLE TIME EVEN THO YOU CLAIM YOU DON’T!



Reply to  Tim Gorman
July 2, 2023 5:11 pm

You can’t just ignore uncertainty!

I was not ignoring uncertainty. I was adding uncertainty in the form of reduced resolution.

It doesn’t go away just because you want to try and work out a hypothetical that doesn’t actually exist in the real world.

Ah yes, that real world again. Where averages don’t exist, but you can still pontificate on their uncertainty.

Let me explain again what the point was. You claim that if a measurement lacks resolution, taking an average will not increase the resolution. The point is to demonstrate by an example why that is not the case. My rounded values have huge uncertainty but somehow were capable of reproducing the average obtained at the higher resolution.

I understand why you will have to drag up any irrelevance you can find and keep rejecting that this is even the point, because it’s a worrying test of your religious beliefs.

The average of the STATED VALUES! Where is your uncertainty?

Please try to think. It does not matter how accurate the data is. They are just numbers being used to illustrate the point that it is possible to increase resolution of an average.

As usual, you have assumed that all uncertainty is random, Gaussian, and cancels!

Is this some sort of catechism your religion demands you memorize? You just keep repeating it regardless of the context.

I have not made any assumptions about the distribution of the data or the uncertainty. And I don’t need to assume the data is random and cancels, because all I did was calculate the average and see what the result was.

Reply to  Bellman
July 2, 2023 5:56 pm

My rounded values have huge uncertainty

No, they don’t. They just hide your assumption of perfect accuracy. All your roundings rely on that assumption.

Reply to  Pat Frank
July 2, 2023 6:11 pm

You think rounding a monthly temperature to the nearest 10°C doesn’t increase the uncertainty?

It doesn’t matter if the data is accurate, it’s just an experiment to compare one data set against another. As I said it could just as easily have been random numbers which I could state in a thought experiment were perfect. The point is to demonstrate that the resolution of an average can be greater than the resolution of individual values. A claim which some seem to take as an article of faith.

Reply to  Bellman
July 3, 2023 4:03 am

It doesn’t matter if the data is accurate,”

Which is why you will NEVER understand uncertainty. You can’t explain uncertainty using an example where you completely ignore it and assume the stated values are 100% accurate!



Reply to  Tim Gorman
July 3, 2023 5:41 am

Please at least try to understand the context. It’s a help if you at least read to the end of a sentence.

“It doesn’t matter if the data is accurate, it’s just an experiment to compare one data set against another.

You can’t explain uncertainty using an example where you completely ignore it

I can if the point of the experiment is to demonstrate how little the resolution affects the average. That is what was being claimed. That resolution imposes a limit on the resolution of the average. Nothing about if there was also a systematic error.

Reply to  Bellman
July 3, 2023 6:41 am

Argumento by Handwaving.

The truth is you and bw don’t care a fig about uncertainty, its just an inconvenient blockade in the path to The Promised Land.

Reply to  Bellman
July 3, 2023 6:59 am

“It doesn’t matter if the data is accurate, it’s just an experiment to compare one data set against another.””

Says the person that has no relationship with the real world at all!

“I can if the point of the experiment is to demonstrate how little the resolution affects the average.”

No, you can’t. You think you can but that’s because you don’t understand how resolution works at all. It’s your total lack of real world experience coming out just like it always does! Measurements live in the real world, not unreal hypotheticals that you dream up.

Reply to  Bellman
July 3, 2023 4:00 am

Please try to think. It does not matter how accurate the data is.”

That just about explains everything you say.

Reply to  Bellman
July 2, 2023 5:53 pm

I was beginning to doubt my understanding.”

A good first step. Your example is wrong from the start.

Tim is correct.

Your example assumes perfect accuracy and perfect rounding. Nick Stokes’ mistake.

Reply to  Pat Frank
July 2, 2023 6:30 pm

Then you do your simulation that demonstrates it’s impossible to ever increase resolution by averaging.

Of course, if the data isn’t true then neither will the result be. Nobody says different. But what is being claimed is that resolution alone makes it impossible for the average to be more precise than the individual measurements. If you don’t like the “perfect rounding” I can easily add some extra randomness to the rounding.

Reply to  Bellman
July 2, 2023 6:45 pm

“””””But what is being claimed is that resolution alone makes it impossible for the average to be more precise than the individual measurements. “””””

To refute this, you need to explain how simple measurements with a wooden ruler marked at 1/8th of an inch can be averaged to achieve a calculated 1/1000th resolution.

Companies everywhere will pay you millions for the knowledge so they no longer need to buy expensive measurement devices.

Heck I could just use a giveaway Harbor Freight voltmeter and set my camera to take 1000 photos at a 1 second interval. Then I could average the reading to get millivolt resolution.

Reply to  Jim Gorman
July 2, 2023 7:07 pm

To refute this, you need to explain how simple measurements with a wooden ruler marked at 1/8th of an inch can be averaged to achieve a calculated 1/1000th resolution.

What do you think I’ve just done. I demonstrated how it was possible for an average using data measured to the nearest degree, or 10 degrees to be almost as precise as one measured to a tenth of a degree.

I’ve no idea what your ruler example is saying. Again are you talking about an average of different things or multiple measurements of the same thing? I’ve explained why that is different.

Companies everywhere will pay you millions for the knowledge so the no longer need to buy expensive measurement devices.

I somehow doubt it. It’s a bit too late to be patenting the idea of the standard error of the mean, or of taking multiple measurements to decrease uncertainty. And, as has to keep being pointed out, there are limits, and diminishing returns.

See the chapter in Bevington I keep pointing to. It might seem like in theory you could get unlimited precision by taking multiple measurements, but that improvement only increases with the square root of your sample size. If you have an uncertainty of 0.1 and want to “average” it to 0.001, you would need around 10,000 measurements. A million measurements would only allow one extra digit. It’s going to be easier to invest in a more precise measuring device. And then there’s the inevitability of systematic errors. The more precision you try to achieve, the more likely a tiny bias will start to become relevant.
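
The square-root scaling in two lines (illustrative numbers only, and only where the 1/√N reduction applies at all):

    import math

    u_single = 0.1                          # single-measurement uncertainty
    print((u_single / 0.001) ** 2)          # 10000 measurements for a 100-fold improvement
    print(u_single / math.sqrt(1_000_000))  # a million measurements buys roughly one more digit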

Reply to  Bellman
July 2, 2023 9:36 pm

Unphysical nonsense—perfect description of the world you inhabit.

Reply to  Bellman
July 3, 2023 4:15 am

almost as precise”

By ignoring propagation of uncertainty and assuming the stated values are all 100% accurate.

“Again are you talking about an average of different things or multiple measurements of the same thing? I’ve explained why that is different.”

But you never treat them differently. Why is that?

“standard error of the mean”

One more time. The standard error of the mean only tells you how close you are to the population mean. It does *NOT* tell you the accuracy of that population mean. Yet you continue, every single time, to assert that SEM *is* the accuracy of the population mean. Thus ignoring totally the propagation of uncertainty from the individual measurements to the average! It’s always “uncertainty is always random, Gaussian, and cancels” so that how precisely we can calculate the average is the uncertainty of the average!

Reply to  Tim Gorman
July 3, 2023 6:02 am

But you never treat them differently. Why is that?

You mistyped. That should read “But you keep explaining why they are different and I choose not to remember. Why is that?”

The standard error of the mean only tells you how close you are to the population mean.

Which in the real world is what is meant by the uncertainty of the mean, an indication of how close your measurement is to the population mean.

It does *NOT* tell you the accuracy of that population mean.

But in your world the population mean might be different to the population mean.

Yet you continue, every single time, to assert that SEM *is* the accuracy of the population mean.

I may well have said something like that in error. But it isn’t what I’m saying. The SEM is analogous to the precision of the mean. It is not the accuracy of the mean, just a part of it. That is it does not tell you the trueness of the mean, just the precision. If every measurement is off by 10cm, the sample mean will be off by 10cm in addition to any random uncertainty.

Reply to  Bellman
July 3, 2023 7:07 am

Which in the real world is what is meant by the uncertainty of the mean, an indication of how close your measurement is to the population mean.”

That is *NOT* what it means in the real world. In the real world it means “the standard deviation of the sample means”. If you would start to use *that* definition perhaps you could gain a glimmer of understanding for what is being discussed. If you will, it is a measure of sampling error – it has absolutely NOTHING to do with the accuracy (i.e. the uncertainty) of the mean.

You speak the words but have no understanding of what they actually mean – it is either deliberate misunderstanding or ignorance. Only you know for sure.

Reply to  Bellman
July 3, 2023 11:42 am

The SEM is analogous to the precision of the mean. It is not the accuracy of the mean, just a part of it. That is it does not tell you the trueness of the mean, just the precision.

This is ridiculous. Step by step

  1. Population {1,2,3,4,5,6,7,8,9,10}
  2. µ = 6 σ = 2.9
  3. Sample 1 [1, 3, 5], Sample 2 [4, 5, 8], Sample 3 [2, 5, 9] and Sample 4 [5, 7, 10]
  4. Sample means 1 -> 3, 2 -> 5.7, 3 -> 5.3 and 4 -> 7.3 (keep 1 decimal for rounding).
  5. Mean x̅ = 5.3 and s = 1.8
  6. µₑₛₜᵢₘₐₜₑ𝒹 = 5
  7. σₑₛₜᵢₘₐₜₑ𝒹 = 1.8*√3 = 3.1

Well let’s try it with larger samples and more samples.

  1. Population {1,2,3,4,5,6,7,8,9,10}
  2. µ = 6 σ = 2.9
  3. Samples [1,2,3,4] [5,6,7,8] [1,2,9,10] [2,6,9,4] [3,7,9,10] [4,6,7,8] [3,5,9,10] [6,7,8,9] [1,3,5,7] [2,3,5,8]
  4. Sample Means => 2.5, 6.5 ,5.5, 5.3, 7.3, 6.3, 6.8, 7.5, 4, 4.5
  5. Mean x̅ =5.6 and s = 1.6
  6. µₑₛₜᵢₘₐₜₑ𝒹 = 6
  7. σₑₛₜᵢₘₐₜₑ𝒹 = 1.6*√4 = 3.2

More and larger samples gives a better estimate of the mean. If the population σ is rounded to one significant digit, all the calculations for standard deviation match as they should.
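
The second, larger exercise above, as a few lines of code (a sketch reproducing only the numbers already listed):

    import numpy as np

    samples = [[1, 2, 3, 4], [5, 6, 7, 8], [1, 2, 9, 10], [2, 6, 9, 4], [3, 7, 9, 10],
               [4, 6, 7, 8], [3, 5, 9, 10], [6, 7, 8, 9], [1, 3, 5, 7], [2, 3, 5, 8]]

    means = np.array([np.mean(s) for s in samples])
    print(round(means.mean(), 1))      # 5.6, the estimate of the population mean
    s = means.std(ddof=1)              # standard deviation of the sample means
    print(round(s, 1))                 # ~1.6
    print(round(s * np.sqrt(4), 1))    # ~3.2, estimated population sigma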

Reply to  Jim Gorman
July 3, 2023 1:22 pm

More and larger samples gives a better estimate of the mean.

Ignoring the fact you still haven’t figured out why there is no point in taking multiple samples – you are correct. That’s the point. Larger sample size gives you a better estimate of the mean.

But that’s on the assumption that your measurements and sampling is correct. If your measuring instrument had a systematic error and added 1 to all your values the sample means would be 1 greater, and increasing the sample size will not fix that. Hence the result will be precise but not true.

Reply to  Bellman
July 3, 2023 2:00 pm

And more handwaving.

old cocky
Reply to  Bellman
July 3, 2023 3:48 pm

But that’s on the assumption that your measurements and sampling is correct. If your measuring instrument had a systematic error and added 1 to all your values the sample means would be 1 greater, and increasing the sample size will not fix that. Hence the result will be precise but not true.

Measurement error|uncertainty is orthogonal to sampling error.

One can have uncertainty in the values due to measurement limitations even with the full (here, small) population.

One can also have sampling errors with (as here) discrete values.

Jim’s example illustrates the latter.

Something which everybody seems to have below their conscious level is that an average (in this case arithmetic mean) is just the ratio of the sum of values divided by the number of values. Converting this to decimal notation tends to give spurious precision. The mean is only rarely part of the distribution, whereas the modes are, and the median is for odd numbers of values.

Reply to  Bellman
July 3, 2023 4:40 pm

You didn’t bother to go look at the reference I gave you, did you? You haven’t read Pat’s study, you haven’t read Taylor, you haven’t read Bevington. All you’ve done is cherry pick things you can use as a troll looking for replies.

The SEM is better known as THE STANDARD DEVIATION OF THE SAMPLE MEANS. Means as in plural. If all you have is one sample mean then you do not have a standard deviation because you don’t have a distribution. You truly have no idea whether that sample represents the population mean or not. You might *hope* it does, you might *WISH* that it does, but you have exactly zero ways to show it. Multiple samples are what allows you to *show* that the sample means are honing in on the population mean.

I am glad as all git out that you have *NEVER* designed anything capable of causing harm to the world. I pray you never do.

Reply to  Tim Gorman
July 3, 2023 5:14 pm

You didn’t bother to go look at the reference I gave you

I’ve no idea what this is in aid of. I was agreeing with Jim in the comment you are replying to.

The SEM is better known as THE STANDARD DEVIATION OF THE SAMPLE MEANS.

Maybe in your small part of the world. In most places it’s better known as the Standard Error of the Mean, hence SEM.

Even in the GUM it’s called the Standard Deviation of the Mean. Never “means” never “sample mean”.

Taylor only calls it the standard deviation of the mean or SDOM.

If all you have is one sample mean then you do not have a standard deviation because you don’t have a distribution.

Explained many times why this is not just wrong, it goes against every practical requirement for sampling. I really don’t know if T & J are really this dense, or just have a huge cognitive dissonance, or if they are just trolling.

Here’s a reference for you, the same one I gave to Jim, which he then ignored.

Fortunately, you don’t need to repeat your study an insane number of times to obtain the standard error of the mean. Statisticians know how to estimate the properties of sampling distributions mathematically, as you’ll see later in this post. Consequently, you can assess the precision of your sample estimates without performing the repeated sampling.

https://statisticsbyjim.com/hypothesis-testing/standard-error-mean/

Reply to  Bellman
July 3, 2023 5:27 pm

In order to have a standard deviation you HAVE to have multiple data points. That’s what “standard deviation of the mean” implies!

Even your reference states “sample estimates”, as in multiple samples. It also says “sampling distributions”, again plural distributions.

I truly believe your major problem is that you simply can’t read.

Reply to  Tim Gorman
July 4, 2023 4:47 am

I truly believe your major problem is that you simply can’t read.

Whereas I think your problem is you can read, you just choose not to.

Even your reference states “sample estimates”, as in multiple samples. It also says “sampling distributions”, again plural distributions.

Estimates and distributions, as in you may want to use the technique more than once.

Read the whole clause

Statisticians know how to estimate the properties of sampling distributions mathematically

Nothing about them needing multiple sampling distributions in order to work out one sampling distribution. It’s just saying they know how to do it for any number of different distributions.

He literally tells you that you don’t need repeated sampling, and goes on to show how to calculate the sampling distribution of IQ scores without needing even one sample. Yet you insist on claiming he actually says the opposite, because of your inability to understand what a plural means.

Reply to  Bellman
July 4, 2023 6:03 am

Whereas I think

No, you don’t.

your problem is you can read, you just choose not to.

Yeah, its true, the irony is thick today.

Reply to  Bellman
July 4, 2023 8:19 am

Estimates and distributions, as in you may want to use the technique more than once.”

“MAY” want to use it more than once? No, you *have* to use more than one sample in order to have a distribution. Even the CLT requires MORE THAN ONE MEAN. You simply can’t develop a distribution from one value.

“Nothing about them needing multiple sampling distributions in order to work out one sampling distribution. It’s just saying the know how to do it for any number of different distributions.”

You *still* can’t read! The operative words are “sample distributionS”. (my capital S, tpg)

Reply to  Tim Gorman
July 4, 2023 9:31 am

No, you *have* to use more than one sample in order t have a distribution.”

Simply not worth discussing this further. It clearly has some deep religious significance to Tim, which prevents any actual words or logic to prevail.

Reply to  Bellman
July 3, 2023 6:26 pm

Back to cherry picking I see. Did you read the rest of the article?

He gave an example of how to to assess the precision of the sample estimates using the following.

“””””For this example, I’ll use the distribution properties for IQ scores. These scores have a mean of 100 and a standard deviation of 15. To calculate the SEM, I’ll use the standard deviation in the calculations for sample sizes of 25 and 100″””””

Guess what 100 and 15 are? The population mean is 100 and the standard deviation is 15.

Don’t confuse the sample size of 100 with the population mean of 100. The population mean could have been calculated from 10,000 scores or more. You sample all those to get a number of sample means to develop a distribution of all the means of the samples.

The sampling sizes are 25 and 100.

Guess what, you don’t even need to take one sample to know that IF YOU SAMPLE CORRECTLY, the SEM will be 3 for a size of 25 or it will be 1.5 for a size of 100.

That is what he meant by the clip that you posted. Wake up and do some real studying. There are free on line courses on sampling that explain these concepts. You would be wise to take one.

One last note. σ and “s” and “n” have an indirect relationship signified by:

s = σ/√n

That can be rewritten as:

σ = s•√n

Since σ is a statistical parameter of a population, it is constant. That means as “n” goes up, “s” goes down. And, as “n” goes down “s” goes up.

“s” is nothing more than a standard deviation that describes the interval within which the estimated mean will lie. For one sigma, that interval covers 68% of the values in the sample mean distribution. That is why large sample size and lots of samples are important.

Reply to  Jim Gorman
July 3, 2023 7:09 pm

This is just getting painful. As I say you are either incapable of understanding this or are just trolling.

The population mean could have been calculated from 10,000 scores or more.

The population mean is not calculated from any sample; that would be a sample mean. The population mean is the mean of the population. But in this case it’s the standard normalization for IQ scores, so they are 100 and 15 by design.

You sample all those to get a number of sample means to develop a distribution of all the means of the samples

Explicitly not what he’s doing. He says straight up, “To calculate the SEM, I’ll use the standard deviation in the calculations for sample sizes of 25 and 100.”

Guess what, you don’t even need to take one sample to know that IF YOU SAMPLE CORRECTLY, the SEM will be 3 for a size of 25 or it will be 1.5 for a size of 100.

You realize that “guess what” is exactly what I pointed out to you a short time ago, and you said was impossible. You do not need to have one actual sample to know what the sampling distribution is. It can be calculated from knowing the population mean and standard deviation.

That is what he meant by the clip that you posted.

Guess what? That’s the point I was making.

Wake up and do some real studying.

Speechless.

One last note. σ and “s” and “n” have an indirect relationship signified by:
s = σ/√n

Gosh, what an equation. Never seen that before. It’s almost as if you don’t need to take thousands of samples to get the sampling distribution. Think of all that time we could have saved.

That can be rewritten as

That’s the thing about equations, you can write them in lots of ways, some more helpful than others.

Try

N = [σ / s]²

Hey, now if we knew the population and sample standard deviation we could work out what our sample size was.

That means as “n” goes up, “s” goes down.

Hello, that gives me a cunning idea. What if we increased sample size to reduce the uncertainty of our sample mean?

And, as “n” goes down “s” goes up.

That gives me another idea, but not so useful.

For one sigma the estimated mean will have a probability of 68% of the values in the sample mean distribution.

As long as the sampling distribution is normal.

That is why large sample size and lots of samples are important.

Have you actually understood a word of what was said in the article?

Reply to  Bellman
July 2, 2023 10:12 pm

Then you do your simulation that demonstrates it’s impossible to ever increase resolution by averaging.

They’ve been done.

Demonstration 1. NIST

Demonstration 2: Inter-laboratory comparative calibrations.

Both discussed under 3.1.1. You’d have known if you’d read the paper.

If you don’t like the “perfect rounding” I can easily add some extra randomness to the rounding.

You can’t truly believe anyone here would fall for that.

Reply to  Pat Frank
July 3, 2023 5:03 am

They’ve been done.

Both articles are behind paywalls, but neither of the abstracts mention the effects on an average.

Reply to  Bellman
July 3, 2023 5:26 am

Discussed in the paper you’ve not read, Bellman.

Reply to  Pat Frank
July 3, 2023 6:55 am

bellman and bdgwx have never read either Taylor or Bevington from start to finish and worked out all the examples. They have no basic understanding of the context of any of the formula derivations in either book. They are equation cherry-pickers looking for things they can throw against the wall in the faint hope something might stick.

I even hate to call them statisticians because *true* statisticians understand that what they derive are statistical descriptors and not actual measurements. A true statistician understands that the descriptors have to be applied properly in order to fit the real world, otherwise the descriptors are useless.

Reply to  Bellman
July 3, 2023 4:09 am

Nobody says different.”

YOU do. You do it every single time you make a post.

” But what is being claimed is that resolution alone makes it impossible for the average to be more precise than the individual measurements. If you don’t like the “perfect rounding” I can easily add some extra randomness to the rounding.”

The “rounding” is for significant figure purposes, not uncertainty. You can’t even distinguish between the two! Significant figures are used to convey to others the resolution of the measurements you have made so they don’t think your measuring equipment has more resolution than it actually does. Uncertainty is meant to convey the unknowns affecting your measurements and what their magnitude might be.

The fact that you continually confuse the two or try to conflate them just confirms that you have not learned anything in the past two years about MEASUREMENT or the science of metrology!

It’s why averages can’t improve resolution. It leads to others believing that your measuring equipment has more resolution than it actually does! Averaging simply can’t increase resolution.

Reply to  Tim Gorman
July 3, 2023 5:50 am

YOU do. You do it every single time you make a post.

Then you will have no problem finding examples of posts where I’ve said an average of untrue measurements will give a true average.

The “rounding” is for significant figure purposes, not uncertainty.

Only for convenience. Do you think it would be a different result if I’d rounded to a non-integer?

Significant figures are used to convey to others the resolution of the measurements you have made so they don’t think your measuring equipment has more resolution than it actually does.

Are you going to complain to Pat Frank that he rounds all his uncertainty figures to 3 decimal places? That’s claiming he knows the uncertainty to a thousandth of a degree.

It’s why averages can’t improve resolution.”

That’s what your model says. The evidence says different.

Reply to  Bellman
July 3, 2023 6:47 am

That’s what your model says. The evidence says different.

Translation from the Trendology Manual:

“The floot floot did a boom boom on the jim jam.”

Reply to  karlomonte
July 3, 2023 6:54 am

Do you think anyone here cares about your little quips?

Reply to  Bellman
July 3, 2023 8:18 am

Heh. More downvotes! Please!

Reply to  karlomonte
July 3, 2023 1:24 pm

Why would I want to downvote your comment? I think it speaks for itself.

Reply to  Bellman
July 3, 2023 8:58 pm

You missed the key word, it flew over your head, just like my little hint the other day did.

Reply to  Bellman
July 3, 2023 7:01 am

Then you will have no problem finding examples of posts where I’ve said an average of untrue measurements will give a true average.”

bellman: “It doesn’t matter if the data is accurate”

That just about says it all!

Reply to  Jim Gorman
July 2, 2023 4:25 pm

Yep. They want to call the average of different things a MEASUREMENT and then say their measuring device for the average has a higher resolution than the measuring devices used for all the individual measurements.

  1. The average is not a measurement.
  2. They don’t have a higher resolution measuring device with which to measure the average.

Never let it be said that bellman and bdgwx live in the same reality the rest of us exist in.

If the average is *NOT* a measurement then their whole argument falls to pieces.

June 30, 2023 3:18 pm

OK, I’m a bit late to this, and have only had a brief glance through the paper, but this seems to be the usual sticking point.

The uncertainty in Tmean for an average month (30.417 days) is the RMS of the daily means

Is there any justification for the uncertainty of a monthly average for an instrument being the RMS? RMS is just the uncertainty in the daily reading, not the mean.

As an aside, whilst it’s good that Pat Frank has realized it was wrong to use (n / (n – 1)) as he did in the previous paper, it still seems rather circular to write it as √(N × σ² / N), rather than admit this just reduces to σ.

Reply to  Bellman
June 30, 2023 4:07 pm

It’s the RMS of the uncertainty from the LiG lower limit of detection. Read the paper, then comment.

There is no such realization. The 2010 paper discussed the loss of a degree of freedom, requiring the N-1 denominator.

Reply to  Pat Frank
June 30, 2023 4:56 pm

The 2010 paper discussed the loss of a degree of freedom, requiring the N-1 denominator.

My point was more that given N is always large, N / (N – 1) is as close to 1 as you could want. Certainly nothing that will impact your 3-figure uncertainties. However, now you mention it, why are you allowing for a loss of a degree of freedom when you are looking at “adjudged” uncertainties?

While I’m being pedantic, you keep saying throughout this new paper that 2σ = 1.96 × whatever. Surely that should be 2σ = 2 × whatever, or the 95% confidence interval = 1.96 × whatever.

Reply to  Bellman
June 30, 2023 5:29 pm

The loss of a DoF is fully discussed. I recommend you read it.

On the other, you’re right. An oversight.

Reply to  Pat Frank
June 30, 2023 5:19 pm

It’s the RMS of the uncertainty from the LiG lower limit of detection. Read the paper, then comment.”

Why does it matter that the uncertainty comes from the resolution? Like your previous work you just throw this concept in with no explanation I can find, so I was hoping you could point me to the reason. You include references to most of the details, but not this one.

It seems counter-intuitive to be claiming that the uncertainty of a monthly average is identical to that for an annual average and to a 30-year average, all with uncertainties quoted to the 1/1000th of a degree.

Reply to  Bellman
June 30, 2023 7:12 pm

It’s all there, Bellman. Get back to me when you’ve read the paper.

If you don’t understand instrumental resolution, you shouldn’t be commenting.

Reply to  Pat Frank
July 1, 2023 5:17 am

“It’s all there, Bellman. Get back to me when you’ve read the paper.”

You’re the author. I was hoping you could explain, or at least point me in the right direction.

I do understand instrument resolution, that’s why I’m asking for your justification for using RMS for the uncertainty of a mean. If I had 100 instruments each measuring a different temperature, or the same instrument measuring 100 different temperatures, I would not assume that the uncertainty of their average was the same as the average uncertainty, especially if the uncertainty was due to resolution. The only way RMS makes sense is if you think it’s plausible that the resolution causes an identical error in each measurement.

Reply to  Bellman
July 1, 2023 6:32 am

Why do you think RMS requires identical error in each measurement? RMS stands for root-mean-square. Finding the RMS value of a sine wave doesn’t require each value to be identical, why would it for error?
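
For illustration only, a minimal Python sketch of an RMS computed over values that are all different (an assumed unit-amplitude sine wave, nothing from the thread):

import math

# one cycle of a unit-amplitude sine wave, sampled at 1000 points
xs = [math.sin(2 * math.pi * k / 1000) for k in range(1000)]

rms = math.sqrt(sum(x * x for x in xs) / len(xs))
print(rms)   # ≈ 0.707, i.e. 1/sqrt(2), even though the individual values all differ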

Reply to  Tim Gorman
July 1, 2023 7:27 am

“Why do you think RMS requires identical error in each measurement?”

It doesn’t. It’s just the same as a standard deviation. It averages the squares of all the different errors, and gives you the uncertainty of that set of measurements.

What it doesn’t do is give you the uncertainty of the mean of all those measurements.

The only way that would make sense is if the correlation between all the measurements was 1, that is, there is no independence in the errors. That’s why I’m saying that Frank’s argument requires all errors to be identical.

Reply to  Bellman
July 1, 2023 7:57 am

It doesn’t.”

That isn’t what you said.

Bellman: “The only way RMS makes sense is if you think it’s plausible that the resolution causes an identical error in each measurement.”



Reply to  Tim Gorman
July 1, 2023 8:38 am

Not sure what you think I said. My point is it only makes sense to consider the RMS as the uncertainty of a mean if all errors are identical.

Reply to  Bellman
July 1, 2023 1:33 pm

if all errors are identical

What is a lower limit of instrumental resolution?

Reply to  Pat Frank
July 1, 2023 2:27 pm

Resolution:

smallest change in a quantity being measured that causes a perceptible change in the corresponding indication

By lower limit you mean the stated resolution uncertainty is the smallest possible. It might be larger but can’t be smaller.

None of this has anything to do with errors being constant.

If a device has a resolution of ±0.5°C, say because it’s rounded to the nearest degree, the stated value will have an error of anything from -0.5°C to +0.5°C depending on the actual temperature. Another device measuring a different temperature will also have an error between -0.5°C to +0.5°C. There is no certainty that they will have exactly the same error.

Reply to  Bellman
July 1, 2023 5:18 pm

because it’s rounded to the nearest degree,

Resolution isn’t rounded to the nearest degree. It’s the detection limit, below which indications have no meaning.

Your understanding of resolution is lacking Bellman. Your comments show that. You should stop commenting on it here.

Reply to  Pat Frank
July 1, 2023 5:55 pm

Resolution isn’t rounded to the nearest degree.

It was an example of a type of resolution limit. It doesn’t matter why you can’t detect changes beyond a limit, the point is that you do not have identical errors.

Reply to  Bellman
July 1, 2023 11:03 pm

It was an example of a type of resolution limit.

Not in the sense we’re discussing here.

Reply to  Bellman
July 2, 2023 5:15 am

How do you know you don’t have identical errors? You *STILL* haven’t grasped the concept of uncertainty!

Reply to  Tim Gorman
July 2, 2023 6:54 am

Because the probability is vanishingly small that say 30 random values will all be identical.

I know this is a futile argument with you as you take pride in not understanding probability. Was it you or Jim who claimed that if you tossed a coin a million times it was almost certain you would get 100 heads in a row? That’s the problem here. You think that unknown means everything is equally likely.

Reply to  Bellman
July 2, 2023 6:57 am

The standard mantra of the trendologists:

“all errors cancel!”
“we don’t even have to think about them!”

Reply to  karlomonte
July 2, 2023 8:05 am

Stop whining. Errors do not all cancel and you do have to think about them. Time for you to get back under the bridge.

Reply to  Bellman
July 2, 2023 9:28 am

Hi, PeeWee!

Reply to  Bellman
July 2, 2023 10:33 am

The lower limit of instrumental detection is not random. Your analysis is wrong from the very start, Bellman.

Reply to  Pat Frank
July 2, 2023 6:04 pm

The lower limit of instrumental detection is not random.

Again, the uncertainty limits aren’t random, but I’m talking about the errors.

Your analysis is wrong from the very start, Bellman.

The thing is I’m not really doing any analysis, just looking at yours.

I’m not the one who’s written a paper, you are.
I’m not the one who wants to demonstrate that every data set is fundamentally flawed, you are.
I’m not the one who it is claimed has destroyed the credibility of the official climate-change narrative, you are.

Reply to  Bellman
July 2, 2023 8:23 pm

The thing is I’m not really doing any analysis,

This is so true.

just looking at yours.

When are you going to start?

Reply to  Bellman
July 2, 2023 10:00 pm

“I’m talking about the errors.

The discussion is about uncertainty.

“The thing is I’m not really doing any analysis, ...”

Agreed.

…just looking at yours.

Hard to credit, given the content of your posts.

The results fell out of the analysis, Bellman. “Want” had nothing to do with it.

The “climate-change narrative” cannot survive a temperature record demonstrated to convey no information about the rate or magnitude of warming since 1900.

Nor can it survive climate models shown to have no predictive value.

Falsification of the narrative is complete.

Reply to  Bellman
July 3, 2023 4:23 am

Again, the uncertainty limits aren’t random, but I’m talking about the errors.”

And here we go again. Systematic bias always cancels.

u_total = u_systematic + u_random. Therefore
u_random = u_total – u_systematic

If you don’t know u_systematic then you can never know u_random.

If you don’t know u_random then how can you assume *anything* cancels?

“I’m not the one who it is claimed has destroyed the credibility of the official climate-change narrative, you are.”

The *really* funny thing is that you believe you have somehow falsified the study done by Pat!

Reply to  Bellman
July 1, 2023 10:44 am

The point of eqns. 5 and 6 is to calculate the mean of the uncertainty Bellman. bdgwx can’t seem to grasp the difference, or why that calculation is pertinent, and evidently neither can you.

Part of the reason, I suspect, is that neither you nor bdgwx had the grace to read and understand the paper before launching into criticism.

Reply to  Pat Frank
July 1, 2023 11:56 am

“The point of eqns. 5 and 6 is to calculate the mean of the uncertainty Bellman.”

In that case I would have no objection. The mean of the uncertainty is obviously the uncertainty of the instrument.

But then you go on to use these means of the uncertainties to calculate the uncertainty of an anomaly. Specifically using them as the monthly and 30-year uncertainties. I don’t see how this makes sense if the values are the average uncertainty.

Then in section 4.4 you are using the same values to calculate the uncertainty of global annual temperatures and anomalies.

Reply to  Bellman
July 1, 2023 1:31 pm

I don’t see how this makes sense if the values are the average uncertainty.

Does your inability to see the sense of it mean it’s not sensible, or does it mean you should spend time to understand the analytical logic?

Why would anyone think the resolution limit of the LiG thermometer should not condition a global mean annual LiG temperature or the annual LiG temperature anomaly?

Reply to  Pat Frank
July 1, 2023 2:06 pm

Don’t you know you can determine the width of an elemental particle if you just take enough measurements with a yardstick?

Reply to  Tim Gorman
July 1, 2023 2:56 pm

If by “you” you mean “me” I definitely don’t think you can do that. But if you think it’s possible explain how?

If you want to use averaging to improve resolution you need to ensure the variation in the thing measured is bigger than the resolution of your device. That’s why measuring the same thing multiple times with the same instrument will not necessarily improve the uncertainty, but measuring different things will have a better chance.

Reply to  Bellman
July 1, 2023 3:16 pm

That’s why measuring the same thing multiple times with the same instrument will not necessarily improve the uncertainty, but measuring different things will have a better chance.

HAHAHAHAHAHAAH — do you really believe this?

Reply to  karlomonte
July 1, 2023 3:26 pm

Yes. And your hysteria isn’t a persuasive argument against it.

Reply to  Bellman
July 2, 2023 5:24 am

If you want to use averaging to improve resolution you need to ensure the variation in the thing measured is bigger than the resolution of your device.”

Did you put in even one second of thought about this before you posted it?

How does the thing being measured vary? A measurand is a measurand. What varies is the measurement, not the measurand. At least as long as the environment during the measurements remains the same.

Once again, you somehow believe that single measurements of multiple things (for if the measurand changes then you *are* measuring a different thing) can increase resolution. Resolution is a function of the measuring device, not of the measurand.

You *still* don’t have a grasp of physical reality, do you?

Reply to  Tim Gorman
July 2, 2023 7:19 am

“Did you put in even one second of thought about this before you posted it?”

Probably not. It’s a waste of time given the quality of the innumerable comments you demand I respond to every hour.

I have however given the subject quite some thought over the years, and I stand by everything I said.

“How does the thing being measured vary? ”

I could have been clearer there however. I should have said the measurements vary. This could either be because there is random error each time you measure the same thing, or because you are measuring different things to determine, say, an average.

“Once again, you somehow believe that single measurements of multiple things …. can increase resolution”

I do believe that. Thanks for noticing. I’ve been trying to explain this for the last few years, but it probably didn’t register as you were too busy obsessing over stud walls and lawn mowers.

If you weren’t so convinced you are right about everything and so anyone who disagrees is wrong, you might have taken some time to understand why I believe that. You might have then been in a better position to explain why you think I’m wrong, rather than just make your usual arguments by assertion and ad hominems.

Reply to  Bellman
July 2, 2023 7:48 am

Stop whining.

Reply to  karlomonte
July 2, 2023 8:06 am

A good point well made. How about I stop whining when you start thinking.

Reply to  Bellman
July 2, 2023 9:29 am

Clown. Go try to read the paper.

Reply to  Bellman
July 2, 2023 7:57 am

Here is why you are wrong.

I’ve got 1000 8′ boards. I measured each one of them to the nearest 1″. I can sell them to you and I’ll guarantee they are all 8′ ±0.005″. IOW 8.0004 feet long.

Reply to  Jim Gorman
July 2, 2023 8:18 am

Not sure what point that is making. You’re guaranteeing the length of each board, not the average.

And, relevant to my point: if all your measurements are within half an inch of 8′, all will be rounded to exactly 8′. The same if you measured the same board 1000 times. No amount of averaging will help you in that case.

If on the other hand the boards vary in length by a few inches, your rounding errors will all differ, therefore some cancellation of errors is likely and the average can now be known to a resolution better than an inch.

You could easily test this yourself. Take a load of boards of various lengths. Measure each one with a precise instrument, then round each measurement to the nearest inch. Compare the averages of the 1000 boards using each of the measurements. How different are the two averages? Do they agree to less than an inch?
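
A minimal sketch, in Python, of the test described above; the board population (normal, mean 96″, SD 2″) and the 1″ rounding are assumed values chosen only for illustration, not anything from the thread.

import random

random.seed(1)

# assumed for illustration: 1000 boards whose true lengths vary by a couple of inches around 96"
true_lengths = [random.gauss(96.0, 2.0) for _ in range(1000)]

# each board measured once, with the reading rounded to the nearest inch
rounded = [round(x) for x in true_lengths]

true_mean = sum(true_lengths) / len(true_lengths)
rounded_mean = sum(rounded) / len(rounded)

print(true_mean, rounded_mean, abs(true_mean - rounded_mean))
# the two averages typically differ by far less than the 1" resolution of the rounded readings

Whether the small difference between the two averages says anything about any individual board is, of course, exactly what is in dispute in this thread.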

Reply to  Bellman
July 2, 2023 8:38 am

You missed the whole point as usual. Would you buy the lot of boards if you needed boards that were exactly 8′? How many are shorter than 8′ by an inch? How many are longer?

Reply to  Jim Gorman
July 2, 2023 9:22 am

Then just what is your point?

I, and I assume Pat Frank, are talking about the uncertainty of a global average, and you are talking about the uncertainty of an individual board. The fact you think they are the same goes someway to explaining why you have such a hard time understanding how the uncertainty of an average can be different to the uncertainty of an individual measurement.

If I want to know the likelihood of an individual board being below a specific value then I need to know the deviation of the individual boards, i.e. the standard deviation of the boards. If I want to know how likely the mean of all the boards is to be less than a specific value I need to know the uncertainty of the mean, i.e. the standard error of the mean.

If your argument is I probably don’t want to know anything about the mean length of a board, you might be correct. But that just illustrates the problem with your example.

Reply to  Bellman
July 2, 2023 12:38 pm

Still you pathetic peeps have not refuted a single word of Pat’s paper.

Nothing!

Reply to  karlomonte
July 2, 2023 5:30 pm

Questioning, even snarky questions, is not refuting. Refutation takes knowledge, and the ability to show where faults occur. All you see here are veiled accusations that something is wrong because it violates a tenet of faith. Showing references that support assertions would go far!

Reply to  Jim Gorman
July 2, 2023 5:55 pm

I’m not trying to refute the paper. I’ll leave that to people with more expertise. I’m just trying to resolve what appears to be a major misdirection.

All you see here are veiled accusations that something is wrong because it violates a tenet of faith.”

It’s called skepticism. You read something, see things that don’t make sense and ask questions about them. The fact that you and others get so defensive over this one paper, and insist that no-one who isn’t an expert is allowed to ask questions or express doubts, feels more like an article of faith than what we are doing.

I think it’s an interesting juxtaposition of the attitude here to this paper, compared with the usual skepticism directed at any paper you consider to be on the other side. It was only a couple of days ago Pat Frank was saying anyone who wrote a paper showing too much warming should be thrown in jail.

Reply to  Bellman
July 2, 2023 9:47 pm

you and others … insist that no-one who isn’t an expert is allowed to ask questions or express doubts,

That’s a lie, Bellman.

It was only a couple of days ago Pat Frank was saying anyone who wrote a paper showing too much warming should be thrown in jail.

Another lie.

You appear to exemplify your own accusation, Bellman.

You and bdgwx are not skeptics. Skeptics possess a critical understanding of the question. You and bdgwx evidence no such knowledge.

Reply to  Pat Frank
July 3, 2023 4:53 am

That’s a lie, Bellman.

Maybe an exaggeration.

If you don’t understand instrumental resolution, you shouldn’t be commenting.

Another lie.”

I agree with you, Tom. Pace Rich Davis, but to my mind we need to be juridically effective.

File charges of criminal negligence and malfeasance against the narrative purveyors for the evident harms they have caused with their global warming pseudo-science.

https://wattsupwiththat.com/2023/06/24/global-warming-has-begun-expert-tells-senate-1988-exaggerations-vs-today/#comment-3738956

This was in response to Tom Abbot saying

I personally think Hansen ought to be put in jail for his lies along with a couple of dozen of his fellow colleagues/liars.

Reply to  Bellman
July 3, 2023 5:39 am

Bellman, you wrote, “Pat Frank was saying anyone who wrote a paper showing too much warming should be thrown in jail.

You misrepresented the difference between writing a legitimate paper and criminal malfeasance.

Your lie is the attempt to obfuscate that difference.

Hansen’s malfeasance has led to tens of thousands of premature deaths and the theft of trillions.

bdgwx
Reply to  Pat Frank
July 3, 2023 1:07 pm

Do you think Hansen should be convicted of a crime? Anyone else?

Reply to  bdgwx
July 3, 2023 6:14 pm

Did Hansen intentionally leave uncertainty bounds off the air temperature projections he presented in his 1988 testimony?

Do you think he consciously knew that his testimony before the Senate Committee that, “Global warming has reached a level such that we can ascribe with a high degree of confidence a cause and effect relationship between the greenhouse effect and the observed warming.” was scientifically indefensible and therefore false?

bdgwx
Reply to  Pat Frank
July 4, 2023 11:42 am

I was just curious if Hansen should be prosecuted. Do you think he should? What about Dr. Spencer, Dr. Christy, and others?

Reply to  bdgwx
July 4, 2023 12:01 pm

An infamous loaded question from bee’s wax — hoping to play another fun round of Stump the Professor.

His problem is that he doesn’t realize that he is the one who is stumped.

Reply to  bdgwx
July 4, 2023 12:39 pm

You know what, engineers and architects are put in jail sometimes and certainly endure civil penalties for not using due diligence in their analysis of data. That means sometimes saying data is not fit for purpose and starting over. Most scientists do the same thing even though they may not suffer the indignity of being found guilty of negligence. I’ll bet Dr. Frank can tell about experiments that he has thrown out because of faulty data or calculations.

The hysteria that Hansen initiated, causing untold trillions of dollars to be spent and making poor people poorer, may very well be found to reflect a lack of due diligence in his use of data.

If this turns out to be the case should he suffer some consequences?

Reply to  Bellman
July 3, 2023 5:46 am

If the thing being measured is smaller than the detection limit of all the instruments, then yes, it’s impossible to see it so it won’t show up in an average.

You hid your ‘yup’ in the middle of your equivocation.

Reply to  Pat Frank
July 3, 2023 5:35 am

They are both trolls at heart. No understanding of the subject but a lot to say about it.

Reply to  Jim Gorman
July 2, 2023 8:25 pm

All they do is rant about RMS and “random” then push the downvote button.

Success achieved!

Reply to  Bellman
July 2, 2023 12:40 pm

Great, a word salad answer. I gave you a complete description of the product. Even the mean was 8′. Yet you couldn’t answer with a simple yes or no.

Maybe because you know the stated uncertainty was unreasonable given the uncertainty in each individual measurement.

Reply to  Jim Gorman
July 2, 2023 2:23 pm

You keep coming up with these board-obsessed analogies, and I keep explaining why they miss the point. But if you want a detailed answer:

I’ve got 1000 8′ boards. I measured each one of them to the nearest 1″.

Are you saying they are 8′ or that you don’t know what length they are but they all measured 8′ when you rounded to the nearest inch? I’ll assume the latter, and you have no prior so the actual length could be anything, so the best we can assume is that none of the 1000 was less than 7′ 11.5″ or greater than 8′ 0.5″.

So maybe the person you bought them from is honest and they are all close to 8′, or maybe he’s sold you 1000 boards that are all slightly larger than 7′ 11.5″, knowing you can’t afford a better tape measure.

I can sell them to you and I’ll guarantee they are all 8′ ±0.005″. IOW 8.0004 feet long.

You can’t make that guarantee. Maybe you’re the one trying to con me. But even if you are honest and the average length of a board is 8′, you’ve already told me the best you can do is guarantee none are shorter than 7′ 11.5″.

Would you buy the lot of boards if you needed boards that were exactly 8′?

Obviously not.

How many are shorter than 8′ by an inch? How many are longer?

No idea. As I said they could all be shorter than 8′ or all longer. If they are truly random lengths then you might be able to assume there’s an equal probability of each being either shorter or longer than 8′.

Now – do you have any actual point to make about how to determine the average length or how this is applicable to the uncertainty of a global temperature average?

Reply to  Bellman
July 2, 2023 4:18 pm

Are you saying they are 8′ or that you don;t know what length they are but they all measured 8′ when you rounded to the nearest inch? “

You are exhibiting your total ignorance of uncertainty for all to see! The 1″ is an UNCERTAINTY. It is a function of the measuring device, not of the boards themselves! Included in that uncertainty is the resolution capability of the measuring device.

All you have is a bunch of sophistry and an inability to read simple English to offer as a rebuttal!

“You can’t make that guarantee.”

Then how can you make it for temperatures?

Reply to  Tim Gorman
July 2, 2023 4:40 pm

The 1″ is an UNCERTAINTY. It is a function of the measuring device, not of the boards themselves!

And here we go again. A pointless example with vague questions that has no relevance to the issue under discussion. But now if I ask for clarification, I’m the ignorant one.

I know. But the question stated the boards were all 8′ long, and I’m trying to figure out if you know they are all exactly 8′ long, or if they are varying within an inch of 8′.

“Then how can you make it for temperatures?”

I’m not. No-one is. Absolutely nobody with the intelligence they were born with, which might exclude some here, thinks that if you say the average has an uncertainty of ±0.1, then it means that all temperatures are guaranteed to be within 0.1°C of the average.

Reply to  Bellman
July 2, 2023 5:13 pm

You can’t even make sense of simple English.

 I measured each one of them to the nearest 1″”

That’s 8′ +/- 1″.

It’s just one more indication that you don’t have even a basic understanding of uncertainty – not a clue!

Reply to  Bellman
July 2, 2023 4:15 pm

 understanding how the uncertainty of an average can be different to the uncertainty of an individual measurement.”

The RESOLUTION of the average can’t be any greater than that of the individual members. If your resolution is the limiting factor then the average will have the same limiting factor!

Reply to  Tim Gorman
July 2, 2023 4:43 pm

And I disagree. The problem is all you have done these last two years is angrily shout the same slogans. If they didn’t work the first 1000 times, why do you think they will work now? Maybe you need to come up with an example that will demonstrate what you are saying rather than just keep asserting it.

Reply to  Bellman
July 2, 2023 6:00 pm

And I disagree.

In which case you suppose data can appear out of thin air.

Reply to  Pat Frank
July 2, 2023 6:06 pm

Nope.

Reply to  Bellman
July 2, 2023 9:40 pm

Yup.

You disagree that “The RESOLUTION of the average can’t be any greater than that of the individual members

That disagreement amounts to an assertion that data can appear out of thin air.

Instruments cannot produce physically real data intervals smaller than their detection limit.

Reply to  Pat Frank
July 3, 2023 4:37 am

Still nope. The resolution of an average can be greater than that of the individual members, but that doesn’t mean the data was pulled out of thin air. The information comes from the data.

Instruments cannot produce physically real data intervals smaller than their detection limit.

Instruments can’t but the average of many instruments can.

I’m assuming by detection limit you actually mean resolution. If the thing being measured is smaller than the detection limit of all the instruments, then yes, it’s impossible to see it so it won’t show up in an average. And for resolution, as I keep saying, if the variance in the data is less than the resolution, averaging is of no use.

Reply to  Bellman
July 3, 2023 5:48 am

If the thing being measured is smaller than the detection limit of all the instruments, then yes, it’s impossible to see it so it won’t show up in an average.

You hid your ‘yup’ in the middle of your equivocation.

Reply to  Pat Frank
July 3, 2023 6:24 am

Do you think the variation in temperatures across the globe is smaller than the resolution of the thermometers?

Reply to  Bellman
July 3, 2023 7:21 am

When the average global temp anomaly is given out to the hundredths digit when the resolution of the instruments only justifies values in the units digit or perhaps in the tenths digit then WHO KNOWS for certain?

You *still* don’t grasp the basics of uncertainty or of significant digits.

Reply to  Bellman
July 3, 2023 11:26 am

irrelevant.

Reply to  Pat Frank
July 3, 2023 1:26 pm

Why?

Reply to  Bellman
July 3, 2023 2:01 pm

Because.

Reply to  karlomonte
July 3, 2023 3:33 pm

Thought so.

Reply to  Bellman
July 3, 2023 4:53 pm

No, you don’t think. This is your problem.

Reply to  karlomonte
July 3, 2023 7:30 pm

I’ll let you have the last word. Otherwise this will go on forever.

Reply to  Bellman
July 3, 2023 5:53 pm

Because the topic is instrumental resolution, not the variation in temperatures across the globe.

Reply to  Pat Frank
July 4, 2023 4:34 am

That is an impossibility in the alternate universe they live in. *ALL* uncertainty cancels and doesn’t have to be considered in their universe. Trend lines only have to consider stated values because there is no uncertainty in the stated values – it all cancels out.

Reply to  Pat Frank
July 4, 2023 5:38 am

To sum up this thread:

My claim is that if measurements vary by more than the resolution, then it is possible to reduce the resolution in the average. I also said that this isn’t the case if the variation is less than the resolution.

Pat Frank tried to claim that the second point meant I was agreeing with him. I pointed out that in the real world temperatures do indeed vary by a lot more than the instrument resolution (they would serve no purpose if that wasn’t the case.)

To which PF claimed it didn’t matter because his topic was instrument resolution and not the variation in temperatures across the globe.

That seems to be the problem all along. Pat Frank wants to give the impression he’s talking about the uncertainty in the global average, but in reality all he’s ever talking about is the uncertainty of individual measurements.

Reply to  Bellman
July 4, 2023 6:05 am

My claim is that if measurements vary by more than the resolution, then it is possible to reduce the resolution in the average.

And as always, you spread bullshite propaganda, trying to keep your political agenda alive.

“He’s dead, Jim.”

Reply to  Bellman
July 4, 2023 8:33 am

My claim is that if measurements vary by more than the resolution, then it is possible to reduce the resolution in the average.

Wrong. Complete misportrayal. Instrumental resolution is a constant lower limit of uncertainty.

“…all he’s ever talking about is the uncertainty of individual measurements.

About resolution, — the constant instrumental lower limit of uncertainty (not error), and about calibration — the uncertainty due to field inaccuracy of the instruments.

Both of which condition every single measurement and condition the global average temperature.

You wrote of problems, Bellman. Your problem, very evidently, is that you don’t understand the topic, do not recognize your ignorance, and fail to apply modesty thereto.

Instead, you impose your confused view as though it is a mistake made by others. And a partisan fervor for one outcome prevents you ever correcting yourself.

Reply to  Pat Frank
July 4, 2023 11:03 am

Wrong. Complete misportrayal. Instrumental resolution is a constant lower limit of uncertainty.

Then provide some evidence, reason, demonstration or proof. All I get is people insisting it must be true.

Reply to  Bellman
July 4, 2023 11:46 am

You whine about people giving you detailed explanations, and whine about getting short rebuffs — is there a length of answer to which you would pay attention?

Reply to  Bellman
July 4, 2023 5:29 pm

Systematic uncertainty can’t be reduced through statistical means. A common systematic uncertainty that exists because of the design of a measuring apparatus can’t be eliminated through statistics, it doesn’t “cancel out”, and it conditions every single measurement taken by that apparatus.

It’s not just “people” asserting this. Recognized experts like Taylor, Bevington, and Possolo assert this as well. The fact that *you* don’t believe it to be true is *your* issue, not an issue of “people”.

You’ve been given the references and quotes over and over for two years. And you still refuse to learn.

Reply to  Tim Gorman
July 4, 2023 6:36 pm

Systematic uncertainty can’t be reduced through statistical means.

I’m glad you agree.

But we are not talking about a systematic error here. We are talking about instrumental resolution.

Reply to  Bellman
July 5, 2023 6:45 am

Your cognitive dissonance never ceases to amaze me. Uncertainty due to the physical constraints of the measuring device *is* systematic uncertainty. It colors every single measurement made with the device in the same manner every single time. Resolution *IS* a physical constraint of the measurement device!

Do you *ever* stop to think before making such idiotic statements?

Reply to  Tim Gorman
July 5, 2023 9:13 am

Bellman and bdgwx worship at the altar of JCGM. Let’s see what 100:2008 says about instrumental resolution.

3.3.1 … Thus the uncertainty of the result of a measurement should not be confused with the remaining unknown error.

3.3.2 In practice, there are many possible sources of uncertainty in a measurement, including:
f) finite instrument resolution or discrimination threshold;

F.2.2.1 The resolution of a digital indication
One source of uncertainty of a digital instrument is the resolution of its indicating device. For example, even if the repeated indications were all identical, the uncertainty of the measurement attributable to repeatability would not be zero, for there is a range of input signals to the instrument spanning a known interval that would give the same indication. If the resolution of the indicating device is δx, the value of the stimulus that produces a given indication X can lie with equal probability anywhere in the interval X − δx/2 to X + δx/2. The stimulus is thus described by a rectangular probability distribution (see 4.3.7 and 4.4.5) of width δx with variance u² = (δx)²/12, implying a standard uncertainty of u = 0,29δx for any indication.

The division by 12 involves the assumption of a triangular distribution — a guess that the physically true value lies closer to the middle of the ignorance width than to the sides.

This assumption is unjustifiable, in part because it falsely implies that the researcher knows the probability regions within the ignorance width.

Physical scientists and engineers are typically conservative about uncertainty; very reluctant to assume what is not in evidence, such as claiming precocious knowledge of an unknown.

The adherence to the standard of knowledge in physical science and engineering (the courage to admit ignorance) requires division by 3 (a box ignorance width) rather than 12, in turn requiring a standard uncertainty of u = 0.58δx for the uncertainty due to resolution (the detection limit).

And there’s no getting inside that 0.58δx.
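
For reference, both numbers quoted above come from the same rectangular-distribution variance and differ only in what is taken as the width of the ignorance interval; a worked restatement follows, with no endorsement of either convention intended.

u = \sqrt{\frac{(\delta x)^2}{12}} = \frac{\delta x}{\sqrt{12}} \approx 0.29\,\delta x   (width taken as δx, as in F.2.2.1)

u = \sqrt{\frac{(2\delta x)^2}{12}} = \frac{\delta x}{\sqrt{3}} \approx 0.58\,\delta x   (interval taken as ±δx, i.e. full width 2δx)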

Reply to  Pat Frank
July 5, 2023 11:43 am

Bellman and bdgwx worship at the altar of JCGM.

Within the space of a few hours I’m accused of being a heretic because I said I disagree with some point in the GUM, and then I’m accused of worshipping at its altar.

The only reason I even heard of the GUM was because I was told to follow equation 10 in order to understand how uncertainty inevitably increased with sample size. To my mind there’s not much to disagree with in the GUM but I find a lot of it confusingly stated, and it feels a lot like the proverbial horse designed by a committee.

The division by 12 involves the assumption of a triangular distribution

What?! No, they’ve just told you “the value of the stimulus that produces a given indication X can lie with equal probability anywhere in the interval X − δx/2 to X + δx/2. ” and that this means “The stimulus is thus described by a rectangular probability distribution

That’s why they divide by 12. For a uniform distribution

\sigma = \sqrt{\frac{(b - a)^2}{12}}
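
A quick Monte Carlo check of that formula, in Python, with an assumed interval of width 1 (so b − a = 1):

import random, math

random.seed(0)
width = 1.0
draws = [random.uniform(-width / 2, width / 2) for _ in range(200000)]

mean = sum(draws) / len(draws)
sd = math.sqrt(sum((x - mean) ** 2 for x in draws) / len(draws))

print(sd, width / math.sqrt(12))   # both ≈ 0.289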

Reply to  Bellman
July 5, 2023 12:22 pm

To my mind there’s not much to disagree with in the GUM but I find a lot of it confusingly stated, and it feels a lot like the proverbial horse designed by a committee.”

That’s because you have no basic knowledge of what they are talking about.

You still can’t read. PF: “The adherence to the standard of knowledge in physical science and engineering (the courage to admit ignorance) requires division by 3 (a box ignorance width) rather than 12″ (bolding mine, tpg)

Reply to  Tim Gorman
July 5, 2023 7:12 pm

You still can’t read.”

He said they were using a triangular distribution when they state they were using a uniform one. He claimed this was because they divided by 12, because he didn’t understand the equation for standard deviation. Then he posted a couple of irrelevant sections from the GUM that just confirmed the division by 12 was correct. But sure, I’m the one who can’t read.

If he means by that a p-box, I’m not sure why he would think there is any epistemic uncertainty in a value lying randomly between two other values. Or what he’s dividing by 3 to get the width.

And presumably you will be yelling at him for claiming the GUM is wrong, and insisting he writes to NIST to explain why that section is wrong.

Reply to  Bellman
July 5, 2023 1:46 pm

JCGM 100:2008 4.3.9 Note 1: “By comparison, the variance of a symmetric rectangular distribution of half-width a is a²/3 [Equation (7)] and that of a symmetric triangular distribution of half-width a is a²/6 [Equation (9b)].”

H.6.3.3: “Also let the maximum difference be described by a triangular probability distribution about the average value xz′/2 (on the likely assumption that values near the central value are more probable than extreme values — see 4.3.9)”

Search for triangular.

Reply to  Pat Frank
July 5, 2023 4:16 pm

the variance of a symmetric rectangular distribution of half-width a is a²/3 [Equation (7)]”

Which is exactly what I said.

In F.2.2.1 δx is the resolution of the instrument, so its half-width is δx / 2. Hence the variance is

(δx / 2)²/3 = (δx)²/(2² × 3) = (δx)² / 12

If you wanted a triangular distribution you would end up with (δx)² / 24.

Also let the maximum difference be described by a triangular probability distribution”

Is from H.6.3.3: Uncertainty of the correction due to variations in the hardness of the transfer-standard block.

It has nothing to do with resolution. They explain why they assumed a triangular distribution, and the variance is given by

U²(Δb) = (xz’)² / 24

As I said, dividing by 24.

Reply to  Bellman
July 5, 2023 10:31 pm

The full discussion is in 4.3.7 and 4.3.9.

If the half-width is δx/2, the variance is [(δx/2)₊ − (δx/2)₋]²/12 = [2δx/2]²/12 = [δx]²/12 for a symmetric (box) distribution and σ =±δx/3.5

F 2.2.1 is nevertheless misleading. It says, “If the resolution of the indicating device is δx, …”, leading to your “interval X − δx/2 to X + δx/2.”

But the standard notation for a single instrument is resolution = ±δx. The notation in F 2.2.1 introduces a “/2,” which increases the divisor by way of a formalism artifact.

More properly, [(δx)₊ − (δx)₋]²/12 = [2δx]²/12 = 4δx²/12 = δx²/3.

H.6.3.3 eqn. H.37 discusses the special case of a difference resolution of test and calibration instruments ∆_b, not the resolution of a single instrument.

Reply to  Pat Frank
July 5, 2023 11:46 am

The first thing a smart lawyer would ask, when dealing with how a given measurement was obtained, is why you divided by 3 instead of 12, thus allowing the use of cheaper material than was needed.

I showed the Instagram for a reason. After checking the link, I couldn’t get video and audio at the same time. The real points were that the valve guide seating requirement is 25/32″, not 26/32″ or 24/32″. The valve seats had 30° and 60° slopes. Not 29° and 61°. These are what machinists deal with every day. By the way, 25/32″ = 0.78125″. That’s having resolution out to the 1/100,000ths place. One isn’t going to get that by using a ruler with 1/16″ markings. Lest anyone wonder why such an odd specification, the manufacturer could have said 12/16″ = 3/4″ or 13/16″ = 0.8125″, but they chose something in between for a reason. They needed a precise depth for longevity. Consequently, very good resolution tools are needed to achieve the requirement.

Reply to  Tim Gorman
July 5, 2023 11:50 am

Uncertainty due to the physical constraints of the measuring device *is* systematic uncertainty.

Argument by assertion carries little weight. If say the physical constraint is that values are rounded up or down to the nearest unit, and there is no reason to suppose a systematic propensity for the values to be in any specific part of the uncertainty window, then it’s as likely that a value will be rounded down as up. Hence it is random.

Reply to  Bellman
July 5, 2023 12:05 pm

Have you patented the Bellmanian yardstick-micrometer yet? Better get on it, don’t want the “predatory chinese” to beat you to it.

Reply to  karlomonte
July 5, 2023 12:23 pm

Is that the same method as saying we can take the 2000 crap jokes you make each day and average them out to a coherent point? Because they have about the same likelihood of working.

Reply to  Bellman
July 5, 2023 1:06 pm

Pat Frank as well as Tim have shown you multiple times why resolution is a hard limit to what you can know, but it goes against what you “feel” so they can’t be right. And in peak irony you then go on to accuse them of argument by assertion!

Do you see the problem?

As always, attempting to educate you is proven a waste of time; now you whine about me pointing out your pseudoscience.

Reply to  karlomonte
July 5, 2023 2:31 pm

I have a hard time even crediting it to pseudoscience. It’s not any kind of science. It’s religious dogma. It’s got to be written on a tablet somewhere that only the CAGW cult knows about that says “all uncertainty is random, Gaussian, and cancels”.

Reply to  Tim Gorman
July 5, 2023 4:14 pm

Thinking back, this started after Pat’s model uncertainty paper came out, when Stokes waved his hand and dismissed it by proclaiming “…the error can’t be that large…”. His acolytes followed suit with the same line, whereby they were told that error isn’t uncertainty, which has to be propagated and all.

They kicked and screamed, claiming that subtracting a baseline removes all “bias”, thus the integrity of the anomalies was intact. Desperate to show that averaging removes all warts, they latched onto sigma/root-N as the path to the promised land, which of course included stuffing the average formula into GUM 10. bdgwx rants about this unceasingly.

Overnight they became instant experts in uncertainty, and to this day still refuse to consider they don’t know squat. So yes you are right, the tiny anomalies are in front of the mule, and implicitly, even if they deny it, everything has to be Gaussian. But their religion is pseudoscientific in the sense that they know the answer prior to analysis (although they aren’t capable of doing any analysis beyond a linear least-squares fit, and the computer does this for them).

Reply to  karlomonte
July 5, 2023 4:46 pm

Nice. bdgwx doesn’t realize that when he writes

q_avg = Σx_i/N

he is finding an average.

when he then tries to say the uncertainty of the average is

u(q_avg) = sqrt[ (Σu(x_i)/N )^2 ] he is finding the AVERAGE UNCERTAINTY, not the uncertainty of the average. They are not the same.

u(q_avg) = Σu(x_i) + u(N) by the rules of propagation. (assuming direct addition, of course. but quadrature won’t make any difference, u(N) still drops out)

With them it’s all pseudo-science, as you say. What a joke.
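
For anyone following the algebra, a minimal Python sketch of the two quantities contrasted above, written exactly as they appear in this comment; the per-measurement uncertainties are assumed values, and the sketch does not settle which expression is the right propagation for a mean.

import math

us = [0.5, 0.5, 0.5, 0.5]   # assumed per-measurement uncertainties, °C
N = len(us)

avg_uncertainty = math.sqrt((sum(us) / N) ** 2)   # sqrt[(Σu(x_i)/N)²], i.e. the average uncertainty
direct_sum = sum(us) + 0.0                        # Σu(x_i) + u(N), with u(N) = 0 for an exact count

print(avg_uncertainty, direct_sum)                # 0.5 and 2.0 for these assumed values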

Reply to  karlomonte
July 5, 2023 8:36 pm

I’ve done some digging into bdgwx’s “functional relationship”.

1) Temps are not determined via multiple measurements of fundamental quantities that are used to calculate a combined value. Temperature is a fundamental SI base quantity that is measured directly. Consequently, fudging up a “functional relationship” consisting of an average of temperatures is not finding a temperature, it is finding a mean of several temperatures.

2) bdgwx’s little trick of a so called “functional relationship” is nothing more than finding a mean of a series of numbers by breaking them into small groups and then averaging the group values.

This does not justify dividing the uncertainty by “n”. They still add according to a sum (or by quadrature if justified).

Example:

1, 2, 3, 4, 5, 6, 7, 8, 9, 10. Mean = 55/10 = 5.5

m1 = (1 + 2) / 2 = 1.5
m2 = (3 + 4) / 2 = 3.5
m3 = (5 + 6 ) / 2 = 5.5
m4 = (7 + 8) / 2 = 7.5
m5 = (9 + 10) /2 = 9.5

(m1 + m2 + m3 +m4 + m5) / 5 = 5.5

11 + 12 + 13 + 14 + 15 + 16 + 17 + 18 + 19 +20. Mean = 15.5

m1 = (11 + 12) / 2 = 11.5
m2 = (13 + 14) / 2 = 13.5
m3 = (15 + 16 ) / 2 = 15.5
m4 = (17 + 18) / 2 = 17.5
m5 = (19 + 20) /2 = 19.5

(m1 + m2 + m3 +m4 + m5) / 5 = 15.5
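
A short Python check of the grouping example above, using the same numbers as the comment:

xs = list(range(1, 11))                      # 1..10

overall_mean = sum(xs) / len(xs)             # 5.5

# average each consecutive pair, then average the pair means
pair_means = [(xs[i] + xs[i + 1]) / 2 for i in range(0, len(xs), 2)]
mean_of_means = sum(pair_means) / len(pair_means)

print(overall_mean, mean_of_means)           # 5.5 5.5, identical because every group has the same size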

Reply to  karlomonte
July 5, 2023 4:45 pm

Pat Frank as well as Tim have shown you multiple times why resolution is a hard limit to what you can know

Have they? Or have they just repeated the claim ad nauseam?

but it goes against what you “feel” so they can’t be right

No. It goes against what I understand of the maths, and what I can demonstrate.

Do you see the problem?

Yes, a few people here believe that the resolution of an average cannot be better than the resolution of the instruments. They believe it so strongly that they can’t accept the possibility they are wrong even when it is demonstrated to be wrong.

They believe so strongly that any argument against their belief is rejected as pseudoscience. It becomes the classic unfalsifiable hypothesis. Any evidence against it must be wrong because they know it to be impossible.

Reply to  Bellman
July 5, 2023 5:20 pm

As I said, you need to talk with the Forestry Department.

Reply to  Bellman
July 5, 2023 12:13 pm

You do realize that means you don’t know, right? Whoops, unknown! That doesn’t help in extending resolution dude.

Reply to  Jim Gorman
July 5, 2023 12:19 pm

You do realize that means you don’t know, right?

Right, absolutely.

That’s why it’s called uncertainty. Uncertainty means you don’t know. Not knowing means you are uncertain.

For some reason people seem to think that not knowing / being uncertain, means it’s impossible to say anything about the scope of not knowing. But that’s the whole point of all this uncertainty analysis. To put limits and conditions on what you don’t know.

Saying I don’t know if this coin will come up heads if I toss it, is not the same as saying I don’t know if I toss the coin 100 times it will come up heads each time. They are both “don’t knows”, but not the same “don’t know”.

Reply to  Bellman
July 5, 2023 12:43 pm

But that’s the whole point of all this uncertainty analysis. To put limits and conditions on what you don’t know.”

You just don’t get it at all. Uncertainty puts limits on WHAT YOU KNOW, not on what you don’t know!

A coin flip is not a measurement using a measurement device. It is not even a counting situation to determine how many “something” happens in an interval.

Why do you compare taking a measurement to a coin flip?

Reply to  Tim Gorman
July 5, 2023 1:07 pm

He can never understand because it goes against what he wants to be the truth. Ergo pseudoscience.

Reply to  Tim Gorman
July 5, 2023 5:04 pm

Uncertainty puts limits on WHAT YOU KNOW, not on what you don’t know!

Two sides of the same coin.

A coin flip is not a measurement using a measurement device.

It’s an illustration of probability.

Why do you compare taking a measurement to a coin flip?

We were talking about uncertainty caused by rounding. I pointed out that in most circumstances there’s the same chance (0.5) of rounding up or down, say by 1. With 100 measurements it’s as certain as it could be that not all readings will be rounded up, and a lot more likely that close to 50 will be up and 50 down. This inevitably means the result of the average will be closer to zero than to plus or minus 1.
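
A minimal sketch of that rounding scenario in Python; the spread of the assumed true values (well beyond the rounding interval) is the condition Bellman has been stating throughout, not something taken from the paper.

import random

random.seed(2)

# assumed: 100 true values spread over several units, so rounding up and down are equally likely
true_vals = [random.uniform(10.0, 20.0) for _ in range(100)]
errors = [round(v) - v for v in true_vals]        # individual rounding errors, each within ±0.5

mean_error = sum(errors) / len(errors)
print(max(abs(e) for e in errors), abs(mean_error))
# individual errors run up to about 0.5; the mean error is typically a few hundredths

Whether that condition, and the independence it assumes, actually holds for field thermometers is the point Pat Frank and the Gormans contest.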

Reply to  Bellman
July 5, 2023 1:49 pm

I am reminded of sophomore year fall semester, EE201: the professor handing out the results of about the 2nd or 3rd exam. He quipped “…and some of you need to go talk to the Forestry Dept…”; at the time I thought he was having a bit of fun at the expense of those who had crashed and burned, but he was right. They were never going to get it, and they needed to go switch majors before he would have to fail them.

This is you — you will never get this stuff, and you need to switch majors.

Reply to  Bellman
July 5, 2023 1:58 pm

But that also means that whatever distribution you think you know won’t matter one iota. What will happen next is not under your control, it is unknown, i.e., uncertain.

Reply to  Bellman
July 5, 2023 12:24 pm

Rounding is *NOT* systematic uncertainty. Can you get *anything* right?

Can you even state what the rule for rounding *is*?

Reply to  Tim Gorman
July 5, 2023 1:11 pm

I am reminded of the Dan Aykroyd fish malt blender.

Reply to  Bellman
July 5, 2023 1:50 pm

If say the physical constraint is that values are rounded up or down…

Rounding is not a physical constraint.

Reply to  Pat Frank
July 5, 2023 4:31 pm

Then what is your argument? This thread started because you said

Wrong. Complete misportrayal. Instrumental resolution is a constant lower limit of uncertainty.

and then Tim insisted

Systematic uncertainty can’t be reduced through statistical means.

and I said

But we are not talking about a systematic error here. We are talking about instrumental resolution.

To which Tim replied

Your cognitive dissonance never ceases to amaze me. Uncertainty due to the physical constraints of the measuring device *is* systematic uncertainty.

At which point I returned to rounding as a result of instrument resolution.

So is uncertainty caused by rounding to the limit of a display systematic uncertainty or not?

Reply to  Bellman
July 5, 2023 4:53 pm

Pat already answered you on this. Your short memory is failing again.

Rounding is not uncertainty. Rounding is a function of resolution, i.e. what’s the next value after the 3 1/2 digit display on a voltmeter. Resolution *is* uncertainty – systematic uncertainty. Resolution is instrumental uncertainty. It affects each and every measurement. But none of this is the uncertainty Pat analyzed. What Pat is looking at is minimum detection limits. That too is systematic uncertainty but it is *not* the same as resolution uncertainty.

Why you can’t understand this is beyond me. It’s readily apparent that you have absolutely ZERO experience in metrology yet you feel you have enough expertise in the subject to become the arbiter of what is right and what is wrong with the statements of all the experts in the field. Unfreakingbelievable.

Reply to  Tim Gorman
July 5, 2023 6:26 pm

Rounding is not uncertainty.

Of course rounding is uncertainty. If you round to a particular mark you can’t know where the value lies within the interval. That’s the whole point of all these SF rules you are so fond of.

Pat even quoted a part of the GUM where they talk about the uncertainty from rounding. F.2.2.1 The resolution of a digital indication “One source of uncertainty of a digital instrument is the resolution of its indicating device.”

Resolution *is* uncertainty

Word games. The uncertainty of resolution caused by a fixed interval is caused by rounding to the nearest point.

systematic uncertainty.

And just saying it doesn’t make it true. Look at the GUM passage. It specifically points out that it’s described by a rectangular probability distribution.

It affects each and every measurement.

You keep saying that as if it means something to you.

It’s readily apparent that you have absolutely ZERO experience in metrology…

Never denied that. But it doesn’t mean I can’t read.

yet you feel you have enough expertise in the subject to become the arbiter of what is right and what is wrong

No, I just have enough understanding of maths and statistics to see when you are wrong.

with the statements of all the experts in the field

I’m not arguing with any experts in the field. But if I was, so what? You’re the one who keeps objecting to arguments from authority.

You seem to have no problem telling all the experts in the field of climate science they are wrong, even though you have no expertise – apart from building stud walls.

Reply to  Bellman
July 5, 2023 6:52 pm

You have not progressed one micron past “the error can’t be that big!”

“experts in the field of climate science” — got a list of these?

Reply to  karlomonte
July 6, 2023 5:05 am

You have not progressed one micron past “the error can’t be that big!”

Not sure when I said that, but it’s a useful smell test. If the errors seem impossibly big it’s a good indication you should look closely at your assumptions and workings. In your case, if you are claiming the average of 10000 thermometers can have an uncertainty of ±50°C, then yes, that’s impossibly big, if an individual thermometer’s uncertainty is ±0.5°C.

You seem to have no problem saying things are wrong because the uncertainties can’t be that small.

Reply to  Bellman
July 6, 2023 5:35 am

Why is it impossibly big? Annual temps certainly vary at least that much at one location. Temps between the SH and NH can have that kind of variance on a daily basis.

If you don’t know the variance of your distribution and chase that variance through all the averaging then how do you know the variance can’t be that big?

Once again, you are stating a “belief” not a fact. A belief founded on cult dogma from the CAGW religion – not on actual reality.

Reply to  Tim Gorman
July 6, 2023 3:15 pm

Annual temps certainty vary at least that much at one location.”

I doubt there are many places that vary by 50°C a year. But as always it’s irrelevant as we are not talking about the uncertainty at one location or even in this case the uncertainty of the range of temperature. All we are discussing is the uncertainty caused by a ±0.5°C. There is no way that magnitude of error could increase 100 fold by averaging thousands of instruments.

If you don’t know the variance of your distribution and chase that variance through all the averaging then how do you know the variance can’t be that big?

I'll assume you no longer want to talk about how measurement uncertainty propagates, and actually talk about the good old SEM, or ESDOM or whatever the current woke term is.

So let's say you do have a random set of thermometers reading a value on one day across the globe. The obvious point is you do know the variance – it's the variance in the data. And more usefully you have the standard deviation of the data – and you should know by now how to use that standard deviation. With a sample size of 10000, the SEM is SD / 100, so unless your standard deviation around the world is 5,000°C, I still say an uncertainty of 50°C is too big.

But ignoring the maths for a moment – if you say for example that the actual average is 14°C, and you occasionally see temperatures on the earth that are as much as 50°C warmer or colder than that, then you have to consider how likely it is that most of your 10000 random samples all happened to come from that one spot.
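
For concreteness, a minimal sketch of that SEM arithmetic with made-up numbers (the 14°C centre and the 10°C spread are illustrative, not values from the paper):

import random, statistics

random.seed(1)
# Toy sample: 10000 readings scattered around 14 C with a spread of about 10 C.
temps = [random.gauss(14, 10) for _ in range(10000)]
sd = statistics.stdev(temps)      # close to 10
sem = sd / len(temps) ** 0.5      # SD / sqrt(10000) = SD / 100, close to 0.1
print(round(sd, 2), round(sem, 3))

With those toy numbers the SEM is roughly 0.1°C; to get an SEM of 50°C from 10000 readings the standard deviation would have to be about 5,000°C.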

Reply to  Bellman
July 6, 2023 4:16 pm

“I doubt there are many places that vary by 50°C a year.”

Have you *ever* been out of a climate controlled environment?

It isn't unusual here on the central plains of the US to see near 100F (about 38C) temps in the summer and 0F (about -18C) temps in the winter. That's roughly a 56C annual swing!

“But as always it's irrelevant as we are not talking about the uncertainty at one location”

Of course we are! The variance of every component has to be considered and propagated when combining them! Variance grows just like uncertainty does.

You are *still* laboring under the misconception that all uncertainty (variance) is irrelevant because it all somehow cancels. Simply unfreakingbelievable.

And you believe we can’t prove why climate science is a joke?

“The obvious point is you do know the variance – it's the variance in the data.”

The variance is the combination of the variance of each individual piece! Especially when each component is the mid-range value of two daily temperatures!

Do you *ever* stop to think about what you say?

“And more usefully you have the standard deviation of the data”

That standard deviation is based on mid-range values which have variances – which you just blissfully ignore!

“With a sample size of 10000”

You have a sample made up of mid-range values, i.e. two samples from a daily temperature profile that aren’t even average values. That mid-range value changes based on the variance of the daily temperature.

This is why everything you say is a joke. You want to ignore those statistical descriptors that are inconvenient, like variance and uncertainty! Just like climate science!

“if you say for example that the actual average is 14°C”

That average is useless without a variance associated with it. And you think you do statistics?

Reply to  Bellman
July 6, 2023 6:25 pm

Another waft of idiotic hand waving.

Reply to  Bellman
July 6, 2023 6:26 am

You sit there in front of the computer lecturing experienced professionals about subjects for which you have no ability to grasp even the basics.

Do you see the problem?

Reply to  karlomonte
July 6, 2023 2:47 pm

You sit in front of your device and keep telling the world how every climate scientist, or anyone who’s produced a global temperature analysis, is wrong and doesn’t understand anything about the subject. Yet you won’t actually explain your concerns to them. Do you see the problem?

Reply to  Bellman
July 6, 2023 3:05 pm

I participate here and on Twitter. If climate scientists refuse to read sites that have criticisms, that is their prerogative.

Have you communicated your criticism of UAH to Dr. Christy or to the NOAA STAR satellite team, whose data now agrees with UAH? Show us what you told them!

Reply to  Jim Gorman
July 6, 2023 4:20 pm

I've not criticized UAH – maybe you are confusing me with karlo. I use the data as is. I also comment from time to time on Spencer's blog.

Reply to  Bellman
July 6, 2023 4:00 pm

And it can be proven! Climate science assumes all uncertainty cancels, both systematic and random. Climate science assumes variance doesn't grow when combining random variables with different distributions. Climate science thinks a daily mid-range temperature value is an "average daily temperature". Climate science thinks combining NH temps with SH temps, both of which have different variances, is just fine and no regard has to be paid to the different variances. Climate science thinks calculating a trend line from the stated temperature values gives a "true trend line" with no regard for the uncertainties associated with those stated values.

You do exactly the same!

Reply to  Bellman
July 6, 2023 8:16 pm

Are you a parrot?

Reply to  Bellman
July 5, 2023 10:35 pm

Rounding is not a result of instrument resolution.

Resolution is something an instrument has; rounding is something you do.

Reply to  Pat Frank
July 4, 2023 5:25 pm

bellman believes that there is nothing statistics can’t do. That’s probably true in his alternate universe.

Reply to  Tim Gorman
July 4, 2023 7:24 pm

There are lots of things statistics can’t do. Figure out how many times you will lie about me in the next 15 minutes, for one.

Reply to  Bellman
July 4, 2023 9:11 am

“My claim is that if measurements vary by more than the resolution, then it is possible to reduce the resolution in the average. I also said that this isn't the case if the variation is less than the resolution.”

You are just plain wrong. Resolution has to do with MEASUREMENTS. The average is *NOT* a measurement, it is a statistical descriptor. As a statistical descriptor it should not be given a value that misleads anyone as to the resolution of the actual measurements used to develop the average. Doing so is just plain lying.

“I pointed out that in the real world temperatures do indeed vary by a lot more than the instrument resolution (they would serve no purpose if that wasn’t the case.)”

And you didn’t understand the answer. The uncertainty of a measurement is a combination of *all* uncertainty factors. Just because the stated value variation is more than the measurement uncertainty it doesn’t mean the measurement uncertainty can be ignored. Possolo *told* you this in TN1900 which you refuse to read and understand. He assumed there was no systematic error in any measurement and that the random error cancelled because the same thing was being measured by the same device.

I.E. – NO MEASUREMENT UNCERTAINTY TO CONSIDER.

The very same meme you *always* assume even though you say you don’t!

“To which PF claimed it didn't matter because his topic was instrument resolution and not the variation in temperatures across the globe.”

And the implications of that went flying by right over your head. You never even looked up and tried to understand why.

“That seems to be the problem all along. Pat Frank wants to give the impression he’s talking about the uncertainty in the global average, but in reality all he’s ever talking about is the uncertainty of individual measurements.”

See what I mean about you always assuming all uncertainty is random, Gaussian, and cancels? YOU JUST DID IT AGAIN!

The uncertainty of the individual measurements DETERMINES THE UNCERTAINTY OF THE GLOBAL AVERAGE!

The uncertainty of the global average is *NOT* the SEM which is based solely on the stated values and ignores the uncertainty in those stated values. It’s part and parcel with your ASSUMPTION THAT ALL UNCERTAINTY CANCELS!

You just can’t get away from that meme no matter how hard you try. It’s become a true joke and you don’t even know it.

Reply to  Bellman
July 3, 2023 5:55 am

“that doesn't mean the data was pulled out of thin air. The information comes from the data.”

Nope! That *IS* creating data out of thin air. You can’t glean more information from the data than it provides. That’s the whole concept of significant figures – which I guess you still don’t believe in.

“Instruments can’t but the average of many instruments can.”

Not if you follow significant digit rules.

Taylor, Rule 2.9: “Rules for Stating Answers – The last significant figure in any answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty.”

Taylor, Rule 2.5: “Experimental uncertainties should almost always be rounded to one significant figure”

You have never actually studied Taylor on ANYTHING. You’ve just cherry-picked things with no conceptual understanding of the context in which they exist.

Nearly everything you assert is at odds with Taylor, Bevington, and Possolo even while you say you believe and understand what they say. Your cognitive dissonance is huge.
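
As a minimal numeric illustration of the two Taylor rules quoted above (the value and uncertainty are made up, not taken from Taylor):

import math

def one_sig_fig(u):
    # Taylor Rule 2.5: state an experimental uncertainty to one significant figure.
    return round(u, -math.floor(math.log10(abs(u))))

value, u = 2.34567, 0.0463
u_stated = one_sig_fig(u)                     # 0.05
decimals = -math.floor(math.log10(u_stated))  # decimal position of the uncertainty
value_stated = round(value, decimals)         # Rule 2.9: match that decimal position
print(f"{value_stated} ± {u_stated}")         # 2.35 ± 0.05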

Reply to  Tim Gorman
July 3, 2023 6:18 am

You can’t glean more information from the data than it provides.

Indeed not. But the information the data does provide is much more than you seem to think. There's a lot more information in 1000 measurements than there is in one.

That’s the whole concept of significant figures – which I guess you still don’t believe in.

Not this again. I believe in significant figure guidelines as a convenience. A style guide. I do not think they provide a better way of determining uncertainty than actually working out the uncertainty and stating it. And I think in some places the "rules" are inadequate or just wrong. Especially if you think it should be a rule that an average cannot be stated to more decimal places than the individual measurements.

Tim then goes on to quote Taylor, oblivious to the fact that the quotes demonstrate why the so-called rules about averaging are wrong. He even gives a couple of exercises to demonstrate that the uncertainty of an average can be written to more decimal places than the measurements.

“Experimental uncertainties should almost always be rounded to one significant figure”

I take it you’ve pointed out to Pat Frank why he’s wrong to quote all the uncertainty figures to 3 significant figures.

Reply to  Bellman
July 3, 2023 7:18 am

“Not this again. I believe in significant figure guidelines as a convenience. A style guide. I do not think they provide a better way of determining uncertainty than actually working out the uncertainty and stating it.”

I *gave* you the reason for significant figures. As usual you just blew it off. It is *NOT* a style guide. It is so you do not mislead others into believing you used higher resolution equipment than you actually had. There is a REAL WORLD reason for the use of significant digit rules.

From Libretexts on chemistry: “Significant Digits – Number of digits in a figure that express the precision of a measurement instead of its magnitude.”

When you include more significant digits in a result than justified by the precision of the measurements you made then you *are* misleading people on the precision with which the base measurements were made.

I know that probably makes no sense to you and that you simply don't care – because you don't live in the real world let alone the real world of physical science. Most climate scientists don't either.

Reply to  Bellman
July 3, 2023 8:21 am

I take it you’ve pointed out to Pat Frank why he’s wrong to quote all the uncertainty figures to 3 significant figures.

Go read the paper, fool, he explains why. Just tossing more shite at the wall.

Reply to  Tim Gorman
July 3, 2023 7:14 am

Nope! That *IS* creating data out of thin air.

Bingo! Another +7000!

Reply to  Bellman
July 3, 2023 6:14 am

The resolution of an average can be greater than that of the individual members

Read this from NIST.

2.4.5.1. Resolution (nist.gov)

Note the following statement.

The number of digits displayed does not indicate the resolution of the instrument.

from: Accuracy, Precision, and Resolution; They're not the same! | Phidgets (wordpress.com)

Resolution is easily mistaken for precision, but it’s not always the case that you will have a high precision just because you have a high resolution. Even with many many decimal places in the values you are getting from a sensor, you may still find there is a lot of variability in the data, or in other words that there is low precision despite the high resolution.

Your statement really ignores the basis of resolution: the minimum change in the subject that can cause a change in the reading. You cannot obtain this information from averaging, nor can you eliminate the uncertainty, because the uncertainty remains in the average.

The only time your statement makes sense is in the presence of noise, true noise and not signal variations. Noise in a temperature signal would be variation in the measurement due to external heat sources. I'm not clear where those noise sources would occur as long as the thermometer was sited correctly. Are there incorrectly sited thermometers? Sure there are, but then one must define the frequency of the noise and make multiple measurements frequently enough to recognize the noise and determine its quantity. Only with the new digital thermometers could this be done on a mechanized basis. One example would be where jet exhausts impact a thermometer momentarily. Another would be HVAC exhaust impacting the thermometer. HVAC could be caught with one-minute samples. Jet exhaust would probably require 1-second sampling. Certainly not out of the question. The administrative problem would be the worst part, that is, trying to agree on a worldwide algorithm.
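
A minimal simulation of that distinction, with toy numbers (the 8.3 value, the spreads, and the 1-unit step are illustrative, not from the paper): when readings vary by much more than the instrument's step, the average of the rounded readings homes in on the underlying value; when they vary by much less, it cannot.

import random

def quantize(x, step=1.0):
    # Round a reading to the nearest instrument mark.
    return round(x / step) * step

def mean_of_quantized(true_value, spread, n=10000, step=1.0):
    # Average n rounded readings of values scattered around true_value.
    readings = [quantize(true_value + random.gauss(0, spread), step) for _ in range(n)]
    return sum(readings) / n

random.seed(0)
print(mean_of_quantized(8.3, spread=2.0))   # close to 8.3: variation >> step
print(mean_of_quantized(8.3, spread=0.05))  # stuck at 8.0: variation << step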

Reply to  Bellman
July 3, 2023 7:13 am

An apt analogy for your averaging idiocy is seen often in TV shows: the cops get some grainy low-res CCTV image, pass it off to the IT Guru, who then types away for a bit on his/her keyboard.

Voila! The license plate number is clear as day and the perp is revealed!

You think you can manufacture data that isn’t there — should be expected from the crowd who “adjust” historic data to remove “biases”, though.

Reply to  Bellman
July 2, 2023 8:28 pm

The problem is all you have done these last two years is angrily shout the same slogans.

When all else fails, run to the whitewash brush.

Reply to  karlomonte
July 3, 2023 4:19 am

Wow, you nailed it. Never an actual refutation of anything. Just examples that totally ignore propagation of uncertainty and assume resolution in a measurement can be increased mathematically.

Reply to  Bellman
July 2, 2023 4:12 pm

“Not sure what point that is making. You're guaranteeing the length of each board, not the average.”

Unfreaking believable! He TOLD you what the average is – 8′! Can’t you read simple English?

“If all your measurements are within half an inch of 8′ all will be rounded to exactly 8′.”

How is that any different than temperatures?

“If on the other hand the boards vary in length by a few inches, your rounding will all differ, therefore some cancellation of errors is likely and the average can now be to a resolution better than an inch.”

Unfreaking believable! A few inches? What is that? To change the rounding it would have to be greater than or equal to a 6″ difference!

“the average can now be to a resolution better than an inch.”

Wow! Just wow! How did the resolution of the average change if the boards have a wider standard deviation? You *still* can't measure any board to more than +/- 1″. That includes boards that are 6″ longer and 6″ shorter.

Reply to  Tim Gorman
July 2, 2023 4:24 pm

All just hand-waving trying to support untenable and illogical ideas.

And of course, relying on “cancellation of errors” that he denies he relies on.

Reply to  Tim Gorman
July 2, 2023 5:31 pm

Unfreaking believable! He TOLD you what the average is – 8′! Can’t you read simple English?

Calm down. No, he said he had 1000 8′ boards. If that's correct the average will be 8′, but as you keep saying, it's unknown. If it's not unknown, why point to the uncertainty in the measurement, and why ask how many were bigger or smaller than 8′?

Honestly these examples are always so badly thought through. I’m sure they mean something to you when you come up with them, but you never describe them in a meaningful way or explain their relevance. And then get apoplectic when I try to clarify the specifications.

“'If all your measurements are within half an inch of 8′ all will be rounded to exactly 8′.'
How is that any different than temperatures?”

Because temperatures are not all between 7.5 and 8.5°C.

“Unfreaking believable! A few inches? What is that? To change the rounding it would have to be greater than or equal to a 6″ difference!”

Perhaps if you didn't use these antiquated units there wouldn't be this confusion. I thought it was said the rounding was to the nearest inch, not foot. I didn't realize American tape measures were so useless.

Let me check.

I’ve got 1000 8′ boards. I measured each one of them to the nearest 1″. I can sell them to you and I’ll guarantee they are all 8′ ±0.005″. IOW 8.0004 feet long.

I’m sure in the dark ages the ” was meant to mean inches not feet. Maybe it’s different now.

How did the resolution of the average change if the boards have a wider standard deviation?

If you haven’t figured this out by now, I doubt you ever will.

Reply to  Bellman
July 2, 2023 8:29 pm

You’re an idiot, this is the only possible logical conclusion.

Reply to  Bellman
July 3, 2023 3:49 am

“If that's correct the average will be 8′, but as you keep saying it's unknown.”

You can’t even get this straight! There is no reason you can’t know the average. But you can’t know the average to more decimal places than the measuring device allows! Thus 8′! Nor is the uncertainty unknown – it is +/- 1″!

“Honestly these examples are always so badly thought through.”

There is nothing wrong with the examples. The root of the problem is that you simply refuse to learn about measurement uncertainty so you can’t figure the examples out properly. If you would take Taylor and work out ALL of the examples and compare your answers to the ones in the back of the book, you might gain some actual knowledge. Stop just cherry-picking things that you think might support your religious beliefs.

Taylor says in his intro to Part I: “Chapter 3 describes error propagation, whereby uncertainties in the original measurements propagate through calculations to cause uncertainties in the calculated final answers. Chapters 4 and 5 introduce statistical methods with which the so-called random uncertainties can be calculated” (bolding mine, tpg)

Nowhere in Chapter 3 does he show averaging reducing uncertainty, NOWHERE. Nor does he show in Chapter 3 how systematic bias can cancel. That never seems to faze you in any way, shape, or form.

“Because temperatures are not all between 7.5 and 8.5°C.”

Unfreakingbelievable. So what? You think that is a refutation of anything? To get to the GAT you *still* average all the temperatures together – and the uncertainty of the measurements propagates to that average, just like with the boards!

“I thought it was said the rounding was to the nearest inch, not foot.”

And here we go with the nit-picking. You aren’t even bright enough to understand when you are seeing a typo!

“If you haven’t figured this out by now, I doubt you ever will.”

That’s not an answer. But it is ALWAYS your fallback so you don’t actually have to explain how your assertions always fail. Pathetic.

Reply to  Tim Gorman
July 3, 2023 4:54 pm

There is no reason you can’t know the average. But you can’t know the average to more decimal places than the measuring device allows! Thus 8′

Can you not see the contradiction here? You are claiming you know the average, then saying you only know it to the nearest inch. As you keep telling me, uncertain means you do not know.

Nor is the uncertainty unknown – it is +/- 1″!

How do you know that? The initial premise was that you were measuring to the nearest inch, no other mention of uncertainty. If the rounding to the nearest inch is the only uncertainty then it's ±0.5″.

If there are any other sources of uncertainty, you just don't know what they are at this point.

And this argument doesn’t seem to be going anywhere as we are just back to the usual pathetic personal insults.

Reply to  Bellman
July 3, 2023 5:21 pm

The uncertainty interval, be it Type A or Type B, is, at its base, an educated guess. It should be wide enough that subsequent observations fall into the interval at least seven out of ten times.

There is no reason why you can’t estimate the uncertainty of a measuring device. Go look at the GUM for Type B uncertainty.

You simply can’t get ANYTHING right about uncertainty and how it is used, can you?

Reply to  Tim Gorman
July 3, 2023 5:59 pm

There is no reason why you can’t estimate the uncertainty of a measuring device.

This is a pretend example using fictitious boards to engage in some thought experiment, which nobody seems to know the purpose of.

If there’s an uncertainty in the measurement device you can tell me as it can be anything you want it to be. All I want to know is what is the point of this nonsense and why do you keep insisting I give meaningful answers about your fantasy world.

Just specify the parameters of this experiment, explain what you want me to solve and then say what the point is.

And please stop finishing every one of your increasingly deranged comments by saying I can’t get anything right.

Reply to  Bellman
July 3, 2023 6:21 pm

Tim’s right, you can’t.

Reply to  Bellman
July 4, 2023 4:56 am

“This is a pretend example using fictitious boards to engage in some thought experiment, which nobody seems to know the purpose of.”

Now that’s a real fine refutation there!

“If there’s an uncertainty in the measurement device you can tell me as it can be anything you want it to be”

I didn’t say that at all. I said you can estimate what it is! You simply can’t read even simple English. It’s no wonder you can’t make heads or tails out of Pat’s study!

“All I want to know is what is the point of this nonsense and why do you keep insisting I give meaningful answers about your fantasy world.”

I don’t live in a fantasy world, you do! I live in the real world where the average value of a pile of random boards is of little use to me since it can’t tell me an accurate total length of the three boards I pick from the pile. I need to know the uncertainty of each so I can ensure that they will span the distance I need them to. The “average uncertainty” only tells me that someone has tried to spread the total uncertainty equally over all the boards – a physical impossibility.

“Just specify the parameters of this experiment, explain what you want me to solve and then say what the point is.”

The parameters include not assuming that all error is random, Gaussian, and cancels.

I’ll stop saying you can’t get anything right when you actually say something right concerning measurement uncertainty and then actually incorporate it in your assertions instead of always falling back on your standard meme – and we all know what that is.

Reply to  Tim Gorman
July 4, 2023 5:26 am

Now that’s a real fine refutation there!

There’s nothing to refute. You still haven’t explained what your point is.

Reply to  Tim Gorman
July 4, 2023 6:06 am

Again and again it has been proven that the word “estimate” is not in bellcurveman’s vocabulary.

Reply to  Tim Gorman
July 1, 2023 5:10 pm

If you can believe it, I was pretty much told exactly that by an MIT physicist in the Q&A after giving a talk on temperature measurement uncertainty.

Reply to  Pat Frank
July 2, 2023 5:35 am

Oh, I can believe it. The education of physical science and its principles is sadly lacking today. I often tell the story of my youngest son who, when starting his college study in microbiology, was told by his advisor not to worry about taking any math or statistics classes. If he needed something done with data he could get a math major or grad student to analyze it for him.

I was stunned. That winds up with the blind leading the blind. A microbiologist with no way to judge whether the statistical analysis is believable and a math major who has no idea of the underlying realities of biological data, including the uncertainties associated with it – the old "the stated values are 100% correct" bugaboo of statisticians today. Which we continually see in climate science.

Thankfully I convinced my son to take at least two semesters of statistics and probability. It has served him well in his career.

Reply to  Pat Frank
July 1, 2023 2:34 pm

Does your inability to see the sense of it mean it's not sensible?

No, it’s quite possible I’m misunderstanding something. That’s why I’m trying to get you to explain what you mean.

Why would anyone think the resolution limit of the LiG thermometer should not condition a global mean annual LiG temperature or the annual LiG temperature anomaly?

What do you mean by “condition”? If you mean contribute to, I’d agree. If you mean dominate, I’d disagree.

Reply to  Bellman
July 2, 2023 5:17 am

“What do you mean by 'condition'? If you mean contribute to, I'd agree. If you mean dominate, I'd disagree.”

How do you know what dominates if you don’t know the magnitude of each factor contributing to the uncertainty?

As usual, you are making an unjustified assumption based on your religious beliefs in CAGW.

bdgwx
Reply to  Pat Frank
July 1, 2023 5:51 pm

PF: Why would anyone think the resolution limit of the LiG thermometer should not condition a global mean annual LiG temperature or the annual LiG temperature anomaly?

Assuming condition means contribute to then it does condition temporal (daily, monthly, annual, etc.) means; just not via RMS. It also conditions spatial (regional, global, etc.) means as well; just not via RMS.

Speaking of the spatial domain… I don't see anywhere in the publication where you propagate the uncertainty in the spatial domain (i.e., the grid mesh). That's another issue and rabbit hole on its own.

Reply to  bdgwx
July 1, 2023 11:01 pm

Condition there means ‘sets the knowledge bounds.’

The paper is restricted to instrumental analysis.

Reply to  Bellman
July 1, 2023 7:16 am

As bdgwx has pointed out, the obvious problem with the insistence on RMS being correct for resolution uncertainties is that just before you do this, you use a calculation for the average of max and min that is reducing the uncertainty. You have to explain why that's correct for averaging two readings each with resolution uncertainty, but not for averaging thousands of instruments on a single day, or the global average over a month, or a year, or 30 years.

Reply to  Bellman
July 1, 2023 9:28 am

Let's get real here: without tiny uncertainty numbers for your GAT lines, you and the rest of the global warming hoaxers are bankrupt.

So you and bgwxyz whine about minutia like “you are using RMS!”.

Reply to  karlomonte
July 1, 2023 10:25 am

Minutia? It’s the entire reason for the claim the global uncertainties are so large.

I’ve no idea why you think I have to believe all uncertainties are “tiny”. I’d like it if we could be more certain about the actual temperature change but I doubt that’s possible short of a time machine. That’s why I think it’s good there’s a range of different estimates, using different methods. I’ve always said I think the real uncertainty is probably greater than most of the estimates, just by comparing different sets.

What I don't agree with is coming up with self serving, implausible uncertainty figures based on dubious statistics. I'm not even sure why you think this is good for your cause. If you want to persuade people there has been no global warming, claiming that we might have had three times as much as previously thought doesn't seem like a good strategy.

Reply to  Bellman
July 1, 2023 12:13 pm

What I don’t agree with is coming up with self serving, implausible uncertainty figures based on dubious statistics.

Another irony overload!

Reply to  karlomonte
July 1, 2023 1:08 pm

They simply can't understand that when they use the formula

(x1 + x2)/2 they are finding an average. When they find the uncertainty for that formula they are finding the average uncertainty. I.e. the total uncertainty divided by the number of members — q.e.d. the average uncertainty. The average uncertainty is *not* the uncertainty of the average!

Reply to  Tim Gorman
July 1, 2023 2:43 pm

(x1 + x2)/2 they are finding an average.

Says someone who insists (max + min) / 2 is not an average.

the total uncertainty divided by the number of members

How many more times? Total uncertainty is not the uncertainty of the total. The total uncertainty is usually (assuming random errors) found by adding in quadrature – that is, it is less than the sum of the uncertainties. The uncertainty of the average is only the same as the average uncertainty if you just add all the uncertainties and divide by N. There is only one person in the argument who is doing that.
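
For reference, the textbook propagation expression for the two-reading mean q = (x1 + x2)/2, written with a generic error correlation r between the readings (a generic formula, not a quote from the paper), is:

u(q) = \frac{1}{2}\sqrt{u_1^2 + u_2^2 + 2 r u_1 u_2}

With u_1 = u_2 = u this gives u/√2 when r = 0 and u when r = 1, which is exactly the gap between the uncertainty of the average (independent errors) and the average uncertainty (fully correlated errors).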

Reply to  Bellman
July 1, 2023 3:19 pm

Says someone who insists (max + min) / 2 is not an average.

Just quit now while you're behind, it's easier.

Reply to  Bellman
July 2, 2023 10:43 am

“add all the uncertainties in quadrature and divide by N”

Eqns. 5 and 6.

Reply to  Pat Frank
July 2, 2023 12:06 pm

5 and 6 are not doing that. They would be doing that if you took the divisor from outside the square root sign. But what you are doing is adding in quadrature and dividing by root N.

Reply to  Bellman
July 2, 2023 12:41 pm

“Yes, it’s true, I really am this clueless” — the hapless Bellman.

Reply to  Bellman
July 2, 2023 1:13 pm

No. Eqns. 5 & 6 divide by N inside the root.

Reply to  Pat Frank
July 2, 2023 2:29 pm

Argh. My mistake. Yes, I meant you need to take the divisor out, from inside the root.

You are saying

√[(N × σ²) / N] = σ

Which is not adding in quadrature and dividing by N.

Adding in quadrature and dividing by N would be

√(N × σ²) / N = σ / √N

Reply to  Bellman
July 2, 2023 6:06 pm

No, Bellman.

Using your method of removal, it becomes (√N/√N)×√σ² = ±σ.

Exactly eqns. 5 & 6.

Reply to  Pat Frank
July 2, 2023 6:13 pm

Splat!

Reply to  Jim Gorman
July 2, 2023 6:34 pm

Well said.

Reply to  Bellman
July 2, 2023 8:30 pm

Face plants—you have a special talent here.

Reply to  Pat Frank
July 2, 2023 6:17 pm

Using your method of removal, it becomes (√N/√N)×√σ² = ±σ

At least one of us is very bad at maths.

(√N/√N)×√σ²

That is not the same as

√(N × σ²) / N

You see, in “my” method the divisor N is outside the square root symbol, so doesn’t get square rooted. Whilst the multiplier N is inside the square root symbol so will be √N.

Hence

√(N × σ²) / N = (√N × σ) / N = σ / √N

Reply to  Bellman
July 2, 2023 9:32 pm

“in “my” method the divisor N is outside the square root symbol, so doesn’t get square rooted.”

In that case, the rules of algebra require you to have written it as [√(N × σ²)]/N rather than as √(N×σ²)/N.

Hoisted by your own petard, Bellman.

√(N × σ²)/N is, in fact, [adding] all the uncertainties in quadrature and [dividing] by N, after which the square root is taken. Eqns. 5 & 6.

Reply to  Pat Frank
July 3, 2023 3:55 am

What rules of algebra are you talking about?

In BODMAS, O comes before D.

Even if I should have put in the brackets, I'd have thought the meaning was clear enough, given I'd told you in so many words that I was taking the divisor out of the root.

Maybe I need to go back to using LaTeX with all the problems that entails.

Reply to  Bellman
July 3, 2023 5:49 am

Your notation was wrong.

Reply to  Pat Frank
July 3, 2023 6:21 am

Could you point me to a reference explaining why it was wrong. I’m not that happy with writing inline equations, so maybe it could be clearer.

But just asserting it was wrong, seems to look like a distraction from the actual point. Which is that your equation is not doing what you claimed – adding in quadrature and dividing by N.

Reply to  Bellman
July 3, 2023 11:23 am

Why is a reference necessary? Your abbreviated root-sign implicitly covered the whole equation.

Nothing indicated division by N was excluded from the root.

In the textual formalism, to indicate division by N only after taking the root, the independent parts should have been separated with parentheses or brackets.

Reply to  Pat Frank
July 3, 2023 1:43 pm

Why is a reference necessary?

So you couldn’t find one either.

Your abbreviated root-sign implicitly covered the whole equation.

I don’t think it does. But if you found it confusing I apologize. I was trying to type out the comment on a tiny phone whilst waiting for a train. At least I didn’t make the same mistake in a peer-reviewed publication.

Still well done in keeping up this distraction. Regardless of whether I made a mistake in the equation, do you accept you were wrong to claim equations 5 and 6 were adding in quadrature and dividing by N?

Reply to  Bellman
July 3, 2023 5:48 pm

you couldn’t find one

The answer is obvious. There was no need to find a reference.

“But if you found it confusing I apologize.”

You wrote the equation incorrectly.

At least I didn’t make the same mistake in a peer-reviewed publication.

Misrepresenting a small triumph. Very shallow, Bellman. Have you ever achieved a peer-reviewed publication?

“do you accept you were wrong to claim equations 5 and 6 were adding in quadrature and dividing by N?”

Why would anyone accept that? Eqns. 5 & 6 do in fact add uncertainties in quadrature and divide by N. All within the root. Visual inspection alone is enough to prove that.

Maybe you should restrict comment on the paper to times when you have it before you for reference.

Reply to  Pat Frank
July 3, 2023 6:40 pm

“you couldn't find one
The answer is obvious. There was no need to find a reference.”

You missed the word “either” at the end of my quote.

Eqns. 5 & 6 do in fact add uncertainties in quadrature and divide by N.

Then you wrote the equations wrong.

All within the root.

You can't divide by N “within the root”; the root is part of the adding in quadrature.

Maybe you should restrict comment on the paper to times when you have it before you for reference.

I have it here. I’ll post a screen shot of equation 5.

[screenshot of equation 5 from the paper]

This is not the same as adding 0.195 in quadrature, and dividing by N. It’s equivalent to adding in quadrature and dividing by √N.

And given how much grief I've been getting over the perceived missing bracket in a comment, I feel I should alert the author to the fact that the brackets in his equation are all over the place.

Reply to  Bellman
July 3, 2023 10:46 pm

“This is not the same as adding 0.195 in quadrature, and dividing by N. It's equivalent to adding in quadrature and dividing by √N.”

Wrong.

What’s the equation for root-mean-square, Bellman?

Reply to  Pat Frank
July 4, 2023 4:30 am

What’s the equation for root-mean-square, Bellman?

The clue's in the name: it's the root of the mean of the squares. Square each value, add them all up and divide by N to get the mean of the squares, then take the square root.

\sqrt{\frac{x_1^2 + x_2^2 + \dots + x_n^2}{N}}

If the mean of all the x's is zero then this is the same as the standard deviation.

Adding in quadrature, also known as RSS or Pythagorean addition: square all your values, add them, then take the square root of the sum.

\sqrt{x_1^2 + x_2^2 + \dots + x_n^2}

Adding in quadrature and dividing by N. Does what it says on the tin.

\frac{\sqrt{x_1^2 + x_2^2 + \dots + x_n^2}}{N}

If you think I'm wrong please explain why, rather than just saying "wrong".
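
A quick numeric check of the three expressions above, using a constant per-measurement uncertainty (the 0.195 value mentioned elsewhere in the thread) so the difference is visible:

import math

def rms(xs):
    # Root-mean-square: square root of the mean of the squares.
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def rss(xs):
    # Root-sum-square, i.e. adding in quadrature.
    return math.sqrt(sum(x * x for x in xs))

sigma, n = 0.195, 100
xs = [sigma] * n       # n identical uncertainties
print(rms(xs))         # ~0.195: the RMS of a constant is the constant
print(rss(xs) / n)     # ~0.0195: quadrature sum divided by N equals sigma / sqrt(N)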

Reply to  Bellman
July 4, 2023 6:08 am

bellcurveman doubles down, again!

Reply to  Bellman
July 4, 2023 8:43 am

Your first equation — for RMS — correctly defines eqns. 5 & 6. The very equations you claimed are wrong.

Notice that when your xᵢ are constant, the sum within the root is merely N×(x²). And that’s Q.E.D.

Reply to  Pat Frank
July 4, 2023 9:44 am

Your first equation — for RMS — correctly defines eqns. 5 & 6. The very equations you claimed are wrong.

I did not say they were wrong. They are the correct equations for RMS. What I said, and it’s there in the comment you linked to, was that they were not “adding in quadrature and dividing by N”, which is what you claimed they were.

Notice that when your xᵢ are constant, the sum within the root is merely N×(x²). And that’s Q.E.D.

Yes, that's why I was saying using the equation to calculate the average uncertainty was redundant, given that you are already assuming your uncertainty is the average uncertainty.

Reply to  Bellman
July 4, 2023 12:51 pm

they were not “adding in quadrature and dividing by N”

That’s prima facie exactly what they are.

… you are already assuming…

That’s a demonstration, not an assumption.

Always assertive, always wrong, Bellman.

Reply to  Pat Frank
July 4, 2023 1:04 pm

That’s prima facie exactly what they are.

And what am I doing if you take a second look?

That’s a demonstration, not an assumption.

The point stands. You know what the average uncertainty is, but then want to average it again to find out what the average uncertainty is.

Reply to  Pat Frank
July 4, 2023 5:32 pm

I’m surprised he hasn’t claimed you are using “voodoo math”!

Reply to  Tim Gorman
July 4, 2023 6:32 pm

What? I’m just pointing out where he seems to have misunderstood what I was saying, and still seems to have a problem with where the N is in his equation.

Reply to  Bellman
July 4, 2023 3:41 am

Oh my gosh — bellcurveman grabs the ball, shoots deep and throws … a brick. And then face-plants as the ball is brought back the other direction.

Obsessing over a typo — great job by the trendologist! The GAT hoax is saved! Yay!

/me hands out party kazoos

The real question you should be asking: why would an average uncertainty be needed? (It's a really simple answer.)

And you have to extend a huge e-hug to bg-whatever, the NIST uncertainty machine can’t tell him an average U, poor guy.

/me hands a hankie

Reply to  karlomonte
July 4, 2023 4:32 am

Not obsessed, just pointing out the irony.

Reply to  Bellman
July 4, 2023 6:22 am

The real danger is that someone unfamiliar with metrology and uncertainty might read these threads and think you know WTF you yap about.

You don’t.

Reply to  Bellman
July 4, 2023 4:31 am

Wow! You simply can’t figure out finding the mean uncertainty, can you?

You are blinded by your cult dogma!

Reply to  Tim Gorman
July 4, 2023 5:48 am

Finding the mean uncertainty is easy when you are saying all your uncertainties are the same. Just multiply by N and divide by N.

You are blinded by your cult dogma!

Scientologists are always telling me that.

Reply to  Bellman
July 4, 2023 8:45 am

Just multiply by N and divide by N.

Right. And show your work for publication.

Reply to  Pat Frank
July 4, 2023 4:30 am

He can’t help himself. He is caught up in a cult and only knows the cult dogma.

Reply to  Tim Gorman
July 4, 2023 6:23 am

His agenda trumps reality, just like Stokes.

Reply to  Tim Gorman
July 1, 2023 3:01 pm

Another great line:

“I’ve no idea why you think I have to believe all uncertainties are “tiny””

Is he really this dense or is it an act? These guys hang on every little squiggle in the UAH graph (and whine (a lot) when CMoB calculates a pause). And bellman is the dude who went apoplectic when I replotted the graph with some estimated uncertainty limits, which looked very much like Pat's graph above.

Reply to  karlomonte
July 1, 2023 3:21 pm

These guys hang on every little squiggle in the UAH graph

Only because it's the only one allowed to be mentioned here. It's the one accurate graph whilst all others have gigantic uncertainties.

In case it hasn’t registered with you yet, the main reason I look at every twist and turn is to mock Monckton. Seeing how much this pause has extended or contracted by, when we know that the uncertainty around the trend line is humongous. Laughing at the people here who think Monckton is a genius for being able to put some numbers in a spreadsheet and find a mystical start point for a meaningless trend. Made all the sweeter by seeing the same people insist that the average global temperature doesn’t even exist.

And bellman is the dude who went apoplectic when I replotted the graph with some estimated uncertainty limits

I think you have a different definition of apoplectic. I merely suggested that if you thought there was so much uncertainty you should point it out to Dr Spencer, and Monckton.

which looked very much like Pat’s graph above.

And you still don't see the irony of this. Hint:

The pathetic Bellman, who lacks the courage to publish in its own name, is perhaps unfamiliar with the difference between the surface and space, where the UAH satellite measurements are taken. The microwave sounding units on the satellites do not rely on glass-bulb mercury thermometers but on platinum-resistance thermometers. We use the satellite record, not the defective terrestrial record, for the Pause analyses.

Reply to  Bellman
July 1, 2023 9:44 pm

is to mock Monckton. 

When are you going to start? All you do is face-plant when you whine about his articles.

Reply to  Bellman
July 2, 2023 5:27 am

“In case it hasn't registered with you yet, the main reason I look at every twist and turn is to mock Monckton.”

So, in other words, you really don’t care what Monckton is doing, you just want to mock him. I think most of us have already figured that out.

Reply to  Tim Gorman
July 2, 2023 6:25 am

Oh yeah.

Reply to  Tim Gorman
July 2, 2023 8:27 am

“you really don't care what Monckton is doing”

Not really, no. I doubt many take him seriously now. There was a time in the past when he was being offered up as some sort of expert, appearing as a witness in hearings and such like, but now his claims are just an amusing irrelevance. The only people who take him seriously are the true believers.

“you just want to mock him.”

I just want to mock his meaningless arguments. I try not to mock him as a person. I'm sure he's a very nice man if you get to know him. But it's a bit difficult not to take all his insults and libels directed at me a little personally.

Reply to  Bellman
July 2, 2023 9:31 am

Poor baby bellboy, doesn’t get the respect he thinks he deserves.

Geoff Sherrington
Reply to  Bellman
July 4, 2023 12:50 am

Bellman,
If, like I have, you had the pleasure of time face-to-face with Viscount Monckton, you would quickly realise that you are dealing with an intellect that is unusually impressive. As a trivial example, when we were discussing some Chemistry, he recited the Elements song by Tom Lehrer.
https://www.youtube.com/watch?v=AcS3NOQnsQM

Not many people can do that, off the cuff. I can only recite the periodic table up to the rare earths after a lifetime of exposure to the elements. That's 56 out of 114, so only the first half.
People with excellent recall are commonly found among the top scientists. I have just finished the book “Surely You’re Joking, Mr Feynman”. His ability in math and memory started in childhood. He was a Prof by age 23. I was left with the feeling that his eminence in math was because he remembered more numbers than most people did, so when presented with a challenge he had a huge mental database of recall on which to build. Few of us carry the cube root of 48 or whatever in our minds.
In conversation, Viscount Monckton can go far deeper and wider than his posts on the latest T pause. You need to find a better example if derision is your aim, though I fail to grasp why you are doing the tall poppy thingo.
Geoff S

Reply to  Geoff Sherrington
July 4, 2023 4:49 am

Impressive writing Geoff S!

Reply to  Bellman
July 1, 2023 10:41 am

The reason is obvious to anyone who has read the paper and understands the argument, Bellman. The LiG resolution uncertainty is constant for every T_mean.

You haven’t read the paper.

Like bdgwx, you focus in on something unfamiliar, impose your own mistaken meaning on it, and then criticize from your lack of understanding.

Reply to  Pat Frank
July 1, 2023 12:08 pm

In all my years working in the software industry, rtfm was my least favourite excuse. It’s usually just blaming the customer for your own bad design.

When I'm reading a paper and there's something I don't understand or that just seems wrong, and the author is around, it seems sensible to ask them for clarification. At worst the author can explain my mistake and point to the part of the paper that resolves the issue. I'll feel embarrassed but will thank them for their help.

If the author keeps deflecting, insists I have to read the entire paper for enlightenment, and says I shouldn't be allowed to comment if I don't understand, then I begin to have my suspicions that the real explanation is that they don't know the answer.

Reply to  Bellman
July 1, 2023 2:10 pm

‘Read the paper’ is not an excuse when it’s clear you’ve not read the paper. Start at the beginning.

I’ve never deflected. I’m just annoyed at having to repeat the same explanation yet again. The lower limit of LiG resolution is a constant uncertainty conditioning every measurement.

Don’t ask again.

Reply to  Pat Frank
July 1, 2023 3:01 pm

The lower limit of LiG resolution is a constant uncertainty conditioning every measurement.

But uncertainty is not error. Having a constant uncertainty just means the interval remains the same, not that you will get the same error each measurement.

Don’t ask again.

Fine I won’t. I’ll just draw my own conclusions.

Reply to  Bellman
July 1, 2023 10:55 pm

not that you will get the same error each measurement.

You get the same lower detection limit of instrumental uncertainty with each measurement. Uncertainty is not error.

Reply to  Pat Frank
July 2, 2023 4:03 am

You are trying to convert a committed cultist. He will never understand uncertainty because he doesn’t want to understand uncertainty.

I wish you good luck in enlightening him but am pessimistic about the chances of it happening.

Reply to  Tim Gorman
July 2, 2023 10:53 am

Yeah, you’re right, Tim.

Bellman and bdgwx are focused on the same sort of equations here as they focused on from my 2010 paper.

They do so while completely ignoring the analytical context. The insistent ignorance seems to indicate some sort of numerological obsession.

bdgwx
Reply to  Pat Frank
July 2, 2023 11:47 am

PF: Bellman and bdgwx are focused on the same sort of equations here as they focused on from my 2010 paper.

They are similar. In that case you told me Bevington 4.22 was the justification you used for the equations in 2010. And as Bellman and I pointed out 4.22 is nothing more than an intermediate step whose result is used in 4.23 to calculate the variance of the mean which can be square rooted for the uncertainty of the mean as described in example 4.2.

Reply to  bdgwx
July 2, 2023 12:45 pm

Where is the NIST uncertainty machine in this post?

Reply to  bdgwx
July 2, 2023 1:25 pm

Bevington 4.22 is nothing more than the correct equation for the weighted average variance.

B 4.23 is its transformation for random error. Systematic error is not random. Use of B. 4.23 is wrong when the error is not random.

You’re wrong, bdgwx.

You and Bellman are wrong at every turn. You find different ways to be invariably wrong.

Insistent ignorance is pathological.

Reply to  Pat Frank
July 2, 2023 5:00 pm

It’s because of their cultist belief that all uncertainty is random, Gaussian, and cancels. There *is* no such thing as systematic bias in their world view. There’s no room for it!

Reply to  Tim Gorman
July 2, 2023 5:19 pm

Just keep lying. You know full well that I’ve discussed systematic errors plenty of times. I certainly think there are both systematic and correlated uncertainties in the instrumental data.

But that doesn’t justify assuming that there are zero random errors.

Reply to  Bellman
July 3, 2023 3:30 am

You may talk about systematic bias but you never actually post anything that shows you understand the consequences of it. You *always*, every single time, fall back to the meme that all uncertainty is random, Gaussian, and cancels. If that wasn't an inbuilt worldview that you can't break out of, it might dawn on you that you can't increase resolution or decrease uncertainty by averaging!

YOU are the only one that assumes there are zero random errors, that they all cancel to zero in every case. You do it EVERY SINGLE TIME you make a post!

Reply to  Tim Gorman
July 3, 2023 5:22 am

“You may talk about systematic bias but you never actually post anything that shows you understand the consequences of it.”

We probably both have failing memories, and you often seem to blank out anything I say that upsets your assumptions.

Let me try again.

I say that if a measurement uncertainty (U) is caused entirely by random errors, the measurement uncertainty of an average will be U / √N.

I say that if the measurement uncertainty is caused entirely by systematic errors the measurement uncertainty of an average will be U.

More correctly, if the correlation between measurement errors is 0, the uncertainty of the average will be U / √N, and if it’s 1, the uncertainty will be U.

And in between you can use the formula in the GUM (a numerical sketch of these three cases follows at the end of this comment).

I also say this is mostly irrelevant to the actual uncertainty of an average, that is if you are not interested in the actual average of a sample, but what it says about the population average.

I also say that there’s a logical fallacy in applying the concept of a systematic error to an anomaly. Any truly systematic errors will cancel when you subtract one value from another.

I also say there’s a problem using the concept of systematic errors to justify large uncertainties in the global average over a period of time, and then claiming that this uncertainty will affect the uncertainty of the trend.

I also say, that all of this is a distraction from the real problem, which would be any systematic error that changes over time, which will affect the trend.
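
A minimal sketch of the U/√N, U, and in-between cases listed above, assuming equal uncertainties u and a single pairwise error correlation r (the function name and numbers are illustrative):

import math

def u_mean(u, n, r):
    # Uncertainty of the mean of n measurements, each with standard uncertainty u
    # and a common pairwise error correlation r (0 = purely random, 1 = fully systematic).
    return u * math.sqrt((1 + (n - 1) * r) / n)

u, n = 0.5, 10000
print(u_mean(u, n, 0.0))   # ~0.005: U / sqrt(N), uncorrelated errors
print(u_mean(u, n, 1.0))   # 0.5: U, fully correlated errors
print(u_mean(u, n, 0.1))   # ~0.16: somewhere in between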

Reply to  Bellman
July 3, 2023 5:27 am

YOU are the only one that assumes there are zero random errors, that they all cancel to zero in every case. You do it EVERY SINGLE TIME you make a post!

I’m sure this is what you want to believe I think. But could you for once point to any comment I have ever made where I have said anything cancels to zero?

How can you look at a statement saying the uncertainty is U / √N and think that means it will always be zero?

Reply to  Bellman
July 3, 2023 6:51 am

“How can you look at a statement saying the uncertainty is U / √N and think that means it will always be zero?”

U / √N is the SEM. Theoretically the SEM *can* approach zero. But as Bevington points out, you cannot actually approach this because of statistical fluctuations that can't be identified in most measurement protocols.

Again, for the umpteenth time, the SEM is *NOT* the accuracy of the population mean. It is the accuracy with which you have calculated the population mean.

They are *NOT* the same thing.

Reply to  Tim Gorman
July 3, 2023 3:50 pm

U / √N is the SEM.

Only for the measurement uncertainty.

Theoretically the SEM *can*approach zero.

As I say, very much only theoretically. That would assume no systematic errors, and it would still only approach zero. It’s never actually going to be zero unless you take an infinite number of measurements. For any finite number of measurements the SEM is strictly greater than zero, unless U is 0. So I’ll ask again, why do you think I believe that all random errors cancel to zero?

because of statistical fluctuations

He says nonstatistical fluctuations.

Again, for the umpteenth time, the SEM is *NOT* the accuracy of the population mean.

And for the same number of times, I agree.

It is the accuracy with which you have calculated the population mean.

And again, you do not calculate the population mean. You estimate it from the sample mean.

Reply to  Bellman
July 3, 2023 5:06 pm

Yes, nonstatistical fluctuations. Things statistics can't identify and can't account for. So no, an infinite number of observations won't get you to zero!

The SEM * sqrt(N) GIVES YOU THE POPULATION MEAN. You can’t even get this one right!

Reply to  Tim Gorman
July 3, 2023 7:15 pm

The SEM * sqrt(N) GIVES YOU THE POPULATION MEAN. You can’t even get this one right!”

If that’s right I’d prefer to be wrong.

Reply to  Bellman
July 3, 2023 6:21 am

“I say that if a measurement uncertainty (U) is caused entirely by random errors, the measurement uncertainty of an average will be U / √N.”

This is a perfect example of you ALWAYS ignoring systematic bias in measurements. You don’t even realize that you do it!

“I say that if the measurement uncertainty is caused entirely by systematic errors the measurement uncertainty of an average will be U.”

Therein lies your total and complete lack of understanding of the physical world and of uncertainty! No measurement can consist of only systematic bias or of random error. NONE.

“More correctly, if the correlation between measurement errors is 0, the uncertainty of the average will be U / √N, and if it's 1, the uncertainty will be U.”

You still can’t understand the difference between the average uncertainty and uncertainty of the average. U / √N is the SEM! It is how accurately you have calculated the population average. It is *NOT* the uncertainty of the population average!

Why is that so hard to understand?

I know you won’t do it but you *really* should go here: https://statisticsbyjim.com/hypothesis-testing/standard-error-mean/

Read it over and over till you understand it. “For the standard error of the mean, the value indicates how far sample means are likely to fall from the population mean using the original measurement units. Again, larger values correspond to wider distributions.”

“And in between you can use the formula in the GUM.”

The formula in the GUM is for measurements with only RANDOM ERROR! You've been given the quotes from Taylor and Bevington about this often enough that it should have sunk in.

“I also say that there’s a logical fallacy in applying the concept of a systematic error to an anomaly. Any truly systematic errors will cancel when you subtract one value from another.”

That is total and utter crap. If q = u – v then the uncertainty of q is the sum of the uncertainties of u and v. It’s not the difference of the uncertainties in u and v but the SUM!

Taylor addresses this in Section 2.5 and Rule 2.18! Once again, it’s obvious that you have NEVER studied Taylor at all or worked out any examples in his book. If you had you would find that your answers would never match the answers he gives!

Bevington never addresses this because his entire book is based on all uncertainty being from random errors with no systematic bias. The GUM is exactly the same!

“I also say there's a problem using the concept of systematic errors to justify large uncertainties in the global average over a period of time, and then claiming that this uncertainty will affect the uncertainty of the trend.”

That’s because you ALWAYS use only the stated values of the measurements to calculate your trend line. It’s your meme of “all uncertainty is random, Gaussian, and cancels”. I’ve given you graphs at least twice showing that trend lines inside the uncertainty intervals can be up, down, sideways, or even a zig-zag. YOU DON’T KNOW WHICH AND YOU CAN NEVER KNOW WHICH.

But you stubbornly cling to the belief that all stated values are 100% accurate so the trend lines must be what they show.

“I also say, that all of this is a distraction from the real problem, which would be any systematic error that changes over time, which will affect the trend.”

You can't even discern the difference between systematic bias that comes from calibration drift and that which comes from instrumental resolution. One is fixed, the other is not. Instrumental resolution doesn't change over time, calibration drift does. Pat has looked only at the fixed uncertainty from instrumental resolution – *that* part of the uncertainty that *can* be known.

Reply to  Tim Gorman
July 3, 2023 6:37 am

“I say that if a measurement uncertainty (U) is caused entirely by random errors, the measurement uncertainty of an average will be U / √N.”

This is a perfect example of you ALWAYS ignoring systematic bias in measurements.

Read the rest of the comment.

Reply to  Tim Gorman
July 3, 2023 6:43 am

Therein lies your total and complete lack of understanding of the physical world and of uncertainty! No measurement can consist of only systematic bias or of random error. NONE.

What do you think the word “if” means?

I’m presenting the two extremes. In the real world the truth lies somewhere in between.

Really stop being so hysterical. Read what I say, try to understand it and if you still don’t ask me questions.

I don’t have the time or desire to plough through the rest of your insults and misunderstanding at this point.

Reply to  Bellman
July 3, 2023 7:24 am

“What do you think the word “if” means?”

It means you’ve wandered off, once again, into an alternate universe where all uncertainty is random, Gaussian, and cancels.

“I’m presenting the two extremes. In the real world the truth lies somewhere in between.”

No, you are creating an alternate universe. The truth lies nowhere in that alternate universe.

“Read what I say, try to understand it and if you still don’t ask me questions.”

What you say comes from that alternate universe. I don’t need to understand it, most of us don’t live there.

Reply to  Tim Gorman
July 3, 2023 3:53 pm

You still can’t understand the difference between the average uncertainty and uncertainty of the average.

Sure I can. Uncertainty of the average is what you want, average uncertainty is what Pat Frank gives you.

One is a lot bigger than the other. Guess which one we are talking about in this paper.

Reply to  Bellman
July 3, 2023 5:10 pm

That mean instrumental uncertainty goes into each and every observational value. When the population uncertainty is calculated that mean instrumental uncertainty gets added into the total uncertainty for each and every observation. The average simply can’t have any less uncertainty than the mean instrumental uncertainty but it can definitely have *more* uncertainty than the mean instrumental uncertainty!

Reply to  Tim Gorman
July 3, 2023 4:04 pm

I know you won’t do it but you *really* should go here:

What makes you think I haven’t already seen it? What part of it do you think disagrees with anything I’ve said? This is all basic statistics.

For the standard error of the mean, the value indicates how far sample means are likely to fall from the population mean using the original measurement units. Again, larger values correspond to wider distributions.

Exactly what I’m trying to tell you. Honestly, I have no idea what point you think you are making or arguing against. I’m not even sure you do.

The formula in the GUM is for measurements with only RANDOM ERROR!

Keep up. I’m talking about correlation at that point. You can look at a systematic error as a type of correlation, but the GUM points out it’s better to try to eliminate them and then include an uncertainty factor for the uncertainty caused by the correction. It also points out that the terms systematic and random uncertainty are not always a helpful distinction.

Reply to  Bellman
July 3, 2023 5:12 pm

You just don’t get it. The words “sample means” is plural. You have to have more than one sample in order to have a distribution and a standard deviation of the SAMPLE MEANS.

When you say that all you need is one sample you are violating what the standard deviation of the sample means requires!

Reply to  Tim Gorman
July 3, 2023 6:13 pm

The words “sample means” is plural.

Whereas a sample mean is singular. Are we down to remedial English now?

You have to have more than one sample in order to have a distribution and a standard deviation of the SAMPLE MEANS

Firstly, almost nobody apart from you and Jim calls it the SAMPLE MEANS. Secondly, no you don’t. I’ve quoted the very things you insist I read spelling out you only need one sample to estimate the standard error of the mean.

When you say that all you need is one sample you are violating what the standard deviation of the sample means requires!

What you don’t understand is that I’m a super hero, who has god-like powers that you mere mortals lack. This includes the ability to calculate a sampling distribution from a single sample, a gift known only to me, and anyone who’s read a basic statistics text book.

Reply to  Bellman
July 4, 2023 6:06 am

“Whereas a sample mean is singular. Are we down to remedial English now?”

“Sample means” is not equivalent to “sample mean” no matter how much you wish it to be.

“Firstly, almost nobody apart from you and Jim calls it the SAMPLE MEANS”

You have been given at least two references with internet links saying the standard deviation of the sample MEANS.

Here is another: https://www.khanacademy.org/math/ap-statistics/sampling-distribution-ap/sampling-distribution-mean/v/standard-error-of-the-mean

“Take a sample from a population, calculate the mean of that sample, put everything back, and do it over and over. How much do those sample means tend to vary from the “average” sample mean? This is what the standard error of the mean measures. Its longer name is the standard deviation of the sampling distribution of the sample mean.” (bolding mine, tpg)

Here is another: https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/06%3A_Sampling_Distributions/6.02%3A_The_Sampling_Distribution_of_the_Sample_Mean

“What we are seeing in these examples does not depend on the particular population distributions involved. In general, one may start with any distribution and the sampling distribution of the sample mean will increasingly resemble the bell-shaped normal curve as the sample size increases. This is the content of the Central Limit Theorem.” (bolding mine, tpg)

In order for the CLT to apply you HAVE TO HAVE a sampling distribution. One sample alone is *NOT* a distribution no matter what *you* think. If you have only one sample you simply don’t know if it is representative of the population mean or not, no matter how large your sample size is. As Bevington points out, as your sample size grows so does its uncertainty because of nonstatistical fluctuations.

Now, tell us all again how no one else talks about the use of SAMPLE MEANS.

“I’ve quoted the very things you insist I read spelling out you only need one sample to estimate the standard error of the mean.”

You never provide a reference link for *anything*. And one sample can *ONLY* estimate the population mean; it cannot estimate the standard error of the mean because you don’t have a distribution of sample means! If you don’t know the population mean then you can’t calculate the SEM.

What you are actually saying, even if you don’t realize it, is that *YOU* can *GUESS* at what the population mean is from *YOUR* one sample. Without a distribution of sample means you actually have no way to calculate the standard deviation of the sample means so you have no way to evaluate how close you actually are to the population mean. The CLT *requires* you to have multiple samples in order to approach a Gaussian distribution of sample means. The mean of one sample does *NOT* make a Gaussian distribution.

Nor can you “bootstrap” into the SEM by “assuming” you KNOW the population mean from one sample and can use it to calculate the SEM. That’s a form of circular logic fallacy.

“What you don’t understand is that I’m a super hero, who has god-like powers that you mere mortals lack. This includes the ability to calculate a sampling distribution from a single sample, a gift known only to me, and anyone who’s read a basic statistics text book.”

No, it’s known only to *YOU*, not anyone else. Most people recognize circular logic when they see it – but not you.

Reply to  Tim Gorman
July 4, 2023 6:48 am

[I’m not sure which is worse at the moment. These 1000 line essays from Tim to every comment I make, or the 1000 1 line cries for attention karlo makes every day. At least it’s easy to just bin all of karlo’s]

Reply to  Bellman
July 4, 2023 7:13 am

Translation: “I don’t understand, but I’ll continue making a fool of myself regardless.”

Reply to  Tim Gorman
July 4, 2023 7:07 am

All this is about the fact that the Gormans insist that the correct term is “standard deviation of the sample means”, which they think proves you can only figure it out if you have multiple samples. As an aside I pointed out that almost nobody calls it the standard deviation of the sample means (plural); it’s either the standard error of the mean, or in some circles the standard deviation of the mean (in either case singular). It’s not even that relevant, but as usual this is blown up into some major issue with lots of insults about my inability to read English.

So Tim now gives me a couple of references to prove that it’s really called the standard error of the means. Let’s see how good his English really is.

Reference 1

https://www.khanacademy.org/math/ap-statistics/sampling-distribution-ap/sampling-distribution-mean/v/standard-error-of-the-mean

is headed Standard Error of The Mean. And then it calls it the standard deviation of the sampling distribution of the sample mean – singular.

Take a sample from a population, calculate the mean of that sample, put everything back, and do it over and over. How much do those sample means tend to vary from the “average” sample mean? This is what the standard error of the mean measures. Its longer name is the standard deviation of the sampling distribution of the sample mean.

The lesson is called “Lesson 6: Sampling distributions for sample means”

Not sampling distribution of the means. Not “of” but “for”. Not “deviation” but “distributions”. As usual Tim wants to pick up on anything plural to make his point, but ignores the obvious point that if you are talking about distributions in general you use the plural, and the same with means.

The part he wants to highlight is the part that says “How much do those sample means tend to vary from the “average” sample mean?”. Once again he’s completely unable to distinguish between an explanation of what a sampling distribution is, in a hypothetical sense, and how you actually use it in practice.

Reference 2
https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Introductory_Statistics_(Shafer_and_Zhang)/06%3A_Sampling_Distributions/6.02%3A_The_Sampling_Distribution_of_the_Sample_Mean

is headed “The Sampling Distribution of the Sample Mean”. Again in the singular.

Here’s the part that Tim draws attention to:

“What we are seeing in these examples does not depend on the particular population distributions involved. In general, one may start with any distribution and the sampling distribution of the sample mean will increasingly resemble the bell-shaped normal curve as the sample size increases. This is the content of the Central Limit Theorem.” (bolding mine, tpg)

The bold says sampling distribution, it’s followed by the word “mean” singular.

Reply to  Bellman
July 4, 2023 7:14 am

Oh my, hypocrisy now: “1000 line essays from Tim”

Reply to  Bellman
July 4, 2023 4:36 pm

Once again, you can’t even read!

“Take a sample from a population, calculate the mean of that sample, put everything back, and do it over and over.”

What in Pete’s name do you think this is describing if not multiple samples?

“Once again he’s completely unable to distinguish between an explanation of what a sampling distribution is, in a hypothetical sense, and how you actually use it in practice.”

A distribution has multiple values, not just one! A sampling distribution is developed from multiple samples.

“sampling distribution”

I don’t know what you think you are asserting! *YOU* said you only need one sample to have a sampling distribution. Now you are changing that to saying you need only one SAMPLING DISTRIBUTION! The issue is *not* how many sampling distributions you need but how many samples you need to form a distribution.

Once again you have face planted. You think you can fool everyone with your argumentative fallacy of Equivocation? I.e. changing the definition of what is being discussed?

You are as big of a fool as everyone believes!

Reply to  Tim Gorman
July 4, 2023 5:01 pm

Once again, you can’t even read!

Once again, why do you think I have any interest in reading another hate-filled missive from you, when your first words are to insult me once again?

I’ve pointed to the sections in your own sources that say in so many words you do not need multiple samples to estimate the sampling distribution. That the maths allows you to do that from a single sample, or the population. Believe it or not, that’s up to you – but it’s pathetic to just keep ignoring it and then pointing to every mention of a plural as proof that you can’t do what you have just been shown.

Reply to  Bellman
July 5, 2023 3:09 am

You’ve pointed to *NOTHING* that says you don’t need multiple samples to create a sampling DISTRIBUTION. Every reference you can find on the internet plus Taylor, Bevington, and Possolo says you do. Even the CLT requires you to have multiple samples in order to form a Gaussian distribution around the estimated population mean.

“That the maths allows you to do that from a single sample, or the population.”

You simply cannot do that. Defining a Gaussian distribution requires you to have both an average and a standard deviation. The only way you can find the SEM is by dividing the population standard deviation by the square root of the sample size. And if you already know the population standard deviation then you also already know the population average because they go together. If you already know both the population average and standard deviation then the SEM is meaningless!

Reply to  Tim Gorman
July 5, 2023 5:55 am

You’ve pointed to *NOTHING* that says you don’t need multiple samples to create a sampling DISTRIBUTION.

I’ve no interest in prolonging this discussion at this time, and I worry it’s not good for our mental health.

But for the record here’s a quote again:

Fortunately, you don’t need to repeat your study an insane number of times to obtain the standard error of the mean. Statisticians know how to estimate the properties of sampling distributions mathematically, as you’ll see later in this post. Consequently, you can assess the precision of your sample estimates without performing the repeated sampling.

https://statisticsbyjim.com/hypothesis-testing/standard-error-mean/

I suspect Tim is now fixating on the idea of creating a sampling distribution rather than knowing what it is. As always he seems to be incapable of understanding that a distribution is an abstract concept.
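
[As a neutral illustration of the two procedures being argued about, here is a small simulation sketch with an arbitrary toy population: it compares the single-sample estimate s/√n against the standard deviation of many sample means drawn from the same population. Under i.i.d. sampling the two agree; whether those assumptions hold for the temperature record is the point in dispute.]

```python
# Sketch: standard error of the mean estimated two ways (toy Gaussian population).
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=10.0, scale=2.0, size=100_000)  # hypothetical population
n = 30                                                      # sample size

# (1) Single sample: estimate the SEM as s / sqrt(n).
sample = rng.choice(population, size=n, replace=False)
sem_single = sample.std(ddof=1) / np.sqrt(n)

# (2) Repeated sampling: standard deviation of many sample means.
means = [rng.choice(population, size=n, replace=False).mean() for _ in range(5_000)]
sem_repeated = np.std(means, ddof=1)

print(f"s/sqrt(n) from one sample : {sem_single:.3f}")
print(f"SD of 5000 sample means   : {sem_repeated:.3f}")
# For i.i.d. draws both converge on sigma/sqrt(n) = 2/sqrt(30) ≈ 0.365.
```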

Reply to  Bellman
July 5, 2023 7:03 am

“sampling distributions”

Only in bellman’s alternate universe is “distributions” singular.

Reply to  Tim Gorman
July 5, 2023 8:24 am

This is just sad, and I worry about you.

Statisticians know how to estimate distributions, meaning they know how to estimate the properties of more than one distribution. That does not mean there are multiple distributions for any one sample.

You look at the heights of trees in a forest and estimate the sampling distribution for that. Then you look at the IQ of participants in a discussion forum, and estimate that distribution. Hey, now you know how to estimate two different distributions – plural.

Reply to  Bellman
July 5, 2023 12:27 pm

Those are things you can see. How about the widths of hair strands from a discussion forum, or hundredths of a degree of change? Those are things you need measurements for. Can they estimate the uncertainty by just examining the measurements?

Reply to  Jim Gorman
July 5, 2023 6:50 pm

Talk about missing the point.

Reply to  Bellman
July 6, 2023 4:08 am

If you only take samples of the trees then you need MULTIPLE samples in order to find the average height accompanied by a measure of how accurately that height represents the population of trees, i.e. the standard deviation of the sample means, incorrectly known as the Standard Error.

Estimating distributions is GUESSING! That’s not physical science – it’s voodoo statistics!

You can’t look at one sample of the IQ of the participants and KNOW the distribution with any certainty at all. You simply don’t know if your single sample actually represents the population.

What if your life depended on knowing the average height of the trees in the forest? Would you settle for ONE sample and guess at the average height?

Here’s another one for you. I am a Type 2 diabetic. My endocrinologist recently changed my medication and we both worried about what that would do to my glucose levels at night when my metabolism bottoms out. I risk going into diabetic shock and dying at 55 or lower.

So at 3:30AM I get up and measure my glucose levels with my Freestyle Libre 3 sensor and with my finger-stick meter. Libre says my glucose is 65 +/- 10. The finger-stick meter says 80 +/- 10.

Now what do I tell the endocrinologist my blood sugar is?

73 +/- 5
73 +/- 10
73 +/- 15
73 +/- 20
Or something else entirely?

Remember the 55 danger level for diabetic shock and death.
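
[For what it’s worth, the listed options bracket the standard textbook treatments of that question. The arithmetic sketch below only shows the competing assumptions, using the ±10 figures from the comment; it is not medical advice.]

```python
# Sketch: combining two readings, 65 +/- 10 and 80 +/- 10, under different assumptions.
import math

a, ua = 65.0, 10.0   # sensor reading and its stated uncertainty
b, ub = 80.0, 10.0   # finger-stick reading and its stated uncertainty

mean = (a + b) / 2                       # 72.5

u_direct = (ua + ub) / 2                 # 10.0 -> direct (worst-case style) addition
u_quad   = math.sqrt(ua**2 + ub**2) / 2  # ~7.1 -> quadrature, independent random errors

print(f"mean = {mean} +/- {u_direct} (direct sum of uncertainties)")
print(f"mean = {mean} +/- {u_quad:.1f} (quadrature)")
# Neither treatment addresses the possibility raised later in the thread:
# that one instrument is simply biased, in which case averaging is the wrong move.
```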

Reply to  Tim Gorman
July 7, 2023 3:12 am

I see Bellman has decided to not answer my post. Could it be that he knows his assertions concerning uncertainty are not only wrong but dangerous when applied to reality?

Reply to  Tim Gorman
July 7, 2023 4:00 am

Good grief, man. You post countless aggressive posts every hour, each more demented than the last. I reply to far more than is healthy but each time it just results in more deranged insults. And then if I don’t answer one of them you claim it as a victory.

I’ll look at your post when I have time, but the short answer is you’re wrong. It’s the safest assumption.

Reply to  Bellman
July 7, 2023 4:56 am

I anxiously await your advice on what I should tell my endocrinologist. But I’m not holding my breath.

Reply to  Tim Gorman
July 7, 2023 5:47 am

Remember, this is the guy who knows all my inner thoughts!

Reply to  Tim Gorman
July 7, 2023 6:39 am

If you are waiting on me for medical advice, you are as good as dead. I’m certainly not going to ask you to deal with any of my pressing medical conditions.

Reply to  Tim Gorman
July 7, 2023 6:37 am

OK, let’s start.

Part 1:

Forests.

Wrong on most points. Pointless going through all this again. I’ve explained why you don’t need more than one sample. I’ve explained why taking multiple samples would be pointless and a waste of time and money, and in some cases unethical.

I’ve shown how your own sources point this out. Even the infallible GUM tells you to take a sample of measurements and divide the SD by √N to get the “experimental standard deviation of the mean”.

You would be right to call any of these figures a guess, but it’s a highly educated guess, based on solid mathematics and centuries of practice. But it’s also a guess if you use your sample-of-samples method. You are literally relying on the same concept of looking at the standard deviation of a sample to estimate the standard deviation of the sampling distribution. You could always try taking a sample of samples of samples.

If my life for some reason depended on knowing the average height of the trees in a forest I would far sooner it was based on one large sample than on one smaller sample. If you can repeat a sample of size 30, 30 times just to estimate the uncertainty of a sample of size 30, then why not just take a sample of size 900, which will be more certain?

Reply to  Bellman
July 7, 2023 6:55 am

Part 2

“I am a Type 2 diabetic”

I’m really sorry to hear that. You have my condolences. But I’m definitely not the person to ask for medical advice. Please speak to your doctor, not some anonymous internet source such as me.

I don’t know what would be expected from two different instruments on two different parts of the body, but it’s possible that one of the readings is wrong. Please don’t look at the average of the two. This is a tolerance issue, not a question of the mean, and if it’s critical that the value doesn’t get too low, I would say the safer option is to assume the lower figure is correct.

It’s a similar problem with uncertainty in temperature rises. If you are worried about too much warming, but you are told there are huge uncertainties in the record, you will have to assume the worst case is correct until you have better data.

Reply to  Bellman
July 7, 2023 8:51 am

Fortunately real engineering normally includes a cost-benefit analysis that would prevent spending hundreds of trillions of dollars, planting tens of thousands of wind turbines that don’t work, to fix a problem that can’t be detected (see Fig. 1), based on a number that exists nowhere in the real world.

This is the essence of the GAT hoax.

Reply to  Bellman
July 7, 2023 3:53 pm

“I don’t know what would be expected from two different instruments on two different parts of the body, but it’s possible that one of the readings is wrong”

ROFL!! But not for temperatures? Or LIG thermometers? The difference in readings from taking measurements on the fingertip vs the back of the arm IS EXACTLY THE SAME AS TAKING SINGLE TEMP READINGS AT DIFFERENT LOCATIONS!

Which one is correct? Which one is least correct? Which one is most correct?

“It’s a similar problem with uncertainty in temperature rises. If you are worried about top much warming, but you are told there are huge uncertainties in the record, you will have to assume the worst case is correct until you have better data.”

Except climate science assumes all the uncertainties cancel! So they have no “worst case”!

Reply to  Bellman
July 7, 2023 3:50 pm

“Wrong on most points. Pointless going through all this again. I’ve explained why you don’t need more than one sample. I’ve explained why taking multiple samples would be pointless and a waste of time and money, and in some cases unethical.”

Unless your sample consists of the entire population, you can only ASSUME the sample distribution is the same as the population distribution. With only one sample you have no way to judge how close you are to the population mean. You can’t calculate the SEM unless you know the population standard deviation, and if you already know the population standard deviation then you also know the population mean, and then the SEM is meaningless.

You fail all the way around.

“You would be right to call any of these figures a guess, but it’s a highly educated guess”

It’s not a guess at all let alone an educated guess. It’s an assumption to make things easier – something statisticians do but not people who have to live with the results of the measurements.

It’s like saying I can measure the diameter of all the pistons in my car at 100,000 miles and assume the distribution of wear in that sample is the distribution of wear for all cars.

The CLT allows you to *measure* how close you are. That’s hardly a guess. With just one sample the CLT is not in play!

“based on one large sample than on one smaller sample.”

You keep forgetting Bevington. By the time you make the single sample large enough, nonstatistical fluctuations will have already set in, making your “guess” meaningless.



Reply to  Bellman
July 6, 2023 4:13 am

A distribution is *NOT* an abstract concept. You are trying to get out from under your idiotic statements.

The distribution can be extremely important in physical science. For example, the insulating rings in the booster on the Challenger or the infection factor for COVID in healthy youth, or the shear strength of the steel beams used in a bridge span.

As usual you just totally ignore reality and remain in your alternate universe where guessing at things or ignoring things have no consequence for you.

Reply to  Bellman
July 3, 2023 7:30 am

Stuffing the average formula into the GUM (or the vaunted NIST uncertainty machine) is an abuse of these tools.

You don’t know what you are doing, as well evidenced by this:

I also say that there’s a logical fallacy in applying the concept of a systematic error to an anomaly. Any truly systematic errors will cancel when you subtract one value from another.

Just another indication that you don’t know what you don’t know.

“affect the trend.”

Ah yes, the only thing that really matters. Damn the torpedoes, defend the trend!

Reply to  karlomonte
July 3, 2023 12:35 pm

“Stuffing the average formula into the GUM (or the vaunted NIST uncertainty machine) is an abuse of these tools.”

Strange, it seems like only a little while ago karlo was the one insisting we had to use those equations to understand why uncertainties grew with sample size. It was his answer to all objections, just use equation 10 of the GUM.

Then when we pointed out that it produced the same result, a decrease in uncertainty with sample size, he made a valiant effort to demonstrate how he could make it show increasing uncertainties.

Only when that failed did he go round saying it’s the wrong equation, and it’s an abuse to use it.

Reply to  Bellman
July 3, 2023 2:05 pm

Strange, the trendology lot no longer points to the NIST TN after it was pointed out that it doesn’t say what they claimed.

Reply to  karlomonte
July 3, 2023 3:17 pm

What NIST TN? Do you mean TN1900 example 2 that Jim keeps banging on about, whilst Tim insists doesn’t work in the real world?

I don’t know about the “trendology lot”, but I have mentioned it lots of times. I think it’s correct with a few minor caveats. I’ve really no idea why you think I don’t. After all it’s just saying take the SEM of a collection of maximum temperatures as the uncertainty of the average.

Reply to  Bellman
July 3, 2023 4:08 pm

You do realize that TN 1900 has nothing to do with measurement uncertainty don’t you? Both random and systematic uncertainty is assumed to be negligible.

The purpose is to evaluate the effects of experimental uncertainty as caused by obtaining different varying values for the measurand under the same conditions of measurement. IOW, experiments.

The implication is that measurement uncertainty may be large enough to affect the results and needs to be added into the total uncertainty.

Every time more experimental daily data is added, the results need to be recalculated to find the new mean and experimental uncertainty.

This is exactly how GUM Section 4 deals with experimental uncertainty.

Reply to  Jim Gorman
July 3, 2023 5:48 pm

You do realize that TN 1900 has nothing to do with measurement uncertainty don’t you?

I think I pointed that out to you when you first brought it to my attention. But it is called “Simple Guide for Evaluating and Expressing the Uncertainty of NIST Measurement Results”. Really it depends on how you are defining the measurand and the measurement model. They define it as an observational model, which looks just like a statistical sample to me.

Which one makes more sense would depend on what question you were asking.

Both random and systematic uncertainty is assumed to be negligible.

They assume the calibration uncertainty is negligible. In terms of resolution, the readings are given to the nearest 0.25°C, but the temperatures vary by over 10°C, so it seems unlikely that would have any noticeable effect.

This is exactly how GUM Section 4 deals with experimental uncertainty.

And it is the same method you would apply to a random sample of different things. But for some reason you seem to think that needs different maths. The problem is none of these sources talk about random sampling as such, because they are focused on measurements, and you refuse to use statistical books because they don’t deal with measurement.

Reply to  Bellman
July 4, 2023 4:28 am

“They assume the calibration uncertainty is negligible.”

Possolo also assumed no random error. Did you see *any* statement of uncertainty in the document? He assumed *no* uncertainty which means both systematic and random uncertainty.

It is just like climate science – and you – where it is assumed that all uncertainty is random, Gaussian, and cancels and can therefore be ignored in any statistical analysis of the temperature data sets. Trend lines based on stated values only are 100% accurate because the stated values are 100% accurate!

You just can’t help yourself, can you? You need to abandon your meme of all uncertainty being random, Gaussian, and cancels.

Reply to  Tim Gorman
July 4, 2023 5:45 am

Possolo also assumed no random error.

He does not. It would be stupid if he did, as the measurements are only to the nearest 1/4 degree. What he assumes is that any random uncertainty is negligible compared to what you are actually measuring, the daily variance in temperatures.

I’ll ignore the rest of the credo. There’s only so many times I can hear the same lies repeated as a mantra.

Reply to  Bellman
July 4, 2023 9:22 am

“He does not. It would be stupid if he did, as the measurements are only to the nearest 1/4 degree. What he assumes is that any random uncertainty is negligible compared to what you are actually measuring, the daily variance in temperatures.”

As usual you haven’t even bothered to read TN1900 but here you are making comments about it!

For example, Possolo says: “This so-called measurement error model (Freedman et al., 2007) may be specialized further by assuming that E1, . . . , Em are modeled as independent random variables with the same Gaussian distribution with mean 0 and standard deviation σ. In these circumstances, the {ti} will be like a sample from a Gaussian distribution with mean τ and standard deviation σ (both unknown).” (bolding mine, tpg)

This meets all requirements for assuming random error with a Gaussian distribution that cancels. Thus allowing the variation in the stated values to be used in defining the uncertainty of the average.

This has been pointed out to you on three separate occasions that I know of, one of them being from me. And yet you INSIST on ignoring the implications of what Possolo laid out in the example. I can only assume that is deliberate. No one can be that unintentionally ignorant or forgetful.
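
[A minimal sketch of the calculation implied by the quoted observation-equation model, using synthetic readings rather than the TN1900 data: the average of the m values estimates τ, its standard uncertainty is s/√m, and the expanded uncertainty uses a Student-t coverage factor. The day-to-day spread is treated as the dominant uncertainty, which is exactly the assumption being disputed here.]

```python
# Sketch of the calculation implied by t_i = tau + e_i, e_i ~ N(0, sigma).
# Synthetic readings below are illustrative only, not the TN1900 data.
import numpy as np

t = np.array([24.25, 25.75, 23.50, 26.00, 24.75, 25.25, 23.75, 26.50])  # hypothetical daily values, °C
m = t.size

t_bar = t.mean()           # estimate of tau
s = t.std(ddof=1)          # sample standard deviation of the daily values
u = s / np.sqrt(m)         # standard uncertainty of the mean
k = 2.365                  # Student-t 97.5th percentile for m-1 = 7 degrees of freedom

print(f"tau ≈ {t_bar:.2f} °C, u = {u:.2f} °C, expanded U95 ≈ {k*u:.2f} °C")
# Note: this treats the instrument's own error as negligible next to the
# day-to-day variation, per the model quoted above.
```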

Reply to  Tim Gorman
July 4, 2023 10:42 am

Why don’t you try reading what I said rather than answering points I never made?

Your quote says nothing about the size of the measurement uncertainties, just that it’s assumed that the actual temperatures follow a Gaussian distribution.

Reply to  Bellman
July 3, 2023 4:56 pm

Go look in the mirror.

bdgwx
Reply to  Pat Frank
July 2, 2023 6:57 pm

That’s patently false Pat. It is unequivocal that Bevington intends his readers to use either 4.10, 4.14, or 4.23 to compute the uncertainty of the mean. 4.13 is but an intermediate step used in 4.14 just like 4.22 is an intermediate step used in 4.23.

Your criticism that 4.23 is for random error only cannot be solved with equation 4.22. To handle both random and systematic error you have to use general law of propagation of uncertainty 3.13 which has the covariance term. Of course, JCGM 100:2008 equation 16 does the same thing but uses the concept of correlation via r instead of the covariances.

Reply to  bdgwx
July 2, 2023 8:34 pm

bee’s wax doubles-down on his stupidity.

Reply to  bdgwx
July 2, 2023 9:17 pm

It’s exactly right, bdgwx, both by inspection and according to Bevington’s own text.

Propagation of error is an irrelevant non sequitur. Shifting ground is dishonest.

bdgwx
Reply to  Pat Frank
July 3, 2023 7:11 am

PF: It’s exactly right, bdgwx, both by inspection and according to Bevington’s own text.

Where does Bevington say 4.22 is for systematic error and 4.23 is for random error?

Reply to  bdgwx
July 3, 2023 11:03 am

Bevington is all about random error.

A given researcher must bring knowledge of the sort of error to the calculation.

Doing science is making decisions based on training, knowledge, and standards of practice, bdgwx.

You’re second guessing all of those without possessing any of them.

bdgwx
Reply to  Pat Frank
July 3, 2023 1:01 pm

PF: Bevington is all about random error.

Let me get this straight. Bevington is all about random error except for this one isolated equation on page 58 in a section on relative uncertainties. Is that what you are saying? Where does Bevington say that?

And why would you use an equation that is only for systematic error anyway? Surely you don’t believe that all measurements from all instruments have the EXACT same error?

Reply to  bdgwx
July 3, 2023 1:37 pm

What in Pete’s name makes you think anything on Page 58 has to do with systematic uncertainty?

YOU ARE CHERRY PICKING AGAIN!

Relative uncertainty doesn’t mean “systematic uncertainty”!

Bevington says right in the start of the book that systematic uncertainty is not amenable to statistical analysis! How many times have you been given the exact quote? Do you want it again? Will you remember it if I give it to you ONE MORE TIME?

You are totally lost in the weeds of uncertainty. You have been so for a long time!

Reply to  Tim Gorman
July 3, 2023 2:06 pm

There is no way out for them.

bdgwx
Reply to  Tim Gorman
July 3, 2023 4:41 pm

TG: What in Pete’s name makes you think anything on Page 58 has to do with systematic uncertainty?

Nothing. That’s my point. It sounds like you’re just as incredulous about it as I am. Can you help me convince Pat that Bevington does not mention anything about systematic uncertainty on page 58 or in relation to equation 4.22?

Reply to  bdgwx
July 3, 2023 4:59 pm

So what if he doesn’t mention it on Page 58. He says on Page 3, right at the start of the book, “The accuracy of an experiment, as we have defined it, is generally dependent on how well we can control or compensate for systematic errors, errors that will make our results different from the true values with reproducible discrepancies. Errors of this type are not easy to detect and not easily studied by statistical analysis.”

On Page 6 he says: “As we make more and more measurements, a pattern will emerge from the data. Some of the measurements will be too large, some will be too small. On the average, however, we expect them to be distributed around the correct value, assuming we can neglect or correct for systematic errors.” (bolding mine, tpg)

In the book, he assumes distributions that are random, with *NO* systematic uncertainty. As he says, systematic errors are not easy to detect and not easily studied by statistical analysis.

You and climate science want to ignore systematic bias in measurements so you can assume everything is random, Gaussian, and that it all cancels. Then you can play with your statistical analysis with no problem.

Pat Frank has shown that you simply 1. can’t ignore systematic uncertainty in the form of instrumental uncertainty and 2. that systematic uncertainty does not cancel.

And that just chaps your backside no end and you will make any claim, no matter how stupid it is, to try and refute those simple facts. Wake up and smell the roses the rest of us enjoy in the real world!

Reply to  bdgwx
July 3, 2023 5:00 pm

Time to sharpen the point on your head.

Reply to  Tim Gorman
July 3, 2023 4:59 pm

Incredible, he’s the self-proclaimed “expert” who doesn’t even grasp the basic concepts let alone the terminology. And now he has the temerity to lecture Pat Frank.

Reply to  karlomonte
July 3, 2023 5:24 pm

These are people that have never, NOT ONCE, put their knowledge of measurement and measurement uncertainty on the line for their reputation, financial well-being, or for personal/criminal liability if their estimates of uncertainty should prove to be wrong.

And yet, here they are lecturing us on how all uncertainty cancels. What a freaking joke!

Reply to  bdgwx
July 3, 2023 5:25 pm

Is that what you are saying?

No.

Reply to  bdgwx
July 3, 2023 4:51 am

“To handle both random and systematic error you have to use general law of propagation of uncertainty 3.13”

You are back to cherry-picking with no actual understanding of the context of what you are cherry-picking.

Bevington says in Section 3.1: “However, we may have some control over these uncertainties and can often organize our experiment so that the statistical errors are dominant.”

Both Bevington and the GUM develop Eq 3.13 with the understanding that systematic bias does not exist, only random error (Bevington’s “statistical errors”).

This carries over into Chapter 4 of Bevington as well. In the lead-in to Chapter 4 he states: “In Chapter 2 we defined the mean u of the parent distribution and noted that the most probable estimate of the mean u of a random set of observations is the average x_bar of the observations.”

This implies multiple measurements of the same thing with no systematic bias in the measuring device.

Yet you continue trying to apply these methods of statistical analysis to a situation where there is both systematic bias and likely no Gaussian distribution of random error – i.e. the temperature record.

It just confirms that you have the very same belief as bellman and Stokes: all uncertainty is random, Gaussian, and cancels.

You folks simply can’t get out of the box you have built for yourselves using that meme!

Reply to  Bellman
July 2, 2023 4:01 am

Of course you don’t understand what uncertainty is so your statement that the interval remains the same is meaningless.

Pat has never said you get the same error, only that you get the same uncertainty! The actual magnitude of “error” is unknown and forever unknowable. It will exist somewhere in the interval and you can’t know it – ever.

Reply to  Bellman
July 2, 2023 10:46 am

I’ll just draw my own conclusions.

You did that right from the start and have promulgated them ever since.

Reply to  Bellman
July 1, 2023 3:02 pm

“Unskilled and Unaware”

Look it up.

Reply to  karlomonte
July 1, 2023 3:09 pm

“Unskilled and Unaware”

At the risk of using a worn-out cliché:

IRONY OVERLOAD.

Reply to  Bellman
July 1, 2023 9:45 pm

Heh, more amusement from the pseudoscience GAT hoax crowd.

Reply to  Bellman
July 2, 2023 8:32 pm

In all my years working in the software industry, rtfm was my least favourite excuse. It’s usually just blaming the customer for your own bad design.

Logical fallacy of the Inappropriate Analogy—good job.

Reply to  Bellman
July 2, 2023 10:41 am

Because they’re not, “averaging thousands of instruments on a single day, or the global average over a month, or a year or 30 years.”

Eqns. 5 and 6 obviously calculate the root-mean-square of the uncertainty over different time scales.

Obvious on its face, and obvious to everyone except you and bdgwx, apparently.

bdgwx
Reply to  Pat Frank
July 2, 2023 11:35 am

PF: Eqns. 5 and 6 obviously calculate the root-mean-square of the uncertainty over different time scales.

Obvious on its face, and obvious to everyone except you and bdgwx, apparently.

Yeah. We know. It’s obvious to Bellman and me too.

The question has always been…why? Why did you choose to use RMS?

Reply to  bdgwx
July 2, 2023 1:29 pm

Why did you choose to use RMS?

Because the analysis at that point required the rms of the uncertainty.

Reply to  Pat Frank
July 2, 2023 12:12 pm

Yes that’s what they do. The question is why?

RMS only tells you the individual measurement uncertainty over the different time periods, and as you already knew that, your equations are just leaving you with the number you first thought of.

But you then try to pass these off as the uncertainty of the mean, and I think that’s misleading and at the least needs to be justified, rather than just being taken for granted.

Reply to  Bellman
July 2, 2023 12:47 pm

Hah! How can two humans possibly be this neutronium dense?

Reply to  Bellman
July 2, 2023 1:33 pm

as the uncertainty of the mean

No. The mean of the uncertainty.

You and bdgwx are caught in an infinite mistake loop. The prime failing of the ideologically committed.

Reply to  Pat Frank
July 2, 2023 2:55 pm

What you actually say in the paper is

The uncertainty in Tmean for an average month (30.417 days) is the RMS of the daily means

Likewise, for an annual land-surface air-temperature mean

If you meant you are calculating the average uncertainty then that seems an odd way of phrasing it. Why talk about an annual mean if you are not calculating the uncertainty of that mean?

And all you are doing here is saying the average uncertainty is the average uncertainty you already calculated in (4).

But if you do intend it to be the average uncertainty, how can you then claim in the next paragraph

Noteworthy is that the measurement uncertainty conditioning a temperature anomaly based upon the uncertainty in Tmean alone is (T^M_mean − T^30-year_normal) = T^M_anomaly, and 2σ^M_anomaly = 1.96 × ±√(0.195² + 0.195²) = ±0.540 °C, where M is month.

and then go on to use this average uncertainty figure in 4.4 to compute “the lowest limit of uncertainty in any global annual LiG-derived air-temperature anomaly prior to 1981”.

Whatever you may claim, somehow the average uncertainty is becoming the uncertainty of the average.

Reply to  Bellman
July 2, 2023 6:25 pm

Resolution provides the constant lower limit of uncertainty in each LiG measurement.

The resolution uncertainty in a daily mean is identical to the resolution uncertainty in a monthly mean because the uncertainty in every daily mean enters e.g., 30 times per month and is divided by 30 for the mean of uncertainty, i.e., eqn. 5.

Likewise, it’s identical to the resolution uncertainty in an annual mean of uncertainty, by eqn. 6, and necessarily the same in a 30-year normal.

When the anomaly is calculated, the 30 year mean of resolution uncertainty in the normal must be added in quadrature with the mean of uncertainty in, e.g., the annual mean temperature.
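
[A quick check of the ±0.540 °C figure quoted above, under the reading of eqns. 5 and 6 given here: a constant 1σ per-value uncertainty of ±0.195 °C carried by both the annual mean and the 30-year normal, combined in quadrature and expanded by 1.96.]

```python
# Check of the anomaly uncertainty quoted from the paper: 1.96 * sqrt(0.195^2 + 0.195^2).
import math

u_mean   = 0.195   # 1-sigma resolution uncertainty carried by the monthly/annual mean, °C
u_normal = 0.195   # the same uncertainty carried by the 30-year normal, °C

u_anomaly_2sigma = 1.96 * math.sqrt(u_mean**2 + u_normal**2)
print(f"2-sigma anomaly uncertainty = ±{u_anomaly_2sigma:.2f} °C")   # ≈ ±0.54 °C
```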

Reply to  Pat Frank
July 2, 2023 6:34 pm

Why is this hard to understand, especially about anomalies? It’s an article of faith that anomalies can exist in the thousandths.

Reply to  Jim Gorman
July 2, 2023 7:09 pm

I’ve really no idea why you keep repeating this nonsense. Nobody claims an anomaly for an individual station is more precise than the absolute value. That isn’t the point of using anomalies.

Reply to  Bellman
July 2, 2023 8:58 pm

Rohde, et al, 2013, p. 4: “For temperature differences, the C(x) term cancels (it doesn’t depend on time) and that leads to much smaller uncertainties for anomaly estimates than for the absolute temperatures. (my underline)”

Nick Stokes
Reply to  Pat Frank
July 3, 2023 2:53 am

Bellman is correct. Rohde is saying that an anomaly estimate of global average temperature has much smaller uncertainty than the global average of absolute temperatures. Which it does. The reason is that the main component of GAT uncertainty is coverage error – what if you had sampled in different places. Anomalies are more spatially homogeneous, so coverage error is smaller.

Reply to  Nick Stokes
July 3, 2023 4:32 am

“Anomalies are more spatially homogeneous, so coverage error is smaller.”

Wintertime temperatures have a different variance than summertime temperatures. Thus it follows that the anomalies calculated for each will also have different variances. That difference in variance is a measure of the uncertainty of the result. If the anomalies have the same variance as the absolute temperatures then they will have the same uncertainty.

Meaning the GAT has the same uncertainty as the absolute temperatures.

In climate science you can’t tell because no one ever calculates the variances of anything! Not of the base data, not of the daily averages, not of the monthly averages, nor of the annual averages.

Now, come back and tell us that variances cancel when you take an average or that they cancel when you do a subtraction to obtain an anomaly.

Reply to  Nick Stokes
July 3, 2023 5:59 am

Bellman is prima facie wrong. Rohde et al., produced exactly what Bellman supposed does not happen.

Reply to  Nick Stokes
July 3, 2023 7:40 am

“Which it does.”

Oh look, Nitpick shows up in a vain attempt to rescue his flailing acolytes.

“the main component of GAT uncertainty is coverage error”

Bullshite — as usual, the trendology clowns throw away the variances from their myriad of averages.

bdgwx
Reply to  Pat Frank
July 3, 2023 7:09 am

And the law of propagation of uncertainty gives us the answer why anomalies have a lower uncertainty than absolute values. If you look at the correlation term for the measurement model y = a – b you’ll see 2*c_a*c_b*u(a)*u(b)*r(a, b), where c_a and c_b are the partial derivatives of y. Notice that c_a = 1 and c_b = -1. So when r(a, b) > 0 the whole term is negative, thus reducing the uncertainty. And when r(a, b) = 1 (and u(a) = u(b)) there is full cancellation of error, thus u(y) = 0.
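
[For anyone who wants to see the term being described, here is a small sketch of the propagation formula for y = a − b with a correlation coefficient r, using illustrative numbers; whether any r > 0 is justified for real station data is what the rest of the thread disputes.]

```python
# Propagation of uncertainty for y = a - b with correlation r between the errors in a and b.
import math

def u_diff(u_a, u_b, r):
    """Combined standard uncertainty of y = a - b:
    u(y)^2 = u_a^2 + u_b^2 + 2*(1)*(-1)*u_a*u_b*r
    """
    return math.sqrt(u_a**2 + u_b**2 - 2.0 * u_a * u_b * r)

u_a = u_b = 0.195   # illustrative per-value uncertainty, °C
for r in (0.0, 0.5, 1.0):
    print(f"r = {r}: u(a - b) = {u_diff(u_a, u_b, r):.3f} °C")
# r = 0 gives the quadrature sum (~0.276 °C); r = 1 with equal uncertainties gives
# full cancellation (0). Which value of r is physically justified is the point in dispute.
```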

Reply to  bdgwx
July 3, 2023 7:34 am

And the law of propagation of uncertainty gives us the answer why anomalies have a lower uncertainty than absolute values. “

Malarky. Look at Taylor, Section 2.5. It doesn’t matter if you add or subtract, the uncertainties ADD!

It’s easily explained by variances. It doesn’t matter whether you add or subtract random variables, when you combine them their variances add. Variance is just a measure of uncertainty. The wider the variance the higher the uncertainty.

How many statistics textbook quotations do you need before this sinks in? I can give you quotes from five textbooks.

Why do you assume *any* correlation between temperatures measured in different locations by different measuring devices? Do you understand what a “confounding variable” is? Temperatures generally correlate to the seasons, not to themselves. Temperatures at Station A have many contributing factors, including pressure, elevation, humidity, geography, and terrain. Each of these can be different for each location, leading to highly uncorrelated temperatures. The temperature on Pikes Peak is, generally, not correlated to the temperature in Colorado Springs, except for seasonal variations at both. The temperatures in Boston are not generally correlated to those in Kansas City. KC temps may go up and down with no relationship at all to what is happening in Boston (other than seasonal variation) because of Boston’s closeness to the ocean.

You keep trying to bring in things that make no physical sense. So do the climate scientists.
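
[A quick numerical check of the narrow claim above that, for independent random variables, the variances add whether you take a sum or a difference; toy distributions, for illustration only.]

```python
# Monte Carlo check: for independent A and B, Var(A + B) ≈ Var(A - B) ≈ Var(A) + Var(B).
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(0.0, 1.5, size=1_000_000)   # independent toy variables
B = rng.normal(0.0, 2.0, size=1_000_000)

print(f"Var(A) + Var(B) = {A.var() + B.var():.3f}")   # ≈ 1.5^2 + 2.0^2 = 6.25
print(f"Var(A + B)      = {(A + B).var():.3f}")
print(f"Var(A - B)      = {(A - B).var():.3f}")
# Both the sum and the difference carry the combined variance; the sign of the
# operation does not shrink it. Correlation between A and B would change this.
```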

Reply to  Tim Gorman
July 3, 2023 12:17 pm

“It doesn’t matter whether you add or subtract random variables, when you combine them their variances add. ”

Independent random variables.

It’s always the same with you. You always come back to the meme that all uncertainties are random and independent. [/sarc]

Reply to  Bellman
July 3, 2023 4:16 pm

When you are measuring different things using different things they *are* random, independent, and uncorrelated.

We’ve been down the road before. Go look up confounding variables again.

Reply to  Tim Gorman
July 3, 2023 5:19 pm

When you are measuring different things using different things they *are* random, independent, and uncorrelated.

You keep forgetting which side you are supposed to be arguing. It must be difficult to keep all your stories straight when you are trolling.

You are the one who keeps insisting that all measurements have some systematic error. You want to argue that averaging different things does not result in any cancellation. This only makes sense if you think there is perfect correlation between all values, or measurements, and they are not independent or random.

Reply to  Bellman
July 4, 2023 4:05 am

The *stated values ± uncertainty interval* are what are random. The uncertainty interval *does* include systematic uncertainty.

You can’t even get this simple fact straight. It’s no wonder you can’t get *any* of it right!

You want to argue that averaging different things does not result in any cancellation. This only makes sense if you think there is perfect correlation between all values, or measurements, and they are not independent or random.”

Unfreakingbelievable. If I pick up two boards out of the ditch as I travel, exactly how are their lengths correlated? How are they not independent and random? How do I expect any cancellation of uncertainty if I use them as part of a roof truss for a shed I am building?

If I use two different measuring tapes when I get home, how are the measuring devices correlated?

You really *are* lost in your cultists dogma!

Reply to  bdgwx
July 3, 2023 8:26 am

More tool abuse.

Reply to  bdgwx
July 3, 2023 11:00 am

30-year normals are independent of any monthly or annual temperature mean.

bdgwx
Reply to  Pat Frank
July 3, 2023 12:57 pm

PF: 30-year normals are independent of any monthly or annual temperature mean.

That’s not true at all. If there is a time invariant systematic effect on measurements (and there will be) then both the 30 yr average and monthly/annual anomaly based off it will contain that systematic effect. That portion cancels out when doing the subtraction.

The real issue is in how we quantify the components of uncertainty that do not cancel on the anomaly subtraction.

Reply to  bdgwx
July 3, 2023 2:22 pm

That portion cancels out when doing the subtraction.

Pardon me while I laugh.

bdgwx
Reply to  Pat Frank
July 4, 2023 9:01 am

PF: Pardon me while I laugh.

You can laugh all you want. The math is indisputable. Prove me wrong. Set y = a – b and r > 0 and show that the uncertainty is not reduced via the law of propagation of uncertainty, JCGM 100:2008 equation 16.

Reply to  bdgwx
July 4, 2023 1:20 pm

Your math is irrelevant, your understanding of the systematic error due to environmental variables is wrong.

bdgwx
Reply to  Pat Frank
July 4, 2023 3:00 pm

It’s not my math. It’s the law of propagation of uncertainty. It appears in many texts on the topic of uncertainty analysis like Bevington, Vasquez, and JCGM which you’ve cited in your publications. And because you cited them I presume it is the math you wanted your readers to use.

Reply to  bdgwx
July 4, 2023 5:37 pm

The GUM equations are based on multiple measurements of the same thing and assume no systematic bias in the measurements.

You simply can’t accept that it doesn’t apply to multiple measurements of different things using different devices that have different systematic uncertainties.

It would mean that you have to abandon the claims of climate science as being physically and mathematically unsound – you would, in essence, become a heretic to that which you are defending. And you simply can’t make that leap, no matter what proof you are given.

bdgwx: All uncertainty is random, Gaussian, and cancels. The first commandment of climate science. I’m sure its written on a tablet somewhere!

Reply to  bdgwx
July 3, 2023 2:39 pm

Dude, you are subtracting two random variables. There is NO cancellation. They add, simply add. You can’t even prove that they are orthogonal so you can use RSS.

bdgwx
Reply to  Jim Gorman
July 4, 2023 9:02 am

JG: Dude, you are subtracting two random variables. There is NO cancellation.

Prove me wrong. Set y = a – b and r > 0 and show that the uncertainty is not reduced via the law of propagation of uncertainty via JCGM 100:2008 equation 16.

A simple request…please…use a computer algebra system to verify your work before posting.

Reply to  bdgwx
July 3, 2023 4:34 pm

You are dead wrong on this. Go look at Taylor, Section 2.5. He covers this in detail. Uncertainties add, they *always* add.

bdgwx
Reply to  Tim Gorman
July 4, 2023 9:04 am

TG: You are dead wrong on this.

Prove me wrong. Set y = a – b and r > 0 and show that the uncertainty is not reduced via the law of propagation of uncertainty via JCGM 100:2008 equation 16.

A simple request…please…use a computer algebra system to verify your work before posting.

Reply to  bdgwx
July 4, 2023 11:54 am

Someone needs to hit the big red switch on bgwxyz, he’s stuck in a loop again.

Reply to  karlomonte
July 4, 2023 5:45 pm

He’s never out of the loop!

Reply to  bdgwx
July 4, 2023 1:46 pm

That is an ill posed question. Start over.

If the variables have the same effect on “y”, then the uncertainties will add. r > 0 could mean anything from little correlation to perfect correlation.

Have you never written a hypothesis and worked out the math from the data you have, and then designed an experiment to verify it? Then had to say, shite, something is wrong?

You don’t start with a half-baked equation with no data, assumptions, or planned result. All you want to do is troll. You have plenty of equations in the essay.

Why don’t you write an essay explaining what you think the uncertainty should be, from Tmax and Tmin all the way through to GAT, as shown in the database of your choice? You obviously think there are errors in everything done here. Put your money where your mouth is and show how you would do it.

Reply to  bdgwx
July 4, 2023 5:43 pm

I’ve given you the proof. I’ve pointed you to where it’s laid out in Taylor, Chapter 2. And you absolutely refuse to accept it for some reason.

If you have a ± u1 and b ± u2 then the uncertainty interval for a runs from a − u1 to a + u1. Similarly for b, from b − u2 to b + u2.

When you add the two, a + b, the result can lie anywhere from (a + b) − (u1 + u2) to (a + b) + (u1 + u2). The interval increases.

When you subtract the two the same thing happens: a − b can lie anywhere from (a − b) − (u1 + u2) to (a − b) + (u1 + u2). The interval increases.

It doesn’t matter if you subtract or add two anomalies, the resultant uncertainty interval is the sum of the two component uncertainty intervals. You can add them directly if you think there is no cancellation between the two or you can add them in quadrature if you think there is *some* cancellation between the two. What you can’t do is assume they cancel!

Reply to  bdgwx
July 3, 2023 5:02 pm

Who are “we”? The voices inside your head.

You have no idea what you are talking about.

Reply to  Bellman
July 3, 2023 4:37 am

“Nobody claims an anomaly for an individual station is more precise than the absolute value. That isn’t the point of using anomalies.”

Nick Stokes claims EXACTLY this!

If the precision of the anomaly is the same as the absolute temps then why use anomalies? Stokes claims it’s because it increases homogeneity somehow. Yet the variance of temps in a single month is different for the NH than for the SH. That carries through to the anomalies as well. So how does averaging those together increase homogeneity?

Reply to  Tim Gorman
July 3, 2023 12:01 pm

“Nick Stokes claims EXACTLY this!”

When? Please quote the exact words and the entire context.

You have demonstrated too imperfect an understanding of most of what you are told for me to trust your assertion.

Reply to  Bellman
July 3, 2023 4:14 pm

Stokes: “Because some deviate high and some deviate low, for whatever reason. And when you add them, the deviations cancel.”

“Errors cancel even if not normally distributed (which is not the same as random).”

If you would bother to follow the entire thread you would already know this.

Reply to  Tim Gorman
July 3, 2023 5:24 pm

Because some deviate high and some deviate low, for whatever reason. And when you add them, the deviations cancel

Talking about averaging not anomalies.

“Errors cancel even if not normally distributed (which is not the same as random).”

Talking about averaging not anomalies.

I’ll take this as an admission you were lying about Stokes.

Reply to  Bellman
July 4, 2023 4:15 am

“Talking about averaging not anomalies.”

OMG! When you add them they cancel but they don’t cancel when you subtract them? Or do they always cancel when you add them or subtract them?

Which is it?

“Talking about averaging not anomalies.”

Averages of skewed distributions don’t sit at the left/right equality point. The median is a far better descriptor. What is the median of the climate databases? And again, averages are calculated from sums while anomalies are calculated from differences. You are apparently saying that in one case the uncertainties cancel and in the other they don’t. In which one do the uncertainties cancel?

Nope, no lying about Stokes. He believes even systematic errors cancel!

Reply to  Tim Gorman
July 4, 2023 5:18 am

Do you actually talk like this in real life?

OMG! When you add them they cancel but they don’t cancel when you subtract them?

No. When you add them they increase, when you subtract them they increase. It’s only when you take an average they decrease.

Reply to  Bellman
July 4, 2023 6:32 am

These subjects are completely beyond your ken, just admit it.

Your GAT hoax political agenda is completely transparent.

Reply to  Bellman
July 4, 2023 8:38 am

Sad. Truly, truly sad.

The average uncertainty is *NOT* the uncertainty of the average. You’ll never understand this, will you?

Reply to  Tim Gorman
July 4, 2023 9:35 am

The really sad thing is that you still can’t see I’m agreeing with you. The average uncertainty is *NOT* the uncertainty of the average. It’s what I’ve been trying to tell Pat Frank all this time.

Reply to  Bellman
July 4, 2023 1:43 pm

It’s what I’ve been trying to tell Pat Frank all this time.

Pardon me (again) while I laugh.

Delusional or extremely short attention span, one can’t decide which.

Reply to  Pat Frank
July 4, 2023 2:04 pm

Bellman: “The average uncertainty is *NOT* the uncertainty of the average. It’s what I’ve been trying to tell Pat Frank all this time.”

Pat Frank: “Delusional or extremely short attention span, one can’t decide which.”
(pointing to a comment that says):

In that case I would have no objection. The mean of the uncertainty is obviously the uncertainty of the instrument.

But then you go on to use these means of the uncertainties to calculate the uncertainty of an anomaly. Specifically using them as the monthly and 30-year uncertainties. I don’t see how this makes sense if the values are the average uncertainty.

Then in section 4.4 you are using the same values to calculate the uncertainty of global annual temperatures and anomalies.

Could someone point to where the contradiction is?

bdgwx
Reply to  Pat Frank
July 4, 2023 2:43 pm

PF: Pardon me (again) while I laugh.

You can laugh all you want. It does not change the fact that u(Σ[x]/N) is different than √(Σ[u(x)^2/N]) or Σ[u(x)]/N.
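
To put illustrative numbers on those three quantities, a minimal Python sketch (the per-reading uncertainties are invented, and the Monte Carlo check assumes the errors are independent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented per-reading standard uncertainties for N = 30 readings
u = np.linspace(0.1, 0.5, 30)   # C, 1-sigma, deliberately unequal
N = len(u)

mean_uncertainty = u.sum() / N                  # sigma[u(x)]/N
rms_uncertainty  = np.sqrt((u**2).sum() / N)    # sqrt(sigma[u(x)^2]/N)
u_of_mean_indep  = np.sqrt((u**2).sum()) / N    # u(sigma[x]/N), independent errors

# Monte Carlo check of the independent-error case
errors = rng.normal(0.0, u, size=(100_000, N))
u_of_mean_mc = errors.mean(axis=1).std()

print(f"mean uncertainty            : {mean_uncertainty:.3f}")
print(f"RMS uncertainty             : {rms_uncertainty:.3f}")
print(f"u(mean), independent errors : {u_of_mean_indep:.3f}")
print(f"u(mean), Monte Carlo check  : {u_of_mean_mc:.3f}")
# If the errors were fully shared (systematic), u(mean) would equal the
# mean uncertainty instead of shrinking.
```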

Reply to  Bellman
July 4, 2023 6:28 am

bellcurveman tries to rescue Nitpick Nick!

I love it.

Reply to  Tim Gorman
July 3, 2023 5:24 pm

It was shorthand. It would be incorrect to impute that he meant they cancel totally with less than an infinite sample size; he explained it completely and correctly elsewhere in the thread. See his comment that ended his description of the few distributions that do not tend towards the mean with increasing sample size.

Reply to  bigoilbob
July 4, 2023 4:23 am

You are confused. Either uncertainties cancel or they don’t. He did *NOT* explain it completely or correctly ANYWHERE. The standard deviation of the sample means gets smaller with larger sample sizes, no one disputes that. What is being disputed is whether the standard deviation of the sample means describes the accuracy of the population mean or if it just tells you how precisely you have calculated the population mean from your sampling.

It appears that in climate science it is usually assumed that the standard deviation of the sample means *is* the uncertainty associated with the population mean. That can only make sense if all the uncertainties are random, Gaussian, and cancel, leaving the population mean as the “true value”. That simply doesn’t apply when you have skewed distributions or systematic error in the dataset members. Temperature datasets *are* skewed, and they all contain systematic error; as Pat has shown, the instrumental systematic uncertainty is large enough to invalidate the assumption that there is either no systematic error or that it cancels. Uncertainty is not error; it doesn’t have a scalar value and simply can’t cancel out.
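
A minimal simulation of that point (Python, invented numbers): the SEM keeps shrinking with sample size, while a shared systematic offset in the readings never averages away:

```python
import numpy as np

rng = np.random.default_rng(1)

true_value = 15.0     # invented "true" temperature, C
systematic = +0.3     # invented shared calibration bias, C (does not cancel)
random_sd  = 0.5      # invented random scatter per reading, C

for n in (10, 100, 1000, 10_000):
    readings = true_value + systematic + rng.normal(0.0, random_sd, n)
    sem   = readings.std(ddof=1) / np.sqrt(n)    # standard error of the mean
    error = readings.mean() - true_value         # actual error of the mean
    print(f"n={n:6d}  SEM={sem:.4f}  actual error of mean={error:+.4f}")

# The SEM shrinks toward zero; the actual error settles near the +0.3 bias.
```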

Reply to  Tim Gorman
July 4, 2023 6:35 am

And to top it all off, the climastrologers throw away all the standard deviations from their myriad of averages.

Voila! milli-Kelvin “uncertainties”.

Reply to  karlomonte
July 4, 2023 9:45 am

If nothing else, every baseline has a variance/standard deviation. When it is subtracted from a monthly/annual average to calculate an anomaly, there should at least be a variance contribution from the baseline. I’ll guarantee this far exceeds an anomaly value quoted to the hundredths place. It will be something like 0.002 ±0.7.
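
As a rough sketch of that arithmetic (Python, invented values): differencing a monthly mean from its baseline normal combines the two uncertainties in root-sum-square, so the anomaly’s uncertainty dwarfs the anomaly itself:

```python
import math

# Invented illustrative numbers
monthly_mean  = 14.502   # C
u_monthly     = 0.5      # C, uncertainty in the monthly mean
baseline_norm = 14.500   # C, 30-year baseline ("normal")
u_baseline    = 0.5      # C, uncertainty carried by the baseline

anomaly   = monthly_mean - baseline_norm
u_anomaly = math.sqrt(u_monthly**2 + u_baseline**2)   # root-sum-square of the two

print(f"anomaly = {anomaly:.3f} +/- {u_anomaly:.1f} C")   # ~0.002 +/- 0.7 C
```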

Reply to  Jim Gorman
July 4, 2023 11:56 am

Ever see any of them try to calculate an appropriate degrees of freedom?

Reply to  Pat Frank
July 2, 2023 6:43 pm

The resolution uncertainty in a daily mean is identical to the resolution uncertainty in a monthly mean because the uncertainty in every daily mean enters e.g., 30 times per month and is divided by 30 for the mean of uncertainty, i.e., eqn. 5.

Are you now saying equation 5 is the uncertainty of the monthly mean, and not the average uncertainty? Or are you saying it’s both?

Adding an uncertainty 30 times and then dividing by 30 is still only giving you the mean uncertainty, not the uncertainty of the mean.

Reply to  Bellman
July 3, 2023 6:02 am

How many times have you been told that the mean uncertainty is the desired quantity.

Reply to  Pat Frank
July 3, 2023 7:03 am

He simply doesn’t grok what you are saying at all.

Reply to  Tim Gorman
July 3, 2023 10:57 am

Agreed, Tim. An idée fixe in action.

Reply to  Tim Gorman
July 3, 2023 11:56 am

Indeed I don’t. It makes no sense to me and I think it’s an evasive answer. I have suspicions about why he thinks the mean uncertainty is the desired quantity, but I would like to give him the chance to explain it first.

Reply to  Bellman
July 3, 2023 2:19 pm

It makes no sense to me …

because you’re ignorant of the subject Bellman and are evidently unable to parse a clarifying explanation.

It’s been explained over and over, but to no evident avail.

Were someone like you in any class I’ve taught, I’d advise transferring to another major because scientific waters are far too deep for you.

…and I think …

A vast overstatement.

Reply to  Pat Frank
July 3, 2023 3:08 pm

“Were someone like you in any class I’ve taught, I’d advise transferring to another major because scientific waters are far too deep for you.”

And if you were my teacher I’d drink it.

Reply to  Bellman
July 3, 2023 5:05 pm

What a comeback—does nothing to heal your abject ignorance.

But someone gave you an upvote!

Success is yours!

Reply to  Bellman
July 3, 2023 5:21 pm

Good. Let’s leave it there.

bdgwx
Reply to  Pat Frank
July 3, 2023 8:18 am

PF: How many times have you been told that the mean uncertainty is the desired quantity.

That’s absurd. The mean uncertainty is nearly useless. It tells you very little about the uncertainty of the mean.

Reply to  bdgwx
July 3, 2023 10:55 am

The analysis required the mean uncertainty, bdgwx. It’s far from useless. See Figure 17 and Figure 19.

Realize you don’t get it, and turn the page.

bdgwx
Reply to  Pat Frank
July 3, 2023 12:46 pm

PF: The analysis required the mean uncertainty, bdgwx. It’s far from useless. See Figure 17 and Figure 19.

I’m not saying the mean uncertainty is useless as a building block for one or more steps of the analysis.

I’m saying it is useless as a proxy for the uncertainty of the global average temperature anomaly because it doesn’t tell you the dispersion of errors typical of a global average temperature (GAT). Yet you are using it as a proxy for the uncertainty of the GAT in Figures 17 and 19. That is misleading at best.

I will repeat again and again. The mean uncertainty is not the same thing as the uncertainty of the mean. The mean uncertainty does not tell us the dispersion of errors typical of a mean. Only the uncertainty of the mean does that.

Reply to  bdgwx
July 3, 2023 2:13 pm

It tells you the minimum of uncertainty in every single measurement.

Uncertainty that does not average away. Uncertainty that increases when combined with the uncertainty in a normal (the baseline) when calculating an anomaly.

The uncertainty due to instrumental resolution and calibration should have been the first inventory of those compiling the air temperature record.

But it wasn’t. They ignored it. Perhaps they lacked the understanding of instrumental methods. Whatever.

You give no evidence of understanding instrumental methods either — of how to evaluate the resolution of instruments or the reliability of measurements.

You criticize from ignorance, bdgwx, and your partisanship evidently causes you to reject a clarifying explanation.

Reply to  bdgwx
July 3, 2023 6:29 pm

Another face-plant by the “expert”.

Reply to  Pat Frank
July 3, 2023 8:56 am

How closely you have calculated the mean, the SEM, is *NOT* the desired quantity. The accuracy of the population mean is the desired quantity. That has to be propagated from the individual members of the data set (i.e., the temperatures) onto the average. How precisely you calculate the population average (i.e., the average of the stated values without considering the measurement uncertainty of those stated values) has nothing to do with the accuracy of the average.

Reply to  Pat Frank
July 3, 2023 11:52 am

“How many times have you been told that the mean uncertainty is the desired quantity.”

You keep saying it, but I’m trying to figure out why it would be the desired quantity. What makes it desirable to you? What question are you trying to answer?

Saying you are only interested in the mean uncertainty doesn’t explain how you are calculating the uncertainty of an anomaly, or why you publish graphs showing mean temperatures alongside your average uncertainties. Nor does it explain why you are questioning the fact that estimates of the uncertainty of the mean are smaller than your mean uncertainty.

Reply to  Bellman
July 3, 2023 2:05 pm

but I’m trying to figure out why

Here’s why

Reply to  Pat Frank
July 3, 2023 5:09 pm

“Thank-you for leaving indelible proof that you understand none of it.”

Says it all, and it still boils down to Unskilled and Unaware.

Reply to  Bellman
July 2, 2023 8:35 pm

Who did you pay to give you two upvotes?

Reply to  Bellman
July 1, 2023 10:47 am

Peculiar. Your comment evidences no understanding of resolution.

There’s no point explaining bits of a paper to someone who hasn’t the grace to study it first.

bdgwx
Reply to  Pat Frank
July 1, 2023 12:18 pm

Our issue isn’t with the LiG resolution uncertainty. We understand that it exists and is similar if not the same for each measurement.

The issue is in how you combined that uncertainty into daily, monthly, and annual means.

What you have effectively done, whether you realized it or not, is to assume r = 0 for the daily average (after the typo in equation 4 is corrected anyway) and assume r = 1 for the monthly and annual averages. Why would r = 0 for daily average and r = 1 for a monthly and annual average? That makes no sense.

Note that the law of propagation of uncertainty reduces to RMS when r = 1. See JCGM 100:2008 section 5.2.2 pg. 21 note #1 directly underneath equation (16).
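
For what it’s worth, a short sketch (Python, equal invented per-reading uncertainties) of how the propagation formula for a simple mean behaves at the two extremes of r:

```python
import numpy as np

def u_mean(u, r):
    """Propagated uncertainty of y = sum(x_i)/N when every pair of
    readings shares the same error correlation coefficient r."""
    u = np.asarray(u, dtype=float)
    N = len(u)
    c = 1.0 / N                         # sensitivity coefficient of each x_i
    var = np.sum((c * u) ** 2)          # uncorrelated part
    for i in range(N):
        for j in range(i + 1, N):
            var += 2 * c * c * r * u[i] * u[j]   # correlated part
    return np.sqrt(var)

u = np.full(30, 0.25)                   # invented: 30 readings, each +/-0.25 C
print("r = 0:", u_mean(u, 0.0))         # -> 0.25/sqrt(30), about 0.046 C
print("r = 1:", u_mean(u, 1.0))         # -> 0.25 C, no reduction at all
```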

Reply to  bdgwx
July 1, 2023 1:22 pm

Why would you think that section of JCGM applies?

The resolution limit is a constant of every measurement.

bdgwx
Reply to  Pat Frank
July 1, 2023 2:12 pm

PF: Why would you think that section of JCGM applies?

It’s the law of propagation of uncertainty. It is the backbone of all uncertainty analysis and is the basis from which all other uncertainty formulas are derived. It all starts there.

PF: The resolution limit is a constant of every measurement.

I know. But that actually doesn’t matter. Even if the resolution limit were different, the law of propagation simplifies to RMS when r = 1 and the measurement model is y = Σ[x_i, 1, N] / N.

Reply to  bdgwx
July 1, 2023 3:03 pm

No plug for the NIST machine?

What gives?

Reply to  bdgwx
July 1, 2023 5:06 pm

of measurement uncertainty.

The resolution limit is a property of the instrument, not of the measurement.

Reply to  Pat Frank
July 1, 2023 5:24 pm

How do you convince someone who has never had to live by their measurements that resolution is part of the instrument and that you can’t statistically increase resolution? A measurement just is what it is.

Reply to  Jim Gorman
July 1, 2023 5:58 pm

How do you convince someone who has never had to live by their measurements that resolution is part of the instrument and you can’t statistically increase resolution

You think that’s hard. Try convincing someone with only a limited understanding of statistics that the resolution of a mean does not have to be the same as that of the instruments.

Reply to  Bellman
July 1, 2023 6:17 pm

If you only knew how insane your comment is you would be embarrassed.

Here is a story. It is a parable about resolution limits. It will also explain to you the importance of Significant Figures.

http://www.ruf.rice.edu/~kekule/SignificantFigureRules1.pdf

A student once needed a cube of metal that had to have a mass of 83 grams. He knew the density of this metal was 8.67 g/mL, which told him the cube’s volume. Believing significant figures were invented just to make life difficult for chemistry students and had no practical use in the real world, he calculated the volume of the cube as 9.573 mL. He thus determined that the edge of the cube had to be 2.097 cm.

He took his plans to the machine shop where his friend had the same type of work done the previous year. The shop foreman said, “Yes, we can make this according to your specifications – but it will be expensive.” “That’s OK,” replied the student. “It’s important.” He knew his friend had paid $35, and he had been given $50 out of the school’s research budget to get the job done.

He returned the next day, expecting the job to be done. “Sorry,” said the foreman. “We’re still working on it. Try next week.” Finally the day came, and our friend got his cube. It looked very, very smooth and shiny and beautiful in its velvet case. Seeing it, our hero had a premonition of disaster and became a bit nervous. But he summoned up enough courage to ask for the bill. “$500, and cheap at the price. We had a terrific job getting it right — had to make three before we got one right.”

“But–but–my friend paid only $35 for the same thing!” “No. He wanted a cube 2.1 cm on an edge, and your specifications called for 2.097. We had yours roughed out to 2.1 that very afternoon, but it was the precision grinding and lapping to get it down to 2.097 which took so long and cost the big money. The first one we made was 2.089 on one edge when we got finished, so we had to scrap it. The second was closer, but still not what you specified. That’s why the three tries.”

Reply to  Jim Gorman
July 1, 2023 6:38 pm

If you only knew how insane your comment is you would be embarrassed.

If you only knew how little I cared about your opinion on my sanity.

Here is a story.

Is it about calculating the uncertainty of the mean using instruments with limited resolution?

“It is a parable about resolution limits.”

It isn’t.

The moral of the story seems to be it’s better to use a rough estimate than a more precise one if it saves money.

Reply to  Bellman
July 1, 2023 7:08 pm

“He thus determined that the edge of the cube had to be 2.097 cm.”

Another moral is to check your workings. He actually wanted an edge of 2.123cm.

With an edge of 2.097cm he would only get 79.95 grams whilst his cheaper colleague got 80.29 grams. Both somewhat short of the required 83 grams.
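
A quick check of that arithmetic (Python; the 83 g and 8.67 g/mL come from the parable):

```python
mass    = 83.0            # grams, required
density = 8.67            # g/mL, from the parable

volume = mass / density            # 9.573... mL
edge   = volume ** (1.0 / 3.0)     # 2.123... cm, not 2.097 cm

print(f"required edge = {edge:.3f} cm")
print(f"mass at 2.097 cm = {density * 2.097**3:.2f} g")   # ~79.95 g
print(f"mass at 2.1 cm   = {density * 2.1**3:.2f} g")     # ~80.29 g
```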

Reply to  Bellman
July 1, 2023 9:48 pm

Oh my, please stop! I can’t handle it!

Hehehehe

bdgwx
Reply to  Pat Frank
July 1, 2023 5:56 pm

PF: The resolution limit is a property of the instrument, not of the measurement.

I’m not sure what point you’re making here. The resolution limit just means that each measurement has uncertainty. That’s not the issue.

The issue is how that uncertainty propagates into daily, monthly, annual, etc. means.

Reply to  bdgwx
July 1, 2023 6:14 pm

Incredible.

Reply to  Pat Frank
July 1, 2023 9:50 pm

All I can do is blink in astonishment, and then think about whether to laugh or cry.

Reply to  bdgwx
July 2, 2023 11:23 am

There’s no propagation of uncertainty in eqns. 5 and 6.

They are not meant to propagate uncertainty.

They are meant to establish the root-mean-square of the resolution limit over the different time-scales.

They do exactly that.

Your view is wrong bdgwx. You’re insistently wrong.

Correlation r is meaningless when there is nothing to correlate.

bdgwx
Reply to  Pat Frank
July 2, 2023 1:55 pm

PF: They are not meant to propagate uncertainty.

That’s a problem.

PF: They are meant to establish the root-mean-square of the resolution limit over the different time-scales.

And then used as if they were the uncertainty of the mean. That’s a problem.

PF: Correlation r is meaningless when there is nothing to correlate.

Unfortunately errors are correlated, so r has to be considered at least in some capacity for an uncertainty analysis to be rigorous. For a single instrument the systematic effect is a significant factor, so r is going to be relatively high. However, since different instruments have different systematic effects, these appear at least in part as random effects when the measurements of these different instruments are aggregated. Refer to JCGM 100:2008 E3.6 regarding how systematic effects can present as random effects when there is a context switch. The debate is not whether temperature instruments have systematic or random effects. They have both. The debate is over their proportions and thus what r actually is. In the real world r is neither 0 nor 1. Fortunately the law of propagation handles both kinds of effect and any value -1 <= r <= 1.

Reply to  bdgwx
July 2, 2023 2:22 pm

Total gibberish, carefully crafted to seem like you know what you’re flapping gums about (pun intended).

Reply to  bdgwx
July 2, 2023 2:43 pm

“And then used as if they were the uncertainty of the mean. That’s a problem.”

No, they’re not. They’re used as the mean of the uncertainty. The problem is yours: idée fixe.

Unfortunately errors are correlated so r has to be considered at least in some capacity for an uncertainty analysis to be rigorous.

The context is the uncertainty arising from detection limits, not measurement errors.

You’re invariably wrong, bdgwx. And that never stops you.

bdgwx
Reply to  Pat Frank
July 2, 2023 6:37 pm

bdgwx: And then used as if they were the uncertainty of the mean. That’s a problem.

PF: No, they’re not. They’re used as the mean of the uncertainty. 

Figure 17 depicts the uncertainty of the global average temperature anomaly both as published and as you calculate.

Figure 19 depicts the uncertainty of the global average temperature anomaly both as published and as you calculate.

If your 2σ grey whiskers are not the uncertainty of the global average temperature anomaly, then why present them as such? It’s pretty obvious that everyone is considering your 2σ = 0.432 in Figure 17 and 2σ = 1.94 in Figure 19 as the uncertainty of the global average temperature anomalies.

Reply to  bdgwx
July 2, 2023 8:37 pm

It’s pretty obvious that everyone 

There is no “everyone” (except maybe inside your head).

Reply to  bdgwx
July 2, 2023 8:53 pm

Figure 17 presents the mean of resolution uncertainty, which applies to every single anomaly temperature.

Legend: “Grey whiskers:… the laboratory lower limit of instrumental resolution…”

Figure 19 presents the resolution mean plus the mean of systematic uncertainty, which applies to every single temperature anomaly.

Legend: “Grey whiskers: … the lower limit of laboratory resolution and the calibration mean of systematic error …”

Can it be any clearer?

Of course the uncertainties apply to the annual temperature anomalies. But they’re not the uncertainty of the mean. They’re the mean of the uncertainties.

Reply to  Pat Frank
July 3, 2023 5:31 am

“But they’re not the uncertainty of the mean. They’re the mean of the uncertainties.”

It’s hard to reprogram cultists. My guess is that they will *never* figure out the difference.

Reply to  Tim Gorman
July 3, 2023 4:22 pm

Yes, I keep forgetting how many times you keep insisting that the uncertainty of the mean is the same as the mean of the uncertainty.

Reply to  Bellman
July 3, 2023 5:18 pm

You just plain can’t read. The instrumental mean uncertainty exists as an uncertainty factor in each and every observation. The uncertainty of the mean is *NOT* the same as the instrumental mean uncertainty. There are a host of other factors in the observational uncertainty. Calibration drift is a major one that is ignored by using the meme “all uncertainty is random, Gaussian, and cancels.” Microclimate differences (green grass vs brown grass – see Hubbard and Lin, 2002) play a major role. It’s why Hubbard and Lin determined that regional adjustments to temperature values are scientifically wrong. They don’t allow for microclimate variation at each station.

bdgwx
Reply to  Pat Frank
July 3, 2023 8:15 am

PF: Of course the uncertainties apply to the annual temperature anomalies. But they’re not the uncertainty of the mean. They’re the mean of the uncertainties.

Then your publication is misleading and useless at best.

Reply to  bdgwx
July 3, 2023 10:49 am

You’re not equipped to judge, bdgwx.

bdgwx
Reply to  Pat Frank
July 3, 2023 12:35 pm

PF: You’re not equipped to judge, bdgwx.

Anybody can render a judgement. Here is the justification for mine.

You said the uncertainties in your publication are not the uncertainties of the mean.

The global average temperature is a mean.

What we want to know is the uncertainty of that mean (aka uncertainty of the global average temperature).

If your publication is not providing that metric, then you cannot compare it to the published uncertainties from the various datasets, because those are uncertainties of the mean (aka uncertainties of the global average temperature).

That makes your publication misleading because you present your uncertainties as being equivalent to those published by the various datasets.

That makes your publication useless because the central question is of the uncertainty of the mean (aka global average temperature).

I’ll repeat…the average of the individual measurement uncertainties is not the same thing as the uncertainty of the average of the individual measurement values. They are two completely different concepts.

Reply to  bdgwx
July 3, 2023 2:01 pm

The paper derives the mean resolution and calibration uncertainty of meteorological thermometers and sensors, bdgwx.

These are the instrumentally relevant minimum average uncertainties in the measurements.

The uncertainty from the limit of detection cannot average away because it’s a property of the instrument itself.

The uncertainty due to systematic measurement error proved to be non-random, and cannot be assumed to average away.

The total minimum of uncertainty proved to be much larger than the published official uncertainties that are purported to provide a full accounting. Oops.

Thank-you for leaving indelible proof that you understand none of it.

Reply to  bdgwx
July 3, 2023 2:15 pm

Repeat: Unskilled and Unaware you are. The characters publishing those “data sets” also lack an understanding of uncertainty.

Pat is correct.

And the GAT does not exist anywhere in the real world.

Reply to  bdgwx
July 3, 2023 4:32 pm

Are you dyslexic?

Reply to  Tim Gorman
July 3, 2023 5:10 pm

Good question.

Reply to  bdgwx
July 2, 2023 3:44 pm

You need to take some physical science classes at the junior or senior level. You will deal with the resolution of devices and ultimately what you can measure with that resolution. You will learn to recognize what you can’t measure with the devices you have available. You will be taught that more and more resolution costs money. You will come to realize that averages and standard deviations cannot overrule the physical restrictions of your measurements. Believing you can is rooted in faith, not in actually doing.

Reply to  bdgwx
July 2, 2023 5:07 pm

“Unfortunately errors are correlated so r has to be considered at least in some capacity for an uncertainty analysis to be rigorous.”

Error is not uncertainty. Uncertainty is not error. Resolution is not error. Resolution is uncertainty. There is nothing to correlate when speaking of uncertainty. What would you correlate it against? Does a constant correlate with anything? A constant has no variance, so how would you then calculate a correlation? It should come out to be undefined – a division by zero!

Once again you are making an assertion that makes no sense in the real world!

Reply to  Tim Gorman
July 2, 2023 8:38 pm

He still can’t make it past the Go square, incredible.

billhaag
June 30, 2023 3:27 pm

This summation, and the subsequent comments, indicates to me, once again, that Upton Sinclair was correct when he said (in I, Candidate for Governor: And How I Got Licked) “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”

billhaag
June 30, 2023 3:39 pm

A thought experiment that shows the absurd claims made about averages.

Consider a bathroom scale upon which one mounts, records the weight, and steps off of the scale 500,000 times. Add the weights, and divide by 500,000. One will get a number with a boat-load (a quantified scientific term, no doubt) of digits to the right of the decimal point. Then, if one removes a US quarter from one’s pocket and repeats the process 500,000 times, again one will get a number with many digits to the right of the decimal point. The difference between these two weights IS NOT a fair estimate of the weight of the quarter, since the weight of the quarter is far smaller than the precision of the measurement device.
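
A crude simulation of that thought experiment (Python; the weights are invented, the scale reads to the nearest 0.5 lb, and, for simplicity, there is no random jitter between steps):

```python
import numpy as np

resolution = 0.5                       # lb, invented scale resolution
person     = 180.2                     # lb, invented true weight
quarter    = 5.67 / 453.6              # lb (a US quarter is about 5.67 g)

def read(weight, n):
    """n repeated readings from a noiseless scale quantized to `resolution`."""
    return np.full(n, np.round(weight / resolution) * resolution)

n = 500_000
without = read(person, n).mean()
with_q  = read(person + quarter, n).mean()

print(f"difference of averages = {with_q - without:.6f} lb")   # 0.000000
print(f"true quarter weight    = {quarter:.6f} lb")            # ~0.0125
```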

Reply to  billhaag
June 30, 2023 4:29 pm

One quarter short of a boat-load full of non-significant digits.

Reply to  billhaag
July 1, 2023 7:08 am

Bevington covers this in his book. In his section 4, under “A Warning About Statistics”, he warns about assuming that the larger the sample size, the more closely you can calculate the average. An infinite number of observations would reduce the error *in* the mean (most people read that as “of the mean” when it actually has a totally different connotation) to zero. But he lists three limitations that prevent actually getting to that point: 1. those of available time and resources, 2. those imposed by statistical errors, and 3. those imposed by nonstatistical fluctuations.

He says about nonstatistical fluctuations: “It is a rare experiment that follows the Gaussian distribution beyond 3 or 4 standard deviations. More likely, some unexplained data points, or outliers, may appear in our data sample, far from the mean. Such points may imply the existence of other contaminating points within the central probability region, masked by the large body of good points.” (bolding mine, tpg)

Somehow this never gets considered in climate science. Just how many contaminating points actually lie within the central probability region of the temperature data? That is an additional source of uncertainty in any calculation that is made using the data. It all gets masked in climate science by the meme “all error is random and Gaussian and cancels leaving the stated values 100% accurate”.
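
A rough sketch of the kind of contamination Bevington warns about (Python; the 2% contamination fraction and +3-sigma offset are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

clean = rng.normal(0.0, 1.0, n)
# Invented contamination: 2% of points drawn from a distribution offset by +3 sigma
contaminated = clean.copy()
bad = rng.random(n) < 0.02
contaminated[bad] = rng.normal(3.0, 1.0, bad.sum())

for name, x in (("pure Gaussian", clean), ("2% contaminated", contaminated)):
    frac_beyond_3s = np.mean(np.abs((x - x.mean()) / x.std()) > 3)
    print(f"{name:16s} mean={x.mean():+.3f}  fraction beyond 3 sigma={frac_beyond_3s:.4f}")
# The contaminated sample shows a shifted mean and several times the Gaussian
# expectation of ~0.27% beyond 3 sigma.
```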

June 30, 2023 7:26 pm

1 July 2023
 
Dear Pat,
 
Australia occupies about the same land area as mainland USA, and for the southern hemisphere, Australian temperature data is widely represented in global surface temperature databases.   
 
Yes, I have read your paper and I found it informative from a global perspective, but not necessarily accurate from an Australian perspective. I also have a problem with the perennial dark-horse issue of instrument uncertainty/error vs accuracy, of which everybody seems to have an opinion. A single (manual) observation is a combination of both. Neither error source can be separated without a carefully conducted experiment involving multiple independent observers, observing calibrated thermometers instantaneously, within the same screen, which of course presents logistical problems. Nevertheless, I wonder why such an experiment has not been done?
 
Australian Fahrenheit meteorological thermometers had a single 1-degF index. Although encouraged to be observed to an implied precision of 0.1 degF, they were also often observed to the nearest ½ or full degF. For any daily timeseries, precision metrics can be calculated. However, the Bureau of Meteorology (BoM) database converts degF to one-decimal place degC. Consequently, degF do not directly back-transform (which requires 2-decimal degC places). Evaluating precision metrics requires this to be taken into account.
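
A tiny illustration of that back-transform point (Python; the whole-degree Fahrenheit reading is invented, the one-decimal degC storage is as described above):

```python
f_obs = 63.0                              # invented whole-degree Fahrenheit reading

c_exact  = (f_obs - 32.0) * 5.0 / 9.0     # 17.2222... C
c_stored = round(c_exact, 1)              # 17.2 C, one decimal place as stored
f_back   = c_stored * 9.0 / 5.0 + 32.0    # 62.96 F -- the original 63 F is not recovered

print(c_exact, c_stored, f_back)
```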
 
Australian Celsius meteorological thermometers have a ½ degC index, whereas you seem to indicate US-thermometers only have full-degreeC indices. Subdivisions affect instrument uncertainty. I note that you have not differentiated between long-stem laboratory-grade instruments, some of which may be true to 0.1 degF/C, and the less precise meteorological thermometers. While one may be used to calibrate the other, I don’t believe the two are comparable/interchangeable.
 
Early thermometers in Australia were calibrated using Kew standards imported for the purpose, usually by staff located at central observatories. While I would have to check my records and photographs, I believe a dry-bulb (non-recording) thermometer used by Henry Chamberlain Russell at Sydney Observatory in the 1880s is on display there. (It is a fabulous museum and free to enter).
 
In Australia, Tmax and Tmin are measured using recording thermometers (Hg and alcohol respectively) that lie at about 5 degrees off-horizontal on a frame within the Stevenson screen. Many people don’t realise this. Dry and wet-bulb thermometers are held vertically. Also, there were a variety of Stevenson screens in use up to about 1946. The standard screen was 230 litres in volume; these have been largely replaced by 90-litre screens and, more recently, the BoM appear to be rolling out PVC screens.
 
In my experience, including with processing and analysing data from the BoM database, while there are cascading sources of error in measurements (of which observer (eyeball) error is arguably the most important), most bias in timeseries arises from site and instrument changes. Bias falls into three categories: (i) metadata bias, which in many cases is flagrantly misused by the BoM to imply the climate has changed; (ii) systematic bias resulting from time-correlated changes across sites, metrication being an example, but there are others; and (iii) the use of correlated sites (comparators) to both detect and adjust homogenisation changepoints.
 
At the root of the problem is that BoM scientists often adjust for documented changes that made no difference to the timeseries of data, while omitting to mention (and therefore not adjusting) changes that did. This allows the BoM to create any trend they want. The second issue here is that 90-litre screens can cause the frequency of extreme temperatures to increase, independently of mean-Tmax. You use histograms in your paper, but I prefer probability density functions that can be overlaid to show where across the distribution contrasting datasets differ. Also, I like normality tests to be supported by Q-Q plots.     
     
Our BoM can cheat in subtle ways, including only having comparator data for 90-litre screens, moving sites/changing instruments without the change(s) being reported in metadata, using parallel data to smooth transitions without publishing said data, and more. I found on a self-funded field-trip that the interior of PVC screens is matt black; the number of louvers has also increased, thus slowing ambient air exchange.

Tracking such changes requires a more subtle statistical approach than brute-force linear regression, which is a point I have made over and over.    
 
Australian T-data were never collected in the first-place to track changes in the climate, but rather to describe and understand the weather. Climate change/warming became a bolt-on experiment that started in the late 1980s as they started digitising data, firstly onto cards, then punch-tape then mag-tape then directly via key-board, and now directly by hard-wired instruments (not necessarily in that order). Thus, a lot of patching and adjusting has happened to both raw data and homogenised data, which is pretty tricky to identify/unravel. As most groups that produce global timeseries use similar homogenisation methods (SST included), although not a focus of the paper I am surprised you did not mention homogenisation as a source of bias.  
  
I did a preliminary study on SST in 2021, which I published at http://www.bomwatch.com.au and also at WUWT (https://wattsupwiththat.com/2021/08/26/great-barrier-reef-sea-surface-temperature-no-change-in-150-years/).

As part of that project, to gain understanding of the issues of measuring intake temperatures, I visited a WW-II Corvette (HMAS Castlemaine), which is moored at Williamstown near Melbourne. As it was a Friday, they kindly gave me a ‘private’ tour (I made a donation). I clambered down into the engine-room and searched around in vain for signs of them having measured ‘intake-temperature’. As the guide explained (he was an ex-submariner), with everything banging around (two 3-cylinder triple-expansion steam engines, one each side) in such a highly dangerous tight space and all gauges being analogue, it was highly unlikely that it would be possible to measure anything ‘accurately’. Same on an Oberon sub, except they did have to have some “rough” idea of the temperature outside in order to negotiate the thermocline (rough being the operative word). Standing on the deck-plates about 30 cm higher than the inside of the hull, a mark on a stanchion indicated sea level outside was about eye level – about 2m in total depth from inside the hull.
 
Having drawn a blank, I then went to Brisbane, to the Maritime Museum and managed to smile and guile my way into the engine room of the former late WWII Frigate HMAS Diamantina. Two-levels below decks (more narrow ladders than the Corvette), roomier but also more moving parts and more complete in terms of bits and pieces. Still no gauge. Same story. With three larger pistons each side of the narrow walkway thumping up-and-down, the whole machine was pretty lively and noisy when underway. The water intake of both vessels was actually near the keel so air would not be sucked-in in a rough sea.
 
So how they managed to derive “accurate” numbers from analogue gauges under such conditions I have no idea. I also had another dataset from 1876 for a paddlewheel cable ship between Sydney and Darwin. However, compared with contemporary data there was a constant positive offset. I suspect (but cannot confirm) they used bucket samples collected mid-ship (not at the bows), and that data were biased by proximity of the boiler etc, otherwise the thermometer may have been biased. (The data I used in the GBR study was calibrated to the Kew standard at Sydney Observatory and was collected by Henry Chamberlain Russell).   
 
I have access to much more data which is not digitised. Some from navy ships, and some from Burns Philip ships servicing the islands and Papua. The RAN undertook research in the 1960s into the effect of the thermocline on sonar. They chased submarines around and so on, but little of what they found is in published or digitised form. CSIRO also operated with the Navy out of Townsville, I think in the 1950s, but I would have to check my records to be sure. While they could draw off a sample, I don’t think they could safely place a LIG thermometer in an engine intake and expect it not to break, or for the mercury column to remain intact for very long. Consequently, I take all the SST data with a shovel of salt.
 
These are just thoughts about the paper, not criticisms, neither is it a review.
 
From having observed the weather for almost a decade from 1971, analysed some 300 medium and long-term weather station datasets from across Australia, including developing protocols for same, and using other sources of information such as aerial photographs, maps and plans, once-secret Royal Australian Air Force plans and documents, stuff from museums and archives and having traveled around looking at (and photographing) sites in NSW, Vic, WA, Qld, and having clambered around in ships, I am confident that no evidence exists that suggests the world is warming. I am also convinced that SST data and satellite data in particular has been fiddled to show warming that does not exist. I am therefore not surprised at the outcome of Pat Frank’s paper.
 
Yours sincerely,
 
Dr Bill Johnston
 
http://www.bomwatch.com.au       
 

Reply to  Bill Johnston
June 30, 2023 8:38 pm

“The standard screen was 230 litres in volume; these have been largely replaced by 90-litre screens” should read “…replaced by 60-litre screens”!

June 30, 2023 8:06 pm

Pat,

You have hit a 450 ft home run. It is great to finally see a useful paper for analyzing measurements.

Any field so dependent on measuring devices should have done this work many decades ago.

CONGRATULATIONS👏🤠🌻

Reply to  Jim Gorman
June 30, 2023 11:33 pm

Thanks, Jim. You and Tim have been real stalwarts.

June 30, 2023 8:38 pm

The complete change in air temperature between 1900-2010 is then 0.94±1.92 C.

If put in the more conventional format, with the last significant figure of the nominal value matching the precision of the uncertainty term, that would then round to “1 ± 2 deg. C.” In other words, one can only justifiably state the change to the nearest whole degree, with the possibility that the change could be positive or negative. That is hardly the 2 or 3 significant figures to the right of the decimal point commonly cited for claims about the “hottest year evah!”
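
In code form, a minimal sketch of that rounding convention (Python, using the numbers quoted above):

```python
import math

value, u = 0.94, 1.92          # C, from the quoted 1900-2010 result

# Round the uncertainty to one significant figure, then round the value
# to the same decimal place.
exponent  = math.floor(math.log10(abs(u)))      # 0 -> ones place
u_rounded = round(u, -exponent)                 # 2
v_rounded = round(value, -exponent)             # 1

print(f"{v_rounded:.0f} +/- {u_rounded:.0f} C")   # 1 +/- 2 C
```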

Reply to  Clyde Spencer
July 1, 2023 2:56 am

So what just happened?

I spent half a day that I could have been doing something else, considering issues raised by Pat Frank. Then I provided multiple perspectives related to Australian T-data that did not dispute his findings.

Aside from insult-trading and finger-pointing, why did the whole thing (including my time) suddenly evaporate down the toilet like a raw-prawn Bali-sandwich? Any perspective will do.

Yours sincerely,

Dr Bill Johnston

http://www.bomwatch.com.au

Reply to  Clyde Spencer
July 1, 2023 7:37 am

It’s as if significant figure rules were never invented, as far as climate science goes.

Reply to  Tim Gorman
July 1, 2023 9:29 am

And they don’t care, at all. Anything to keep the global warming hoax alive.

Reply to  Clyde Spencer
July 1, 2023 10:33 am

You’re right, Clyde. The reported numbers are pretty much numerical precision. The accuracy expression is as you stated, 1±2 C. Zero information about rate or magnitude.

For others, just to add that the “±2” is an ignorance width, where no information is available. It doesn’t mean the change in air temperature since 1900 could have been +3 C or -1 C. It just says no one knows the physically real magnitude.

By all independent information (growing season, etc.), the warming since 1900 has been real and mild. See We Live in Cold Times.

Reply to  Pat Frank
July 1, 2023 1:40 pm

The word “unknown” doesn’t seem to be understood in climate science. Unknown means “you don’t know and can never know”. It can be anywhere inside the uncertainty interval and it can also be GREATER THAN or LESS THAN the interval as well!

Reply to  Tim Gorman
July 1, 2023 2:14 pm

Unknown means “you don’t know and can never know”.

Odd definition. So if I don’t know tomorrow’s football results, I can never know them?

It can be anywhere inside the uncertainty interval and it can also be GREATER THAN or LESS THAN the interval as well!

Gosh, almost as if that 95% in a 95% confidence interval has some meaning.

Reply to  Bellman
July 1, 2023 3:05 pm

Odd definition. So if I don’t know tomorrow’s football results, I can never know them?

Just paste a sign on your forehead that says: Clueless Am I

Reply to  karlomonte
July 1, 2023 3:28 pm

Or maybe one saying “It was a joke. Lighten up.”

Reply to  Bellman
July 2, 2023 5:13 am

“Odd definition. So if I don’t know tomorrow’s football results, I can never know them?”

Does the football score have an uncertainty interval? Only a fool would make this kind of comparison.

“Gosh, almost as if that 95% in a 95% confidence interval has some meaning.”

It’s why statistics can NEVER give an exact answer, cannot increase resolution, and cannot decrease uncertainty if systematic uncertainty exists. Things you simply can’t bring yourself to admit!

Reply to  Bellman
July 2, 2023 5:30 am

Football scores are COUNTING NUMBERS, not measurements. In essence they are constants without uncertainty.

You are displaying your ignorance of scientific physical measurements.

Reply to  Jim Gorman
July 2, 2023 7:59 am

Here we go again. I make a simple example to refute what seems like a minor incorrect claim. And now the Gorman bros are going to spend all their time spectacularly missing the point, and arguing the minutiae of a point that was never made.

Tim says unknown means you can never know something. I simply point out that isn’t always correct and use what seemed like an amusing counterexample of football scores. Rather than admit the original statement may have been badly worded, we are now arguing that football scores are not the same thing as temperatures.

I know. That wasn’t the point. The point was to give an example of something that might be unknown at one point in time but could be known at a different point. At no point have I ever claimed it’s possible to know in an absolute sense what the temperature is or how big your stud wall is.

Reply to  Bellman
July 2, 2023 4:20 pm

“Tim says unknown means you can never know something.”

You are a total aZZ. A measurement is not a football score yet you are trying to conflate the two!

Is that the best you have to offer in rebuttal? It’s perfect proof that you have no clue as to what is being discussed!

Reply to  Tim Gorman
July 2, 2023 4:51 pm

“A measurement is not a football score yet you are trying to conflate the two!”

And so the distraction continues.

  1. I think you could say it is a measurement.
  2. I didn’t say it was a measurement. My point was about the word “unknown”, not about measurements.

Reply to  Bellman
July 2, 2023 5:16 pm

What measurement device do you use to measure a football score? I’m really interested in what you think that device might be.

You compared the football score to a measurement with uncertainty. *YOU* made the comparison, no one else.

Now you are trying to back out of something that you *KNOW* was wrong. Be a man and admit it.

Reply to  Tim Gorman
July 2, 2023 5:46 pm

Counting. Or, if that’s beyond you, try using your fingers or a scoreboard.

The actual measurement is going to depend on the rules of the game and the on-field equipment.

You compared the football score to a measurement with uncertainty.
No I did not. Maybe you’re not so hot on this English malarkey yourself. I’ve explained this enough times that you should have got it by now. I said it was an example of something that might be uncertain, but would not always be uncertain. Nothing about it being a measurement.

I see you still haven’t taken the hint about maybe not wasting your time obsessing over what was a very light incidental remark.

Be a man and admit it.

Is lying the mark of a man? No, I’m not going to admit something that is patently untrue. There are things that can be unknown at some point in time and known at a later date. Your claim that

Unknown means “you don’t know and can never know”.

is wrong. This has also got nothing to do with the topic, so unless you say something even more stupid I will try to ignore any more discussion on the subject.

Reply to  Bellman
July 2, 2023 5:54 pm

Horse hockey. Football scores have a defined value, a constant. There is no uncertainty.

The fact that you don’t know the future is more applicable to linear regressions.

Reply to  Jim Gorman
July 2, 2023 6:23 pm

That’s at least six angry comments objecting to my off-the-cuff claim that it was possible to know a football score.

“Football scores have a defined value, a constant.”

OK. I’m not that much of a sports fan. I didn’t realize that football scores never changed.

There is no uncertainty.

Yes, that’s why I said it wasn’t unknown.

The fact that you don’t know the future is more applicable to linear regressions.

Not this again. It’s almost as if you want to detract from Pat Frank’s masterpiece.

Reply to  Bellman
July 2, 2023 8:40 pm

Poor baby, your pseudoscience trendology GAT hoax agenda is showing again.

Reply to  Bellman
July 3, 2023 3:57 am

“Counting. Or, if that’s beyond you, try using your fingers or a scoreboard.”

You don’t count points. You count scores! And those scores are weighted with different point values depending on how they are made!

“The actual measurement is going to depend on the rules of the game and the on field equipment.”

You aren’t MEASURING anything! You don’t get any points for being close to the end zone or for bouncing the puck off the goal post support or for just missing the net in soccer.

“No I did not.”

Then why did you bring up football as a comparison to actually measuring something? You had to resort to the crutch that you weren’t talking about actual measuring but about the metaphysical concept of “unknown”.

Uncertainty is an interval of “unknown and unknowable”. The fact that you simply can’t admit that is just more of your meme that all uncertainty is random, Gaussian, and cancels. You can’t escape that false belief no matter how hard you try.

July 1, 2023 5:37 am

Thank you, Pat Frank! I read and downloaded the paper and the supplemental information. This work is so very much appreciated for its thorough penetration of the core issues arising from the records of instrument readings. Congratulations on its publication.

I have two questions from the paper:
1) In 4.6: “From  Figure 19 , the mean global air-temperature-record anomaly over the 20th century (1900–1999) is 0.74 ± 1.94 °C.” 
It appears that 0.74C here is 100 times the slope (which would be deg C per year) of the linear regression of the “Mean Published Anomaly” column of values from 1900 through 1999 in the file named “Uncertainty in Global Air Temperature.txt”.
Is this correct?

2) In 5.1: “The compilation of land- and sea-surface LiG uncertainty yield a 1900–2010 global air-temperature record anomaly of 0.86 ± 1.92 °C (2σ), which renders impossible any conclusion regarding the rate or magnitude of climate warming since 1850 or earlier.”
It appears that 0.86C here is 100 times the slope (which would be deg C per year) of the linear regression of the “Mean Published Anomaly” column of annual values from 1900 through 2010 in the file named “Uncertainty in Global Air Temperature.txt”.
Is this also correct?

One more thing from the paper: “This research received no external funding.” A gift of gem quality, in my view.

Reply to  David Dibbell
July 1, 2023 10:26 am

Hi David — thanks for your kind words. And both kudos and thanks for your interest especially in downloading the data and working with it.

In answer to your questions: 1) yes, and 2) yes. The numbers are each the 100-year trend, which seemed like the most readily grasped metric.
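
For anyone repeating David’s check, a sketch of the calculation (Python; the column layout of the supplementary file — year first, “Mean Published Anomaly” second, one header row — is an assumption and may need adjusting):

```python
import numpy as np

# Assumed layout of the supplementary file named in the comment above:
# column 0 = year, column 1 = Mean Published Anomaly, one header line.
years, anomaly = np.loadtxt("Uncertainty in Global Air Temperature.txt",
                            usecols=(0, 1), unpack=True, skiprows=1)

mask = (years >= 1900) & (years <= 1999)
slope, intercept = np.polyfit(years[mask], anomaly[mask], 1)   # deg C per year

print(f"centennial trend = {100 * slope:.2f} C per 100 years")  # ~0.74 C
```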

July 1, 2023 12:03 pm

To the pseudoscience GAT hoaxers currently making fools of themselves in this thread (you know who you are):

None of you could offer anything remotely close to a refutation of the LIG thermometer problems and issues painstakingly researched and documented in Pat Frank’s paper as listed in 5.1 Major Findings, such as Joule-drift (I won’t list all 15 bullet points). This includes the magnitudes of the uncertainties — zip, zero, nada. The paper represents a huge amount of work and effort, including 284 references! If any of you lot had any inkling of what it takes to just assemble a collection of references for a paper, you might have thought twice before jumping into the deep end, but no. Y’all have to keep those teeny teensy leetle GAT uncertainty numbers alive, at any cost.

To recap a bit:

Nitpick Nick — “Buh, buh, buh errors all cancel!!”

Keep chanting this to yourself, Stokes, maybe someday you might even believe it yourself.

bgxyz-whatever —
“You put it in a ‘predatory journal’!” (WTH this means).
“You used RMS!”

Beyond pathetic.

bellboy — “You think UAH uncertainty is way way way too big!”, along with more inanities about “RMS”.

big ol’ oil blob — high-fived the journal idiocy.

mosh — I won’t repeat his racist garbage.

And you wonder why no one takes you bankrupt yappers seriously.

July 2, 2023 7:16 am

Pat, one question: does Joule-drift bulb contraction cover or include what I remember (from a long time ago now) being called glass creep? Equivalent to old window panes becoming thicker at the bottom over time?

Reply to  karlomonte
July 2, 2023 10:27 am

“Equivalent to old window panes becoming thicker at the bottom over time?”

They don’t. It’s a myth.

Reply to  karlomonte
July 2, 2023 12:07 pm

KM, the glass-creep of windows (if that really happens) would result from gravity acting on an extremely viscous fluid (glass).

Joule-drift results from the relaxation of residual strain left in thermometer glass after hot manufacture and cooling.

Scientific glassblowers typically anneal their glass apparatus at just below the softening temperature, for a day or more, to let most of that strain work its way out.

Both processes follow from glass as an amorphous solid that has not achieved internal thermodynamic equilibrium.

Reply to  Pat Frank
July 2, 2023 12:50 pm

Thanks, Pat.

Reply to  karlomonte
July 2, 2023 8:39 pm

Happy to oblige, KM. You’re a good guy.

Reply to  karlomonte
July 3, 2023 1:11 am

Noooo no, no. I learnt that old people looking through windows were thicker in the bottom!

Cheers,

b.

Reply to  Bill Johnston
July 3, 2023 1:12 am

Maybe it was old creeps … looking through windows …

July 5, 2023 3:43 pm

“The lengthened growing season, the revegetation of the far North, and the poleward migration of the northern tree line provide evidence of a warming climate. However, the rate or magnitude of warming since 1850 is not knowable.”

“Provide evidence”, but not proof of warming.

It is more likely that the change is plants loving greater access to CO₂, a benefit that studies have identified as an increased ability of plants to weather temperature extremes and drought.

Reply to  ATheoK
July 5, 2023 3:59 pm

It’s quite likely it’s a combination of both, which makes it difficult, if not impossible, to identify the magnitude of each factor without more information than temperature alone can give us.