The Greatest Scientific Fraud Of All Time — Part XXX

From the MANHATTAN CONTRARIAN

Francis Menton

Friday’s post principally reported on the recent (February 8, 2022) article by O’Neill, et al., in Atmosphere, “Evaluation of the Homogenization Adjustments Applied to European Temperature Records in the Global Historical Climatology Network Dataset.” In the piece, O’Neill, et al., dramatically demonstrate that the NOAA/NCEI “homogenization” algorithm is wildly off the mark in its intended mission of identifying and correcting for supposed “discontinuities” or “breakpoints” in weather station location or instrumentation in order to provide a more accurate world temperature history. At the same time, although not mentioned in O’Neill, et al., the NOAA/NCEI algorithm is wildly successful in generating a world temperature history time series in the iconic hockey stick form to support the desired narrative of climate alarm.

What should be done? O’Neill, et al., for reasons that I cannot begin to understand, buy into the idea that having a group of government-paid experts correct the temperature record with a “homogenization” algorithm was and is a good idea; therefore, we just need to tweak this effort a little to get it right. From O’Neill, et al.:

[W]e are definitely not criticizing the overall temperature homogenization project. We also stress that the efforts of Menne & Williams (2009) in developing the PHA . . . to try and correct for both undocumented and documented non-climatic biases were commendable. Long-term temperature records are well-known to be frequently contaminated by various non-climatic biases arising from station moves . . ., changes in instrumentation . . ., siting quality . . ., times of observation . . ., urbanization . . ., etc. Therefore, if we are interested in using these records to study regional, hemispheric or global temperature trends, it is important to accurately account for these biases.

Sorry, but no. This statement betrays hopeless naïveté about the processes by which government bureaucracies work. Or perhaps inserting this statement into the piece was the price of getting it published in a peer reviewed journal that, like all academic journals in the climate field today, will suppress any piece that overtly challenges “consensus” climate science.

Whichever of those two it is, the fact is that any collection of government bureaucrats, given the job to “adjust” temperature data, will “adjust” it in the way that best enhances the prospects for growth of the staff and budget of the bureaucracy. The chances that scientific integrity and accuracy might intrude into the process are essentially nil.

Is there any possibility that a future Republican administration with a healthy skepticism about the climate alarm movement could do anything about this?

For starters, note that President Trump, despite his climate skepticism and his focus on what he called “energy dominance,” never even drained a drop out of this particular corner of the swamp. It took Trump until September 2020 — just a few months before the end of his term — to finally appoint two climate skeptics, David Legates and Ryan Maue, to NOAA to look into what they were doing. Before they really got started, Trump was out and so were they.

Even if a new Republican President in 2025 got started on his first day, the idea that he could quickly — or even within four years — get an honestly “homogenized” temperature record out of NOAA/NCEI, is a fantasy. The existing bureaucracy would fight him at every turn, and claim that all efforts were “anti-science.” Those bureaucrats mostly have civil service protection and cannot be fired. And there don’t even exist enough climate skeptics with the requisite expertise to re-do the homogenization algorithm in an honest way.

But here are some things that can be done:

  • Do an audit of the existing “homogenization” efforts. Come out with a report that points to five or ten or twenty obvious flaws in the current algorithm. There are at least that many. The O’Neill, et al., work gives a good starting point. Also, there are many stations with good records of long-term cooling that have been “homogenized” into long-term warming. Put the “homogenizers” on the hot seat to attempt to explain how that has happened.
  • After the report comes out, announce that the government has lost confidence in the people who have been doing this work. If they can’t be fired, transfer them to some other function. Don’t let the people stay together as a team. Transfer some to one place, and some to another, preferably in different cities that are distant from each other.
  • Also after the report comes out, announce that the U.S. government is no longer relying on this temperature series for policymaking purposes. It’s just too inaccurate. Take down the website in its current form, and all promotion of the series as something providing scary information about “hottest month ever” and the like. Leave only a link to hard data in raw form useful only to “experts” with infinite time on their hands.
  • Stop reporting the results of the USHCN/GHCN temperature series to the hundredth of a degree C. The idea that this series — much of which historically comes from thermometers that only record to the nearest full degree — is accurate to one-hundredth of a degree is completely absurd. The reporting to an accuracy of a hundredth of a degree is what gives NOAA the ability to claim that a given month was the “hottest ever” when it says temperature went from an anomaly of 1.03 deg to 1.04 deg. I suggest reporting only to an accuracy of 0.5 of a degree. That way, the series would have the same temperature anomaly for months or years on end.
  • Put error bars around whatever figures are reported. Appoint a task force to come up with appropriate width of the error bars. There should be some kind of sophisticated statistical model to generate this, but I would think that error bars of +/- 0.5 deg C are eminently justifiable. Again, that would make it impossible to claim that a given month is the “hottest ever,” unless there has been some sort of really significant jump.
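The last two proposals can be sketched in a few lines of code. This is a minimal illustration only; the 1.03/1.04 anomaly values come from the example above, and the ±0.5 deg C error bar is the figure suggested in the text, not an official one:

```python
# Sketch of the proposed reporting rules: round anomalies to the
# nearest 0.5 deg C, and require a new record to clear the combined
# error bars before any "hottest ever" claim. Values are illustrative.

def round_half_degree(anomaly_c):
    """Round a temperature anomaly to the nearest 0.5 deg C."""
    return round(anomaly_c * 2) / 2

def hottest_ever(new_anomaly, previous_record, error_bar=0.5):
    """A record claim stands only if the new value exceeds the old
    record by more than the two error bars combined."""
    return (new_anomaly - previous_record) > 2 * error_bar

# Hundredth-of-a-degree reporting lets 1.04 "beat" 1.03 ...
print(hottest_ever(1.04, 1.03, error_bar=0.0))            # True
# ... but rounded to half a degree, the two months are identical,
print(round_half_degree(1.03), round_half_degree(1.04))   # 1.0 1.0
# and with +/- 0.5 deg C error bars no record can be claimed.
print(hottest_ever(1.04, 1.03))                           # False
```

Under these rules, as the text says, the reported series would show the same anomaly for months or years on end.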

Read the full article here.

631 Comments
lee
February 22, 2022 10:27 pm

HadCRUt takes it further. Precision to 1/1000th of a degree.
To 6 or 7 decimal places for monthly “data”.

Steve Case
Reply to  lee
February 22, 2022 10:44 pm

And Colorado University’s Sea Level Research Group tells us that the acceleration of sea level rise is 0.098 mm/yr². Besides it being phony, why didn’t they round it up to 0.1 mm/yr²?

Climate science comes up short on many aspects of numerate competence; understanding significant figures is only one of them.

Reply to  Steve Case
February 22, 2022 10:58 pm

Perhaps they thought 98 was a standard acceleration to use for everything on earth

works for gravity, why not SLR?

Vuk
Reply to  Steve Case
February 23, 2022 12:34 am

Increased rainfall accelerates sea level rise. Falling rain accelerates at 0.0981 hm/s² (hectometers per second per second).
There you have it, they just rounded down to 3 decimals.

Reply to  Vuk
February 23, 2022 1:08 am

More rain is the result of more evaporation beforehand, which reduces sea level.

Vuk
Reply to  Krishna Gans
February 23, 2022 1:19 am

Maybe they measure sea level just before and after rainfall, take the difference, and ‘hey presto’ they get + SLR acceleration. Measure it just after one rainfall and just before the next one and there will be – SLR. Hence, there is a natural oscillation in SLR. Statistics is just decorative ‘numerical origami’ of science, or even more so of the quasi- and pseudo-science.

Gordon A. Dressler
Reply to  Vuk
February 23, 2022 7:51 am

Well, then, it’s no wonder that the news lately has been so full of reports of flooding around the world. 🙂

Would that be true if, say, the rainfall acceleration was in reality 0.0979 hm/s² instead of the reported 0.0981 hm/s²?

Inquiring minds want to know.

BobM
Reply to  Steve Case
February 23, 2022 4:29 am

They wanted to publish an even better number that 97% of “climate scientists” would agree with.

philincalifornia
Reply to  BobM
February 23, 2022 1:23 pm

…. and if they rounded it up to 0.1, people might think they were faking it.

Reply to  BobM
February 23, 2022 3:59 pm

97% is so 2020
2021 was 99%.
The current 2022 consensus
is 105%

Scissor
Reply to  Steve Case
February 23, 2022 5:17 am

The notion that an acceleration can meaningfully be derived from a line fit over a period of 30 years is ridiculous in its own right. The 2 significant figures might be justified on a statistical basis, but is their data spliced?

BTW, the acronym for the “University of Colorado” is CU. This leads people to mistakenly refer to it as Colorado University.

Other universities follow this convention, e.g. the University of Oklahoma is OU. But the University of Michigan is UM or U of M.

Carlo, Monte
Reply to  Scissor
February 23, 2022 7:30 am

“CU” is what they put on the football helmets.

The Denver campus isn’t called “CU Denver”, but rather UCD.

Scissor
Reply to  Carlo, Monte
February 23, 2022 7:45 am

Yes. Sometimes the University of Colorado at Boulder is abbreviated as UCB.

Clyde Spencer
Reply to  Scissor
February 23, 2022 8:08 am

Not to be confused with the University of California, Berkeley.

Carlo, Monte
Reply to  Clyde Spencer
February 23, 2022 8:28 am

And in other news, Denver will finally get above freezing on Saturday. Someone needs to adjust the numbers from this week upward PDQ.

william Johnston
Reply to  Steve Case
February 23, 2022 7:15 am

Stating something like temp or SLR to 3 decimal places shows the drones are really, really hard at work and doing very precise cyphering.

Scissor
Reply to  william Johnston
February 23, 2022 7:56 am

With the level of acceleration stated, one could expect 6″ of SL rise around 2050 and just over 2 feet of rise by 2100. However, going backwards using their quadratic equation indicates a bottoming of SL around 1975 and a fall of about a foot from 1900 to 1975.

Obviously, the fit does not describe past reality. Of course, they aren’t overtly making the claim that it does, but back to Steve’s point, it would seem that the two significant figures they report are perhaps wrong or deceptive.

I think they should explain their motives around the descriptive statistics that they use and perhaps how it should be interpreted. That would give us more information at least.
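Scissor’s back-extrapolation can be reproduced with a toy quadratic. This is emphatically not the CU group’s actual fit: it assumes, for illustration only, a parabola whose minimum sits in 1975 (the "bottoming" Scissor infers) with the quoted 0.098 mm/yr² acceleration, just to check whether the quoted consequences follow:

```python
# Toy check of the back-extrapolation argument above. The vertex year
# and the use of a pure vertex-form parabola are assumptions; only the
# 0.098 mm/yr^2 acceleration comes from the quoted CU figure.

ACCEL = 0.098          # mm/yr^2, the reported acceleration
VERTEX_YEAR = 1975     # assumed year the fitted parabola bottoms out

def sea_level_mm(year):
    """Height of the assumed quadratic relative to its 1975 minimum."""
    return 0.5 * ACCEL * (year - VERTEX_YEAR) ** 2

MM_PER_INCH = 25.4
rise_2050 = (sea_level_mm(2050) - sea_level_mm(2022)) / MM_PER_INCH
rise_2100 = (sea_level_mm(2100) - sea_level_mm(2022)) / MM_PER_INCH
fall_1900_1975 = (sea_level_mm(1975) - sea_level_mm(1900)) / MM_PER_INCH

print(f"rise 2022-2050: {rise_2050:.1f} in")         # ~6.6 in
print(f"rise 2022-2100: {rise_2100 / 12:.1f} ft")    # ~2.2 ft
print(f"change 1900-1975: {fall_1900_1975:.1f} in")  # ~ -10.9 in
```

Under those assumptions the arithmetic does land near Scissor’s figures: roughly 6 inches by 2050, a bit over 2 feet by 2100, and about a foot of implied fall from 1900 to 1975.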

Clyde Spencer
Reply to  Scissor
February 23, 2022 8:09 am

It is known as “over-fitting.”

chris
Reply to  Steve Case
February 23, 2022 11:48 am

“why don’t they round up …?”
well, perhaps because then folks like you will proclaim “they ROUNDED UP!”

also, this is an acceleration, not a rate/velocity. If a rate is significant at 0.1 mm/yr, then an acceleration would be significant at 0.01 mm/yr²

The error in this whole line of thought (the article, not your quibble) is that Classical Statistics is known to be more misleading to the public – and to many Statisticians – than necessary. Classical Statistics was invented to suggest possible causal models from small [O(10)] data sets with no time component (see R.A. Fisher, et al). The proper way to assess temporal-causal hypotheses is with Bayesian methods (as is done by Attribution Science). There are no ‘statistics’ in Bayesian analysis. Rather, Bayesian models are judged on the odds that the model explains the data, not on whether some shape parameter is ‘significantly’ greater than zero.

Reply to  Steve Case
February 23, 2022 3:57 pm

because three decimal places is more “scientific” sounding.

bdgwx
Reply to  lee
February 23, 2022 5:42 am

The data files say the 95% CI is ±0.05 [1]. Where are you seeing an uncertainty of 1/1000th of a degree?

Reply to  bdgwx
February 23, 2022 7:24 am

The resolution limit of an MMTS sensor is ±0.1 C, under lab conditions. https://doi.org/10.1175/1520-0426(2001)018<1470:SPORIS>2.0.CO;2

A 1σ CI of ±0.025 C is four times smaller than the lower limit of instrumental resolution.

Data magicked from thin air. More proof, if we needed it, that climate science is ever so special.

bdgwx
Reply to  Pat Frank
February 23, 2022 7:53 am

PF said: “Data magicked from thin air. More proof, if we needed it, that climate science is ever so special.”

It’s not magic. It’s how the propagation of uncertainty plays out when you follow it through all of the steps of the process.

I will say that I did a Type A evaluation of the HadCRUT, GISTEMP, BEST, and ERA uncertainty a few months back and I got a 1σ CI of ±0.038 C, which is a bit higher than what each is claiming via their respective Type B evaluations. But I don’t think anyone is going to cry foul over a difference that small. Either way I think everyone will agree that 0.025 or 0.038 is significantly different from 0.001.

Clyde Spencer
Reply to  bdgwx
February 23, 2022 8:14 am

Assuming you did the calculations properly, the temps should only be reported as significant to the hundredths position, and the CI should be reported as ±0.04 deg C, at best.

bdgwx
Reply to  Clyde Spencer
February 23, 2022 9:06 am

I don’t disagree. Just know that if they did that, someone would inevitably say the datasets are not being transparent and are hiding digits. On Dr. Spencer’s blog I would frequently discuss the trend to 2 decimal places and get accused of fraudulently making the trend look higher than it was, since it was getting rounded up to +0.14 C/decade instead of the lower +0.135 C/decade value. Over here I provide 3 and sometimes 4 digits and then I’m accused of displaying too many digits, and even of doing the linear regression calculation wrong. It’s basically a no-win situation.

Anyway, the other nice thing about having the extra digits is that you can do calculations with less rounding error, and you can see how often past months change. For example, if you only used the 2-digit UAH TLT file you might not notice changes in past month values that are plain in the 3-digit file. And that’s the other thing: UAH TLT reports to 3 digits even though their own assessed uncertainty is ±0.2 C, yet few people care to indict Dr. Spencer and Dr. Christy of the same significant-figure violations they indict others of. My point is this: someone is always going to “nuh-uh” the reporting of digits either way. And in the end, reporting more digits does no harm, especially when the uncertainty is explicitly provided.

Clyde Spencer
Reply to  bdgwx
February 23, 2022 12:57 pm

The accepted practice, if a number is anticipated to be used in subsequent calculations, is to place a ‘guard digit’ in brackets to the right of the significant digits. To play it safe, show both: +0.14 C/decade (+0.13[5] C/decade).

bdgwx
Reply to  Clyde Spencer
February 23, 2022 1:35 pm

I don’t disagree. In fact, I have reported values both with and without the guard digit and I still see people getting bent out of shape on here. In one blog post I got fed up with it all and said I’d just post all of the IEEE 754 digits that Excel provides and let the WUWT audience figure out how to display the value. One guy responded insinuating that IEEE 754 was a stupid and/or unfortunate invention that allows scientists to report a precision that isn’t justified. Another told me that if I didn’t apply significant-figure rules strictly, then the entire result, including the calculation, was wrong. If this all sounds absurd, it’s because it is absurd. Clyde, I hear what you’re saying. I’m just saying it doesn’t matter. A significant percentage of the blog posts I participate in descend into distracting conversations about how I or someone else reports a figure, even when the significant-figure rules are applied strictly and correctly.

Clyde Spencer
Reply to  bdgwx
February 23, 2022 2:25 pm

Unfortunately, that seems to be the way it is. While I’m a dyed-in-the-wool skeptic, I have my detractors on both sides. I think part of the problem is that the alarmists have pulled so many shenanigans with respect to measurements that some skeptics have come to dislike any and all alarmists and look for something to criticize.

bdgwx
Reply to  Clyde Spencer
February 23, 2022 5:38 pm

Clyde, I think if you and I sat down for a coffee we’d likely get along fine and probably even agree on a lot of topics related to the climate. I get tired of alarmist claptrap like the erroneous and evidentially unsupported claims from Al Gore and his like, too. I’m completely open to the idea that the uncertainty on global average temperatures is higher than the reported ±0.05’ish C figures. But claims that it is on the order of 1 C or higher are not consistent with the evidence. Even claims that it is 0.5 C do not fit the evidence.

Jim Gorman
Reply to  bdgwx
February 23, 2022 7:03 pm

Quit whining. The resolution of most temps in the 21st century is only to 1/10th of a degree. That’s 1 decimal place. The floating point calculations for 1 decimal place in a computer have little to do with determining significant digits. I can do those calculations to one decimal place on my old Pickett slide rule.

bdgwx
Reply to  Jim Gorman
February 24, 2022 8:14 am

For the United States it is actually about 0.3 C. The reason is that temperatures were recorded in Fahrenheit to the nearest integer. F to C has a 5/9 scaling, so ±0.5 F is about ±0.3 C. And that’s only the reporting uncertainty. There would be an instrumental and human-reading uncertainty to factor in as well.
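The 5/9 scaling is quick to verify: a whole-degree Fahrenheit reading has a half-interval of ±0.5 F, which maps to roughly ±0.28 C, the "about 0.3 C" quoted above.

```python
from fractions import Fraction

# A whole-degree Fahrenheit reading implies a half-interval of +/- 0.5 F.
# Converting that interval to Celsius uses the 5/9 scale factor.
half_interval_f = Fraction(1, 2)
half_interval_c = half_interval_f * Fraction(5, 9)   # = 5/18 C

print(float(half_interval_c))            # 0.2777...
print(round(float(half_interval_c), 1))  # 0.3
```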

Carlo, Monte
Reply to  bdgwx
February 24, 2022 10:24 am

Are you able to read with comprehension?

bdgwx
Reply to  Carlo, Monte
February 24, 2022 1:32 pm

I think so. JG said “The resolution of most temps in the 21st century is only to 1/10th of a degree.” That is only true for stations that reported in C. For stations that report in F, like those in the United States, it is actually 3/10ths of a degree because of the scaling factor between F and C. Do you disagree?

Carlo, Monte
Reply to  bdgwx
February 24, 2022 3:33 pm

Kool! Caught a double-down-vote.

Reply to  bdgwx
February 23, 2022 2:20 pm

UAH TLT reports to 3 digits even though their own assessed uncertainty is ±0.2 C yet few people care to indict Dr. Spencer and Dr. Christy of the same sf rule violations they indict others of.

It’s ±0.3 C, actually. I discussed that a bit with Roy Spencer when we met long ago in Chicago. He shrugged me off.

My interest is focused on the surface air temps because those are the data that are being abused in the name of AGW.

Reply to  bdgwx
February 23, 2022 4:10 pm

The past global average temperature trends have no value in predicting the future temperature. Check the data in the past 120 years. Therefore, the number of decimal places is irrelevant. You probably have never noticed, because you are not very bright, that predictions of future global average temperature rise rates are unrelated to measurements of the past global average temperature.

Jim Gorman
Reply to  bdgwx
February 23, 2022 6:45 pm

Two wrongs don’t make a right. Don’t use someone else’s practice to justify what you do.

UAH may have the resolution to discern more precise radiance values, and from that more precise temps. I don’t know the answer to that. Yet the expanded uncertainty may also be pretty large when clouds, drift, etc. are used to develop the uncertainty.

What it should tell you is that trends of less than 0.3 cannot be resolved. You simply can’t discern what a real value is when it lies inside the uncertainty window.

Carlo, Monte
Reply to  bdgwx
February 23, 2022 8:30 am

Either way I think everyone will agree

Here is one of your problems.

bdgwx
Reply to  Carlo, Monte
February 23, 2022 9:16 am

CM said: “Here is one of your problems.”

Do you think 0.025 or 0.038 are the same as 0.001?

Carlo, Monte
Reply to  bdgwx
February 23, 2022 10:55 am

No, I think all three are optimistic bollocks.

bdgwx
Reply to  Carlo, Monte
February 23, 2022 12:13 pm

Do you think anyone else thinks 0.025 or 0.038 are the same as 0.001?

Carlo, Monte
Reply to  bdgwx
February 23, 2022 12:44 pm

I think you completely missed the point.

bdgwx
Reply to  Carlo, Monte
February 23, 2022 1:40 pm

If no one thinks 0.025 or 0.038 is the same thing as 0.001, then why is it a problem if I think everyone would agree that they are different? Why do we continue to have mindless, distracting discussions of things that really don’t matter?

Jim Gorman
Reply to  bdgwx
February 23, 2022 7:34 pm

0.025, 0.038, and 0.001 all have a measurement resolution of 1/1000th.

That is 25 thousandths, 38 thousandths, and 1 thousandth.

You show your ignorance by asking the question. A mechanic or machinist can tell you what these measurements mean.

A simple example is a brake rotor. The run out shouldn’t exceed 0.002 inches. Can you average measurements from a number of different rotors using a ruler that reads out to the nearest inch (an integer) to get what the uncertainty actually is?

Here is a link discussing engine main bearing clearances. Think you can do them with a ruler to the nearest 1/16th of an inch?

https://www.bracketracer.com/engine/mains/mains.htm

That is what you are doing with temps and anomalies. You are “calculating” digits of resolution that don’t exist.

Digits in a measurement (resolution) provide a certain amount of information and only that amount. You can not add information to that measurement through mathematical calculations of any kind. Pretending that you can create additional information out of thin air via calculations is naivete at its finest.

Rick C
Reply to  bdgwx
February 23, 2022 9:52 am

Just curious – what was N in your uncertainty analysis? What factors were included in your uncertainty budget? Even the simplest MU evaluation must include at least three factors.

  1. One half of the instrument resolution
  2. The Uncertainty of the calibration reference
  3. The standard deviation of repeated measurements of the reference standard.
bdgwx
Reply to  Rick C
February 23, 2022 11:10 am

N was 3096. It was a purely type A evaluation so no specific factors were considered. It was just an analysis using 3096 comparisons of the monthly global average surface temperature between 1979 and 2021. Type B is the one where you combine specific factors like 1, 2, and 3 you mentioned above, plus a bunch of others like spatial and temporal sampling, grid infilling, etc. Individually, Morice et al. 2020 describe the procedure for HadCRUT, Lenssen et al. 2019 for GISTEMP, and Rohde et al. 2013 for BEST. It is interesting that each uses wildly different methods for computing the global average surface temperature and wildly different methods for assessing uncertainty, and yet the results are consistent with each other.

Rick C
Reply to  bdgwx
February 23, 2022 5:24 pm

From your response I infer that your data set had a standard deviation of 2.11, which you divided by the square root of 3096 to arrive at 0.038. What you have calculated is not the uncertainty; it is the standard error of the mean. Thus the 95% confidence limit of the mean of your data set is +/- 0.076. To arrive at the uncertainty of your mean you must add (in quadrature) all the other factors that increase the uncertainty. These factors are not subject to the square-root-of-N reduction. And once you arrive at the correct estimate of the actual uncertainty, multiply it by 2 to get the 95% confidence level.
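Rick C’s back-calculation is easy to reproduce. Note that the 2.11 standard deviation is his inference from the quoted numbers, not a published figure:

```python
import math

# SEM = SD / sqrt(N), so a standard error of 0.038 with N = 3096
# implies a sample standard deviation near 2.11 (Rick C's inference).
n = 3096
sem = 0.038
implied_sd = sem * math.sqrt(n)
print(round(implied_sd, 2))   # 2.11

# The 95% confidence half-width from that SEM (coverage factor of 2):
print(round(2 * sem, 3))      # 0.076
```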

bdgwx
Reply to  Rick C
February 23, 2022 8:47 pm

I never took the square root of 3096 in this particular case. What I did was compare 3096 measurements. The differences in measurements fell into a normal distribution with a standard deviation of 0.053. And sqrt(0.053^2/2) = 0.038. Notice that the uncertainty of the difference of two measurements, each with 0.038 standard uncertainty, combines via root-sum-square, or sqrt(0.038^2 + 0.038^2) = 0.053. It is important to understand that 0.053 is the standard uncertainty of the difference of two measurements, while 0.038 is the standard uncertainty of a single measurement.

There is another type A method where you can take the standard deviation of the values in the grid mesh and divide by the square root of the number of cells in the grid mesh. I have used that method before. In fact, I just did it for the UAH TLT Jan. 2022 grid, which has 9504 cells. The standard deviation was about 13, which yields a type A evaluation of 13/sqrt(9504) = 0.13 C. Interestingly, the Christy et al. 2003 type B evaluation was 0.10 C.
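bdgwx’s arithmetic here can be checked directly; whether the statistical method is appropriate is the separate dispute running through this thread. Note that sqrt(0.053²/2) is ≈ 0.0375, which rounds up to the quoted 0.038:

```python
import math

# If differences between two equally uncertain measurements have a
# standard deviation of 0.053, the implied single-measurement standard
# uncertainty is sqrt(0.053^2 / 2) ...
sd_diff = 0.053
u_single = math.sqrt(sd_diff**2 / 2)
print(round(u_single, 4))                 # 0.0375

# ... and two such uncertainties recombine in quadrature back to the
# difference uncertainty:
recombined = math.sqrt(u_single**2 + u_single**2)
print(round(recombined, 3))               # 0.053

# Grid-cell version: SD of about 13 over 9504 UAH TLT cells.
print(round(13 / math.sqrt(9504), 2))     # 0.13
```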

Jim Gorman
Reply to  bdgwx
February 23, 2022 7:40 pm

“N” should not be the number of samples. It should be the sample size. You may have 3096 samples, but “N” is probably 12, the number of months in a year.

“N” is only the number of measurements when you are measuring the same thing multiple times with the same device.

Paul Penrose
Reply to  bdgwx
February 23, 2022 11:04 am

You must distinguish between the two types of uncertainty: accuracy and precision. There is a big difference in how you handle them. Uncertainty due to precision errors can be reduced using mathematical processes such as averaging multiple readings; not so with accuracy. When you mix measurements from instruments that have different accuracy ratings and non-identical precision error models, determining the true uncertainty gets very complicated. As far as I can tell, none of the creators of those temperature sets has undertaken that kind of analysis. Personally, I discount everything to the right of the decimal point in the final results. Daily values seem believable at +/- 0.5, for what that’s worth.
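Paul Penrose’s distinction can be demonstrated with a short simulation. All numbers below are made-up assumptions for illustration, not station data: averaging many readings shrinks the random (precision) error, but a systematic (accuracy) bias passes through the average untouched.

```python
import random

# Simulated readings: a hypothetical true value plus a fixed systematic
# bias plus random per-reading noise. None of these are real figures.
random.seed(42)

TRUE_TEMP = 15.0   # assumed true value, deg C
BIAS = 0.4         # assumed systematic instrument bias, deg C
NOISE_SD = 0.5     # assumed random per-reading error, deg C

readings = [TRUE_TEMP + BIAS + random.gauss(0, NOISE_SD)
            for _ in range(10_000)]
mean = sum(readings) / len(readings)

# The average converges toward TRUE_TEMP + BIAS, not TRUE_TEMP:
# averaging removed the noise but cannot see the bias.
print(round(mean, 2))   # close to 15.4, not 15.0
```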

Carlo, Monte
Reply to  Paul Penrose
February 23, 2022 11:14 am

As bdgwx has been told many times, there is no way to use averaging of multiple measurements of a time series to reduce random uncertainty; after a value is measured once, it is gone forever.

bdgwx
Reply to  Carlo, Monte
February 23, 2022 12:00 pm

I’ve never claimed that averaging measurements reduces the uncertainty of the individual measurements themselves. That’s your strawman. You own it. Don’t expect me to defend it.

What I have said is that the uncertainty of an average is less than the uncertainty of the individual measurements that went into it. That is something completely different. And my statement here is backed by literally every statistics reference out there including the one you prefer.

Carlo, Monte
Reply to  bdgwx
February 23, 2022 12:38 pm

Keep on fooling yourself, see if I care.

Temperature is a time-series, see if you can figure out what this means.

Rick C
Reply to  bdgwx
February 23, 2022 5:47 pm

bdgwx: Sorry, your claim only applies to repeated measurements of the same artifact with the same instrument by the same person. To illustrate: when precision studies are done on a particular measurement process, two results are reported. Repeatability is a measure of variance when the same person using the same instrument measures the same artifact. Reproducibility evaluates the variance when a different person using a different instrument measures the same artifact. Repeatability is virtually always better than reproducibility, for obvious reasons. Further, the results of these studies are commonly used to evaluate measurement uncertainty, along with the other factors I mentioned previously.

bdgwx
Reply to  Rick C
February 23, 2022 8:27 pm

Rick C said: “bdgwx- Sorry, your claim only applies to repeated measurements of the same artifact with the same instrument by the same person.”

I’ve never seen any such requirement. In fact, it’s the exact opposite. In the GUM, the examples frequently use a combining function that accepts not only uncertainties of measurements of different artifacts, but also uncertainties from different instruments measuring completely different kinds of things, with completely different units, combined in arbitrarily complex output functions.

Rick C said: “Repeatability which is a measure of variance when the same person using the same instrument measures the same artifact. Reproducibility evaluates the variance when a different person using a different instrument measures the same artifact.”

R&R is a different topic.

Jim Gorman
Reply to  bdgwx
February 23, 2022 7:47 pm

No! The mean (average) has an SEM, which is the interval within which the mean may lie. It has nothing to do with the precision of the mean.

If you are finding the mean of a number of integer measurements, you may find that the SEM is very small, i.e., the sample mean is very close to the actual mean of a POPULATION. Yet the average can have no more digits of resolution than the measurements themselves.

Reply to  Carlo, Monte
February 23, 2022 4:20 pm

bdgwx never listens!

There are virtually no measurements (raw data) in the global average temperature compilation.

There are “adjusted” numbers and infilled numbers.

The adjusted numbers are personal opinions of what the raw measurements would have been if measured correctly in the first place.

These adjusted numbers are no longer real data.

The infilled numbers were never real data.

There is no way to verify whether the adjustments created a more accurate picture of reality, or whether the infilled numbers were in the vicinity of reality.

As a result, it is impossible to calculate a margin of error for such adjusted and infilled numbers.

The number of decimal places is not very relevant.

Carlo, Monte
Reply to  Richard Greene
February 23, 2022 6:37 pm

Exactly! What they do with data turns science into religion.

bdgwx
Reply to  Paul Penrose
February 23, 2022 12:08 pm

FWIW, I would put the uncertainty on individual temperature observations at closer to 1 C prior to WWII. I think 0.5 C is pretty reasonable for automated stations. One exception might be USCRN which I believe is reported to be around 0.3 C.

Carlo, Monte
Reply to  bdgwx
February 23, 2022 12:39 pm

You know nothing about real metrology, your estimates are worthless.

Tim Gorman
Reply to  bdgwx
February 24, 2022 10:25 am

That’s not what Berkeley thinks!

Here’s just one of their records:

462373 1 1929.625 22.500 0.0500 -99 -99

See the 0.0500 value? That’s the uncertainty they assign to a measurement taken in 1929! Huh?

Berkeley seems to use the precision instead of the actual uncertainty. 0.05 C is about 0.1 F. How many thermometers in 1929 had a resolution of 0.1 F?

Here’s what Berkeley says about their Breakpoint Corrected data set:

“4) “Breakpoint Corrected”: Same as “Quality Controlled” except a post-processing homogenization step has been applied to correct for apparent biasing events affecting the long-term mean or local seasonality. During the Berkeley Earth averaging process we compare each station to other stations in its local neighborhood which allows us to identify discontinuities and other inhomogeneities in the time series for individual weather stations. The averaging process is then designed to automatically compensate for various biases that may appear to be present. After the average field is constructed, it is possible to create a set of estimated bias corrections that suggest what the weather station might have reported had apparent biasing events not occurred. This data set is recommended for users who want fully quality controlled and homogenized station temperature data. This data set is created as an output of our averaging process, and is not used as an input.” (bolding mine, tpg)

In other words it’s all guesswork. As I’ve pointed out multiple times, you cannot average stations more than 50 miles apart, and the actual allowable distance may be less than that depending on elevation, terrain, humidity, etc. How many groupings of stations has Berkeley used that are outside this restriction?

And this isn’t all:

“5*) “Non-seasonal”: Same as “Single-valued” except that each series has been adjusted by removing seasonal fluctuations. This is done by fitting the data to an annual cycle, subtracting the result, and then readding the annual mean.  This preserves the spatial variations in annual mean.” (bolding mine, tpg)

Tell me again how the temperatures are not “adjusted” using guesses?

bdgwx
Reply to  Tim Gorman
February 24, 2022 1:30 pm

TG said: “See the 0.0500 value? That’s the uncertainty they assign to a measurement taken in 1929! Huh?”

What does the data say about that value? Would you mind posting the text verbatim so that the WUWT audience can see for themselves?

If you don’t then I will. I just want to give you the opportunity to post it out of respect.

Carlo, Monte
Reply to  bdgwx
February 24, 2022 3:34 pm

How magnanimous of you.

bdgwx
Reply to  Tim Gorman
February 25, 2022 7:01 am

Seeing as you don’t seem interested in posting what Berkeley Earth actually said I guess I’ll do it myself.

"For raw data, uncertainty values usually reflect only the precision at which the measurement was reported. For higher level data products, the uncertainty may include estimates of statistical and systematic uncertainties in addition to the measurement precision uncertainty."

Carlo, Monte
Reply to  bdgwx
February 25, 2022 7:17 am

More spamming.

Translation from bzx-ese:

“WHAAAA! NOTICE ME! PLEEEZE!”

Tim Gorman
Reply to  bdgwx
February 25, 2022 11:21 am

"What does the data say about that value?"

"For raw data, uncertainty values usually reflect only the precision at which the measurement was reported."

And the measurement precision in 1929 was 0.05C?

I’m not surprised you didn’t see fit to address that!

bdgwx
Reply to  Tim Gorman
February 25, 2022 1:16 pm

TG said: “And the measurement precision in 1929 was 0.05C?”

That looks correct to me. I discuss this in more detail down below.

Tim Gorman
Reply to  bdgwx
February 25, 2022 3:04 pm

Then why is the uncertainty in 2000 greater than the uncertainty in 1929? Only a climate scientist could come up with this!

Reply to  Paul Penrose
February 23, 2022 4:13 pm

Please pontificate on the accuracy and precision of the infilled temperature data.

Reply to  bdgwx
February 23, 2022 2:16 pm

Accuracy greater than instrumental resolution is impossible.

Reply to  Pat Frank
February 23, 2022 4:21 pm

ha ha
Maybe in real science,
but not in the magical world of “climate change” !

bdgwx
Reply to  Pat Frank
February 24, 2022 8:05 am

PF said: “Accuracy greater than instrumental resolution is impossible.”

I never said it was.

Reply to  bdgwx
February 24, 2022 10:05 am

"I never said it was."

Yes you did. Right here.

bdgwx
Reply to  bdgwx
February 24, 2022 1:27 pm

Ah…but I’m evaluating the combined uncertainty of a function that computes the AVERAGE of its inputs in that post. I’m not saying the uncertainty of the inputs themselves is lowered when fed into that function. I’m also not saying that you can reduce the uncertainty of individual measurements. All you can expect is a lower uncertainty on the AVERAGE relative to the uncertainty of the individual measurements that went into it. I have capitalized, bolded, and underlined the word AVERAGE so that there can be no mistake about what I, Taylor, the GUM, NIST, your own source Bevington, and all of the other texts dealing with propagation of uncertainty say.

Reply to  bdgwx
February 24, 2022 2:24 pm

"All you can expect is a lower uncertainty on the AVERAGE relative to the uncertainty of the individual measurements that went into it."

Not true. Not when the uncertainty is from instrumental resolution. That never, ever, diminishes no matter the number of measurements.

And the subject under discussion has been instrumental resolution, right from the start. Not averaging measurements with only random error.

Your supposed ±0.025 C uncertainty is 1/4th the MMTS resolution. Claiming to know historical air temperature to ±0.025 C is impossible. It’s magicking data out of thin air.

Even the new aspirated sensor of the USHCN claim ±0.03 C field resolution.

bdgwx
Reply to  Pat Frank
February 24, 2022 6:04 pm

PF said: “Not when the uncertainty is from instrumental resolution. That never, ever, diminishes no matter the number of measurements.”

We’re going to work through this together.

Take a temperature reported to the nearest integer of X. Considering only the reporting uncertainty, the true value could be between X – 0.5 and X + 0.5. If X is a random variable then that means the probability of true > X is 50% and true < X is 50%.

Now consider two values X and Y. The probability of both X and Y being lower than true is 25%. Given 3 values X, Y, and Z the probability of all 3 being lower than true is 12.5%.

The reason it is this way is that the uncertainty is uniform. And when you average values of this nature the positive and negative errors tend to cancel, thus reducing the uncertainty of the average.

Don’t take my word for it. Prove this out for yourself with a Monte Carlo simulation. Or you can use the NIST uncertainty calculator, which does both the Monte Carlo simulation and the partial derivative procedure for you.

BTW…bonus points…what is the standard deviation of a uniform distribution with a lower bound of A and an upper bound of B? What is it for integer values like the temperature reports? Hint…there is a well-known formula for it.
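The Monte Carlo check suggested above can be sketched in a few lines of Python (an editor's illustration, not any commenter's code). It estimates the spread of the average rounding error of N readings and compares it with the closed form (B – A)/sqrt(12) · 1/sqrt(N) for a uniform distribution on [-0.5, +0.5]:

```python
import random
import statistics

def trial(n_readings, n_trials=5000):
    # Each reading's rounding error is assumed uniform on [-0.5, +0.5];
    # return the simulated standard deviation of the average error.
    avgs = []
    for _ in range(n_trials):
        errs = [random.uniform(-0.5, 0.5) for _ in range(n_readings)]
        avgs.append(sum(errs) / n_readings)
    return statistics.stdev(avgs)

# stdev of U(-0.5, +0.5) is (B - A)/sqrt(12) = 1/sqrt(12), about 0.289
sigma_single = 1 / 12 ** 0.5
for n in (1, 10, 60):
    print(n, round(trial(n), 4), round(sigma_single / n ** 0.5, 4))
```

With only the uniform reporting error in play, the simulated spread tracks sigma/sqrt(N); the thread's dispute is over whether real station errors satisfy the independence this sketch assumes.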

Carlo, Monte
Reply to  bdgwx
February 24, 2022 8:34 pm

"We’re going to work through this together."

Aren’t you the special one.

Reply to  bdgwx
February 24, 2022 8:59 pm

You continue equating instrumental resolution to measurement uncertainty. It’s not.

A resolution limit is the perturbation level below which the instrument is no longer sensitive to the observable. The data do not exist.

bdgwx
Reply to  Pat Frank
February 25, 2022 7:00 am

PF said: “You continue equating instrumental resolution to measurement uncertainty. It’s not.”

No I’m not. And I agree that they are different things. You have to combine them via Bevington 3.14 or the well known root sum square formula.

This doesn’t change the fact that reporting measurements to the nearest integer results in a reporting uncertainty that is uniform from -0.5 to +0.5 of the reported value. This too must be combined with all of the other sources of uncertainty via Bevington 3.14 or the well known root sum square formula.
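A minimal numeric sketch of that root-sum-square combination, assuming the reporting and sensor components are independent; the 0.2 C sensor figure is an illustrative assumption, not a value from the thread's sources:

```python
# Root-sum-square (RSS) combination of two independent uncertainty components.
u_reporting = 1 / 12 ** 0.5   # stdev of U(-0.5, +0.5) reporting error, ~0.289
u_sensor = 0.2                # assumed instrument standard uncertainty, C
u_combined = (u_reporting ** 2 + u_sensor ** 2) ** 0.5
print(round(u_combined, 3))   # ~0.351
```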

Carlo, Monte
Reply to  bdgwx
February 25, 2022 7:03 am

"This too must be combined with …"

So YOU do it and quit yammering about Pat Frank’s paper.

Obviously it represents a threat to your worldview.

bdgwx
Reply to  Carlo, Monte
February 25, 2022 8:30 am

I do it because combining different uncertainty sources is appropriate. What I wouldn’t do is take two estimates of the same uncertainty source and combine them. That’s one of my concerns with Frank 2010. It combines the Folland and Hubbard uncertainties but does so only under the suspect assumption that they are different. We can’t tell from Folland’s publication either way if it is different or not because he was so terse about it. I’m willing to accept that they are different, but no one can, or is motivated to, show me the evidence that they are.

Carlo, Monte
Reply to  bdgwx
February 25, 2022 10:06 am

Quite obviously you’ve never had to use a manufacturer’s spec sheet inside a real UA.

Quit lecturing intelligent people on subjects for which you have no knowledge.

Reply to  bdgwx
February 25, 2022 11:17 am

"It combines the Folland and Hubbard uncertainties but does so only under the suspect assumption that they are different."

They are demonstrably different. The ±0.2 C is Folland’s judgment call. It is a guesstimated mean of error not known to be stationary, not known to be random, and of unknown distribution. The ±0.2 C must then necessarily condition every measurement as a constant in uncertainty.

The H&L uncertainty is an empirically determined field calibration error.

Their difference could not be more apparent or more stark.

Brohan, et al., 2006 provide a further explanation of Folland’s ±0.2 C guesstimate: “The random error in a single thermometer reading is about 0.2 C (1 σ) [Folland et al., 2001]; the monthly average will be based on at least two readings a day throughout the month, giving 60 or more values contributing to the mean. So the error in the monthly average will be at most 0.2/[sqrt(60)] = 0.03 C and this will be uncorrelated with the value for any other station or the value for any other month.”

The centrally important phrase is “in a single thermometer reading.” The 0.2 C error is a read error — the visual LiG thermometer parallax error of the station agent.

Folland and Brohan each assume the mean magnitude of historical parallax read error is 0.2 C and further assume it is of random distribution. All that without any possibility of validating either assumption.

The ±0.2 C error is therefore a judgment call, exactly as described in my paper. Not known to be random, and having no known distribution.

The ±0.2 C read error must then enter every estimate of temperature measurement uncertainty as a constant.

In 2011, WUWT hosted a repost about thermometer metrology, Mark Cooper original post here, that discussed the errors that accrue to meteorological thermometers, including discussion of parallax read errors. It’s not all random and 1/sqrtN roses.

Mark Cooper’s final meteorological LiG uncertainty of ±1.3 C is about 3x the MMTS calibration lower limit of uncertainty from systematic error.

As Willis Eschenbach pointed out in an extended comment, read errors tend to cluster around the most probable error, rather than around zero.
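The distinction drawn above between a random read error and a constant bias can be sketched in a short simulation (an editor's illustration using assumed values of 0.2 C random and 0.3 C constant): averaging 60 readings shrinks the random component toward 0.2/sqrt(60), but leaves the constant offset fully intact.

```python
import random
import statistics

def monthly_mean(n_readings, sigma_random, offset):
    # True temperature is 0; each reading carries an independent random
    # error plus a constant (systematic) offset.
    readings = [random.gauss(0, sigma_random) + offset for _ in range(n_readings)]
    return sum(readings) / n_readings

means = [monthly_mean(60, 0.2, 0.3) for _ in range(5000)]
print(statistics.mean(means))   # stays near 0.3: the constant bias survives
print(statistics.stdev(means))  # near 0.2/sqrt(60) ~ 0.026: random part shrinks
```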

bdgwx
Reply to  Pat Frank
February 25, 2022 1:14 pm

PF said: “Brohan, et al., 2006 provide a further explanation of Folland’s ±0.2 C guesstimate”

Now we’re getting somewhere!

Brohan said: “So the error in the monthly average will be at most 0.2/[sqrt(60)] = 0.03 C and this will be uncorrelated with the value for any other station or the value for any other month.”

Didn’t this catch your eye?

Brohan used σ / sqrt(N).

You used sqrt[N*σ/(N-1)].

BTW…we know that the 0.2 figure is a standard uncertainty because 1) it is described as being “standard” and 2) it is used in a formula that only accepts a standard deviation. We also know that it is random and uncorrelated. The uncorrelated bit is important because it proves that Bevington 3.14 or the shortcut 4.12 which is derived from 3.14 are the appropriate formulas for the propagation of uncertainty in this case.

PF said: “Mark Cooper’s final meteorological LiG uncertainty of ±1.3 C is about 3x the MMTS calibration lower limit of uncertainty from systematic error.”

That seems reasonable. I’ve been telling people on here the LiG uncertainty is higher than the oft cited 0.5’ish C value I’ve seen thrown around.

PF said: “As Willis Eschenbach pointed out in an extended comment, read errors tend to cluster around the most probable error, rather than around zero.”

Why would that not cancel out with the anomaly subtraction? I mean, if it truly clusters around a specific value then that specific value will be equally represented in both components of the anomaly subtraction. Mathematically this is:

ΔT = (Tb + E) – (T + E) = (Tb – T) + (E – E) = Tb – T

…which shows that the error E is removed by the subtraction.
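A numeric sketch of the two cases at issue here, using illustrative values: an error E common to both terms cancels exactly, as in the algebra above, while independent errors in the two terms do not cancel and instead combine in quadrature, which is the objection raised in the replies.

```python
import random
import statistics

# Case 1: an error E shared by the reading and its baseline cancels exactly.
E = 0.7
anomaly_shared = (10.0 + E) - (8.0 + E)   # exactly 2.0, E is gone

# Case 2: independent errors do not cancel; they add in quadrature.
anoms = [(10.0 + random.gauss(0, 0.2)) - (8.0 + random.gauss(0, 0.2))
         for _ in range(40000)]
print(anomaly_shared)             # 2.0
print(statistics.stdev(anoms))    # ~0.2*sqrt(2), about 0.283
```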

Carlo, Monte
Reply to  bdgwx
February 25, 2022 1:25 pm

"Why would that not cancel out with the anomaly subtraction? I mean …"

Subtracting a baseline does NOT cancel uncertainty, it INCREASES IT!

Jim Gorman
Reply to  Carlo, Monte
February 25, 2022 1:34 pm

Don’t you know that the CALCULATED baseline is a CONSTANT and has no uncertainty? /sarc

Carlo, Monte
Reply to  Jim Gorman
February 25, 2022 1:53 pm

Silly me!

bdgwx
Reply to  Carlo, Monte
February 25, 2022 2:25 pm

CM said: “Subtracting a baseline does NOT cancel uncertainty, it INCREASES IT!”

Strawman. I never said subtracting the baseline cancels uncertainty. What I’m saying is that systematic error as described by Willis cancels when subtracting the baseline.

Carlo, Monte
Reply to  bdgwx
February 25, 2022 2:46 pm

So “systematic error” is not uncertainty on the planet you inhabit?

No wonder you are so confused.

Tim Gorman
Reply to  bdgwx
February 25, 2022 4:11 pm

What makes you think the systematic error cancels?

You *add* uncertainties whether you are adding or subtracting.

a = present measurement
b = baseline average
anomaly = a – b

The uncertainty of a + b and a – b is the same!

The total uncertainty is the sum of the uncertainties in “a” and in “b”. So the total uncertainty in the anomaly (a-b) is ẟa + ẟb. Or, if you like sqrt[ ẟa^2 + ẟb^2 ].

The uncertainties do *NOT* cancel!

Tim Gorman
Reply to  bdgwx
February 25, 2022 2:59 pm

"We also know that it is random and uncorrelated."

How do you know it is random? If it is a *stated* value then it can’t be random.

What do you think it is correlated with? Itself?

"then that specific value will be equally represented in both components of the anomaly subtraction."

WE: “cluster around the most probable error”

Why do you think a “cluster” will be equally represented in both components? It’s the old Taylor rule that only random error cancels. A cluster doesn’t represent either a random error or a constant error.

Reply to  bdgwx
February 25, 2022 9:38 am

"You have to combine them via Bevington 3.14 or the well known root sum square formula."

You just refuted your own program of (specious) fault-finding.

"uniform from -0.5 to +0.5 of the reported value."

Systematic error is not a uniform distribution.

bdgwx
Reply to  Pat Frank
February 25, 2022 10:11 am

PF said: “Systematic error is not a uniform distribution.”

I didn’t say it was. I also never characterized the uncertainty that occurs as a result of reporting temperatures to the nearest integer as systematic error.

Carlo, Monte
Reply to  bdgwx
February 25, 2022 10:29 am

And you STILL refuse to acknowledge what is clearly stated upfront in Pat Frank’s paper, that it is a LOWER LIMIT. Do you understand that this implies a more rigorous value is likely to be considerably higher?

Tim Gorman
Reply to  bdgwx
February 25, 2022 10:33 am

"Take a temperature reported to the nearest integer of X. Considering only the reporting uncertainty the true value could be between X – 0.5 and X +0.5. If X is a random variable then that means the probability of true > X is 50% and true < X is 50%."

Nope. Uncertainty has no probability distribution. All you know is that it is somewhere in the interval. You simply don’t know if X is a random variable or not.

“Now consider two values X and Y. The probability of both X and Y being lower than true is 25%. Given 3 values X, Y, and Z the probability of all 3 being lower than true is 12.5%.”

Again, uncertainty is not a random variable. And if X and Y are measurements of different things, e.g. temperature at location X and location Y, you simply don’t know that there is anything random about the measurements.

If there is any systematic error in either one then all of the measurements may be on one side of the true value or on the other side. You have no way of knowing.

"The reason it is this way is that the uncertainty is uniform."

Again, uncertainty has no probability distribution, not even uniform. If there is any systematic error at all, and there is almost *always* some systematic error in any physical measurement, then you simply have no idea where in the uncertainty interval the true value might lie.

I don’t need to do any simulation. I’ve run machine tools. I know from experience that, because of wear on the cutting tool, the errors in the outputs can all lie on one side of the “true value”. That’s one reason for re-calibrating the machine on a regular basis against a standard gauge block.

Jim Gorman
Reply to  Tim Gorman
February 25, 2022 3:26 pm

It is like measuring all the rod journals in an engine and averaging them to find what single value of oversize bearings to order. I mean the average is more accurate and precise than any of the individual measurements, right?

Tim Gorman
Reply to  Pat Frank
February 25, 2022 10:21 am

The sensor uncertainty is only PART of the uncertainty of the measurement station.

Carlo, Monte
Reply to  bdgwx
February 24, 2022 3:35 pm

Still spewing your standard bullshite.

This is really just the same as your manufactured data.

Tim Gorman
Reply to  bdgwx
February 25, 2022 11:18 am

"Ah…but I’m evaluating the combined uncertainty of a function that computes the AVERAGE of its inputs in that post. I’m not saying the uncertainty of the inputs themselves are lowered when fed into that function."

Of what use is the average of the uncertainties? What if each element has a different uncertainty? You’ll get the same overall uncertainty using u_1 + u_2 + … + u_n as you’ll get with N * u_avg.

” I’m also not saying that you can reduce the uncertainty of individual measurements.”

Then of what use is the average of the uncertainties? What can you do with it?

“All you can expect is a lower uncertainty on the AVERAGE relative to the uncertainty of the individual measurements that went into it.”

In other words, mathematical masturbation!

bdgwx
Reply to  Tim Gorman
February 25, 2022 12:57 pm

TG said: “Of what use is the average of the uncertainties?”

I have no idea. I never said anything about the average of the uncertainties. What I said is that it is the uncertainty of a function that computes the average. That is a completely different thing with completely different math.

TG said: “What if each element has a different uncertainty?”

Then you have to use the partial differential method described by Bevington 3.14, Taylor 3.47, or GUM 10.

TG said: “Then of what use is the average of the uncertainties?”

I still have no idea. And I still never said anything about the average of the uncertainties.

Tim Gorman
Reply to  bdgwx
February 25, 2022 2:50 pm

"I have no idea. I never said anything about the average of the uncertainties."

What in Pete’s name do you think you were calculating?

“What I said is that it is the uncertainty of a function that computes the average”

So a function that computes the average uncertainty is not calculating the average uncertainty?

“Then you have to use the partial differential method described by Bevington 3.14, Taylor 3.47, or GUM 10.”

And as I showed you that just winds up with the root-sum-square of the individual uncertainties, not with the average uncertainty.

“Ah…but I’m evaluating the combined uncertainty of a function that computes the AVERAGE of its inputs in that post.”

In other words you are running away from your own assertion!

The function that computes the average of the inputs is

q_avg = (x_1 + x_2 + x_3 + …. + x_n)/n

And the average uncertainty is

ẟq/q_avg = ẟx_total/x_total ->

ẟq = (x_total/n)(ẟx_total/x_total) = ẟx_total/n

What is ẟx_total/n except the average uncertainty?

You’ve converted the individual uncertainties in the data set to an average, so that ẟx_total = (ẟx_total/n) * n.

The average uncertainty is merely mental masturbation. It tells you nothing that you don’t already know. It is *certainly* not the total uncertainty, nor is it equal to the standard deviation of the mean calculated from sample means.
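For readers following this exchange, the three quantities being argued over are numerically distinct objects; a minimal sketch, assuming four independent measurements each with standard uncertainty 0.5 (illustrative numbers only, using the standard propagation rules from Taylor/GUM):

```python
# Three different things this sub-thread keeps distinguishing (or conflating):
u = [0.5, 0.5, 0.5, 0.5]                  # per-measurement uncertainties
n = len(u)
avg_of_u = sum(u) / n                     # the "average uncertainty"
u_of_sum = sum(x ** 2 for x in u) ** 0.5  # RSS uncertainty of x1 + ... + xn
u_of_avg = u_of_sum / n                   # uncertainty of (x1 + ... + xn)/n
print(avg_of_u, u_of_sum, u_of_avg)       # 0.5 1.0 0.25
```

The last line only holds if the measurement errors are independent, which is precisely the assumption disputed throughout the thread.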

Jim Gorman
Reply to  bdgwx
February 23, 2022 6:12 pm

You still don’t understand the GUM or uncertainty textbooks. The equations you are using are based on either multiple measurements of the same measurand with the same device OR that there is a functional relationship between multiple measurements that make up a total measurement like a volume or the total height of a staircase with multiple risers.

Independent measurements of different measurands (different microclimates) with different devices have no functional relationship. Using a statistical parameter like a mean is not a substitute for a functional relationship.

You cannot use an SEM as a descriptor of measurement uncertainty. It is only a descriptor of the precision of the mean in a distribution of unrelated stand-alone measurements. It is also calculated under the assumption that the data points are 100% accurate and have a precision far beyond the resolution of the measuring devices. If Significant Digit rules were properly used, anomalies would have no more resolution/precision than the recorded temperatures they are derived from.

Reply to  Jim Gorman
February 24, 2022 10:16 am

How many times have you and so many others gone around and around about uncertainty and error with bdgwx, Jim?

And he just keeps repeating the same mistake over and over yet again. It’s hard to believe bdgwx’s repetitive insistence on wrongness is due to a supernaturally refractive ignorance.

A variation upon Higher Superstition comes to mind. Perhaps Higher Trollness. Über alles dedication to a subjectivist narrative.

Carlo, Monte
Reply to  Pat Frank
February 24, 2022 10:27 am

A very good question; bellman is almost a carbon copy in this respect.

bdgwx
Reply to  Pat Frank
February 24, 2022 1:22 pm

Pat, if I’m wrong then all of the texts on propagation of uncertainty are wrong including those provided by Taylor, the GUM, NIST, and even your own source Bevington. It would also mean a large portion of the knowledge science has obtained over the centuries is wrong. Is this really the claim you are wanting to make?

Reply to  bdgwx
February 24, 2022 2:37 pm

You’re insistently misapplying their work, bdgwx.

Jim and Tim Gorman know that, Carlo, Monte knows that, Richard Greene knows that, Rick C knows that, I know that, Paul Penrose knows that. And others unnamed.

We’ve all explained the problem ad nauseam. You insist on making the same wrong claims ad nauseam + 1.

Ad nauseam + 1 seems to be your only goal.

bigoilbob
Reply to  Pat Frank
February 24, 2022 4:10 pm

Pat Frank – rightly – cites Carlo, Monte, Richard Greene, Rick C, and “others unnamed”.

bdgwx – also rightly – cites “all of the texts on propagation of uncertainty”.

Oooh, tough choice…. 

Carlo, Monte
Reply to  bigoilbob
February 24, 2022 4:15 pm

Another voice from the idiocy squad—blob.

Reply to  bigoilbob
February 24, 2022 9:01 pm

bdgwx wrongly cites the texts, bob.

Either you’re as ignorant as he is, or you’re just being opportunistically disingenuous.

Tim Gorman
Reply to  bigoilbob
February 25, 2022 12:43 pm

A False Appeal to Authority is still an argumentative fallacy. When you don’t understand the difference between the standard deviation of a mean and the total uncertainty of a dataset then trying to say that the authorities quoted agree with you makes no sense.

Do *YOU* understand the difference between the two? Have *YOU* ever built a beam to span a foundation by using only the standard deviation of the mean to figure the length of the beam you’ve put together?

Jim Gorman
Reply to  Pat Frank
February 25, 2022 8:09 am

These folks have never used real physical measuring devices to try to obtain the best measurement possible. They have never asked themselves: is the instrument I am using in good repair? With a micrometer, has someone dropped it and bent the frame? Is the anvil the correct one? Do I need to get a new $1000 one with better resolution? I suspect they have never had to take the width of a saw blade into account, or the wobble in a miter saw blade, when doing finish carpentry on crown molding.

As a result, they are dealing with numbers, which to a mathematician, can be manipulated to a never ending extent.

Reply to  bdgwx
February 24, 2022 10:01 am

"It’s not magic. It’s how the propagation of uncertainty plays out when you follow it through all of the steps of the process."

Very amusing, bdgwx. There are no data to be had below the instrumental resolution limit.

Even assuming that all measurement error was iid random and averaged away (pretty much never true), the ultimate uncertainty attached to the measurement would be the instrumental resolution limit.

Uncertainty never is, and never can be, less than instrumental resolution.

bdgwx
Reply to  Pat Frank
February 24, 2022 1:19 pm

PF said: “Uncertainty never is, and never can be, less than instrumental resolution.”

But…the uncertainty of the average of several measurements can be less than the resolution of the instrument used to provide those measurements and is less than the total uncertainty of the individual measurements.

That is a fact. Your own source (Bevington) says that the uncertainty of a function that computes the average is σ_avg = σ/sqrt(N) when σ is the same for all uncertainties that are being propagated into the average. And this is consistent with all of the other texts on propagation of uncertainty like Taylor and the GUM and which you can confirm with the NIST uncertainty calculator.

bigoilbob
Reply to  bdgwx
February 24, 2022 2:34 pm

"But…the uncertainty of the average of several measurements can be less than the resolution of the instrument used to provide those measurements and is less than the total uncertainty of the individual measurements."

Even Dr. Frank, a man with undeniably strong pockets of intelligence, has amygdal overamp and retreats into hysterical blindness when confronted with this truth. Dan Kahan redux, in his discussion of “ideologically motivated cognition as a form of information processing that promotes individuals’ interests in forming and maintaining beliefs that signify their loyalty to important affinity groups.”

 http://journal.sjdm.org/13/13313/jdm13313.pdf

Carlo, Monte
Reply to  bigoilbob
February 24, 2022 3:38 pm

blob to the rescue with a fine blob word salad!

Well done blob!

Jim Gorman
Reply to  bigoilbob
February 24, 2022 5:41 pm

Bull Snot.

Let’s use the brake rotor test. Brake rotors with a runout of 0.002″ should be replaced. Would you trust a mechanic who pulls out his trusty 1 foot tape measure with 1/16″ markings, measures yours, then measures 25 rotors from the junk pile, and tells you whether or not you need repairs?

That is what you are saying. “… the uncertainty of the average of several measurements can be less than the resolution of the instrument …”

You obviously have no idea about the concept you’re trying to discuss.

bigoilbob
Reply to  Jim Gorman
February 24, 2022 6:02 pm
  1. You – either purposefully, or subconsciously, a la Dan Kahan – picked a sample size too small and/or a measurement method too coarse.
  2. You picked an instance that never would occur in real life.
  3. There is no comparable climatic temperature measurement analog.

Maybe if you try harder….

Carlo, Monte
Reply to  bigoilbob
February 24, 2022 8:26 pm

Idiot.

Jim Gorman
Reply to  bigoilbob
February 25, 2022 9:12 am

Try addressing the point!

I even put it in bold.

“… the uncertainty of the average of several measurements can be less than the resolution of the instrument …”

Reply to  bigoilbob
February 25, 2022 11:24 am

bob — “But…blah…”

No it can’t. bdgwx is wrong and so are you.

Lil-Mike expressed it most succinctly: “you can’t justify a precision greater than the granularity of your equipment.”

You and bdgwx claim detail is available within a single pixel. It’s not. You’re wrong.

Reply to  bdgwx
February 24, 2022 2:40 pm

"But…the uncertainty of the average of several measurements can be less than the resolution of the instrument used to provide those measurements…"

No, it can’t.

And you’re misapplying Bevington. Again. Ad nauseam.

bdgwx
Reply to  Pat Frank
February 24, 2022 5:48 pm

PF said: “No, it can’t.
And you’re misapplying Bevington. Again. Ad nauseam.”

Oh yes it can. And that is exactly what Bevington says. Skip to example 4.1 if you don’t want to derive the formula on your own. However, I highly advise starting with the generalized propagation equation 3.14. Define your function x = f(y_1, y_2, …, y_N) = (1/N)Σ[y_i, 1, N] and follow the partial differential procedure for equation 3.14. You will see that the uncertainty of the average is less than the uncertainty of the individual inputs into the function computing the average.

Don’t take my word for it. Actually do it yourself. If you want I’ll work through it with you.
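The partial-derivative exercise described above can be sketched numerically (an editor's illustration): for x = (1/N)Σy_i each partial derivative is 1/N, and with equal independent input uncertainties the propagation sum collapses to u/sqrt(N).

```python
# Generalized propagation (the Bevington eq. 3.14 form, independent inputs):
# u_x^2 = sum over i of (dx/dy_i)^2 * u_i^2, applied to x = (1/N)*sum(y_i).
N, u_y = 5, 0.5
partials = [1.0 / N] * N                  # dx/dy_i = 1/N for every input
u_x = sum((p * u_y) ** 2 for p in partials) ** 0.5
print(u_x)                                # equals u_y / sqrt(N), ~0.2236
```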

bigoilbob
Reply to  bdgwx
February 24, 2022 6:11 pm

"If you want I’ll work through it with you."

Ordinarily, I would accuse you of condescension. But I’ll bet you a coke that Dr. Frank won’t take this golden opportunity to prove you wrong. Rather, the usual prevarications that so far have resulted in him preaching to the ever-reducing acolytion here.

The Rule of Raylan still applies:

Carlo, Monte
Reply to  bigoilbob
February 24, 2022 8:26 pm

Look into a mirror, blob.

Reply to  bigoilbob
February 24, 2022 9:32 pm

The low quality of your comments often makes it difficult to respond to you with civility, bob. But this time you’ve been stupidly scurrilous.

bdgwx has misread or misunderstood 4.1 in Bevington. And you’re so foolish as to disparage from ignorance. No surprise there.

Reply to  bdgwx
February 24, 2022 9:23 pm

"Skip to example 4.1 if you don’t want to derive the formula on your own."

Example 4.1 does not deal with instrumental resolution at all. The uncertainties in that example are given by Bevington eqn. 4.13, which is just the variance of the mean measurement.

Bevington doesn’t deal with instrumental resolution at all. The closest he gets is instrumental uncertainties— finite precision — which isn’t the same thing as below the detection limit.

Your understanding is incorrect, bdgwx.

bigoilbob
Reply to  Pat Frank
February 25, 2022 6:05 am

"Bevington doesn’t deal with instrumental resolution at all. The closest he gets is instrumental uncertainties— finite precision — which isn’t the same thing as below the detection limit."

“- Angela de Marco: God, you people work just like the mob! There’s no difference.
– Regional Director Franklin: Oh, there’s a big difference, Mrs. de Marco. The mob is run by murdering, thieving, lying, cheating psychopaths. We work for the President of the United States of America.”

Carlo, Monte
Reply to  bigoilbob
February 25, 2022 6:40 am

You’re insane, blob.

HTH

bdgwx
Reply to  Pat Frank
February 25, 2022 6:52 am

PF said: “Example 4.1 does not deal with instrumental resolution at all.”

Add it in using the procedure defined in section 3.2 and see if it changes the fundamental conclusion that the uncertainty of an average is less than the total uncertainty of the individual measurements that went into it.

PF said: “Bevington doesn’t deal with instrumental resolution at all.”

First, he tells you that you have to propagate all uncertainty in section 3.2.

Second, if you disagree then why are you using him as a source?

bigoilbob
Reply to  bdgwx
February 25, 2022 7:17 am

Worst case for your view, in my view, is that the instrumental resolution introduces stair steps into the overall uncertainty envelope. So, there would be no change in the method of evaluation of their propagation.

And FYI, folks: w.r.t. Dr. Frank’s unexamined (by him) head fake about errors in accuracy, they are either correlated or not. If they’re correlated, thereby introducing systemic accuracy error, then that has been spotted and accounted for already. If they are uncorrelated, then the normal evaluation of propagation still applies.

Carlo, Monte
Reply to  bigoilbob
February 25, 2022 7:20 am

blob hath spake, the world is shaken.

Reply to  bigoilbob
February 25, 2022 11:58 am

"head fake about errors in accuracy,"

Taken from published field calibration experiments.

Yet another fake head comment from you, bob. Completely impervious to reality.

"that has been spotted and accounted for already"

Where?

bigoilbob
Reply to  Pat Frank
February 25, 2022 1:54 pm

"Where?"

Judicious use of the rule of Hitchens:

“What can be asserted without evidence can be dismissed without evidence.”

Your explicit assertion is that there are systemic, unremediated accuracy errors in old temp data. Errors large enough to change the temp trends under discussion. But you repeatedly fail to present any.

I am merely dismissing what has been asserted without evidence.

Carlo, Monte
Reply to  bigoilbob
February 25, 2022 2:01 pm

Another hypocrite.

Tim Gorman
Reply to  bigoilbob
February 25, 2022 3:07 pm

So you believe that the uncertainty of temp measurements in 1929 is 0.05 but in 2000 it is 0.1?

What evidence do you need to just know intuitively that something is wrong with this?

bigoilbob
Reply to  Tim Gorman
February 25, 2022 3:40 pm

RUOK? If so, then you might want to address this post to whoever you think believes that. Directionally, the 2000 uncertainty is less than 1/3 that of 1929.

Tim Gorman
Reply to  bigoilbob
February 25, 2022 4:19 pm

“Your explicit assertion is that there are systemic, unremediated accuracy errors in old temp data.”

“What can be asserted without evidence can be dismissed without evidence.”

You wanted evidence? I just gave it to you. Of course you just used the argumentative fallacy of Argument by Dismissal to avoid addressing the issue. There are systemic errors in the data and they are un-remediated.

“I am merely dismissing what has been asserted without evidence.”

You now have the evidence. My guess is that you will *still* dismiss Pat’s assertion!

bigoilbob
Reply to  Tim Gorman
February 25, 2022 4:35 pm

Dismissing for lack of proof of a claim is not “Argument By Dismissal”. “Argument By Dismissal” is dismissal of a claim because it is “absurd”. I did not claim that about Dr. Frank’s claim. Rather, I dismissed it because he offered no proof and because it is part and parcel of his Dan Kahan “flight” mindset.

“You now have the evidence.”

To quote Dr. Frank, “where”?

Tim Gorman
Reply to  bigoilbob
February 26, 2022 6:37 am

Argument by Dismissal: an idea is rejected without saying why. Dismissals usually have overtones. For example, “If you don’t like it, leave the country” implies that your cause is hopeless, or that you are unpatriotic, or that your ideas are foreign, or maybe all three. “If you don’t like it, live in a Communist country” adds an emotive element.

Poisoning the Well: discrediting the sources used by your opponent. This is a variation of Ad Hominem

“What can be asserted without evidence can be dismissed without evidence.”

This is Argument by Dismissal. You provided no evidence as to why you dismissed the argument.

Pat *did* provide you the evidence. You just used the argumentative fallacy of Poisoning the Well by saying his evidence was not evidence.

You are GREAT at using argumentative fallacies.

bdgwx
Reply to  Tim Gorman
February 25, 2022 7:07 pm

TG said: “So you believe that the uncertainty of temp measurements in 1929 is 0.05 but in 2000 it is 0.1?”

The reporting uncertainty for an F reading from station 63 in 1929 with 30 days of observations is (0.5 * 5/9) / sqrt(30) = 0.05.

The reporting uncertainty for a C reading from station 993313 in 2000 with 24 days of observations is 0.5 / sqrt(24) = 0.1.

I’m not seeing what the issue is.
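For what it’s worth, the two figures can be checked numerically (a sketch of the arithmetic exactly as stated in the comment; whether dividing the reporting resolution by sqrt(N) is a legitimate model is precisely what the rest of this thread disputes):

```python
import math

# Reporting resolution of 0.5 in the recorded unit, reduced (per the
# comment's model) by the square root of the number of daily observations.
def reporting_uncertainty(resolution_c, n_days):
    return resolution_c / math.sqrt(n_days)

# 1929 station: readings in F, so 0.5 F = 0.5 * 5/9 C, 30 observations
u_1929 = reporting_uncertainty(0.5 * 5 / 9, 30)

# 2000 station: readings in C, 24 observations
u_2000 = reporting_uncertainty(0.5, 24)

print(round(u_1929, 2))  # 0.05
print(round(u_2000, 2))  # 0.1
```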

Carlo, Monte
Reply to  bdgwx
February 25, 2022 9:05 pm

I’m not seeing what the issue is.

Very simple, you are an incompetent fool who doesn’t know what he doesn’t know.

I showed how this root-N garbage is illogical and nonphysical, and you pull out a ridiculous fable about “correlation”.

Got any more rabbits in your hat?

Tim Gorman
Reply to  bdgwx
February 26, 2022 1:58 pm

Where did you get 30 measurements? The record I posted for 1929 shows the number of observations as 99 – i.e. unknown!

And you are *STILL* trying to say average uncertainty is the same as total uncertainty! If these are monthly averages of temperatures then the uncertainty is the TOTAL uncertainty, not the average uncertainty of each individual temperature reading.

You are, once again, trying to divide by n when it is not justified.

As both Bevington and Taylor show:

(σ_total)^2 = (σ_u1)^2 + (σ_u2)^2 + … + (σ_u30)^2

It is NOT

(σ_total)^2 = [(σ_u1)^2 + (σ_u2)^2 + … + (σ_u30)^2] / 30

Why do you keep dividing the total uncertainty by the number of data elements in order to get something you consider to be the uncertainty? It is *NOT* the uncertainty!
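The difference between the two expressions above can be made concrete with a toy calculation (a sketch; the 30 equal uncertainties of ±0.5 are an illustrative assumption, not data from the thread):

```python
import math

sigmas = [0.5] * 30  # 30 individual measurement uncertainties (assumed)

# Root-sum-square: the total uncertainty of the combined measurements
total = math.sqrt(sum(s**2 for s in sigmas))

# Dividing the summed variance by n gives a much smaller per-element value
divided = math.sqrt(sum(s**2 for s in sigmas) / len(sigmas))

print(round(total, 3))    # 2.739
print(round(divided, 3))  # 0.5
```

Note that when all the individual uncertainties are equal, the divided version just gives back the single-measurement σ, which is the "average uncertainty per element" the comment describes.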



Reply to  bigoilbob
February 25, 2022 10:48 pm

“Your explicit assertion is that there are systemic, unremediated accuracy errors in old temp data. Errors large enough to change the temp trends under discussion. But you repeatedly fail to present any.”

The MMTS sensor is more accurate than any historical meteorological LiG thermometer in a Stevenson screen. Of that, there is no dispute.

Hubbard and Lin 2002, for example, showed that the PRT in a Stevenson screen produced far more error than the equivalent PRT in an MMTS shelter. That greater error was due to greater susceptibility to the effects of irradiance and wind speed.

Between about 1880 and 1980 most meteorological air temperatures globally were measured with LiG thermometers housed in a Stevenson screen. After 1980, they were gradually replaced with MMTS sensors.

Therefore, the lower limit of MMTS systematic field measurement calibration error will necessarily be less than the lower limit of error of a historical LiG thermometer in a Stevenson screen.

That makes the MMTS lower limit globally and historically relevant.

That’s definitive evidence, bob. You can stop dismissing it now.

bigoilbob
Reply to  Pat Frank
February 26, 2022 5:54 am

I never claimed that there were no systemic errors. Just that they have been – yes – adjusted, based on our knowledge of them. And it has also been obvious for some time that those “adjustments”, while valid, have nada relevance to the trends now under discussion.

Folks, the common path of these subthreads:

  1. Old temperature measurements have relatively poor precision and accuracy.
  2. But we hate the known, proven methods of “adjusting” for any systemic errors.
  3. We hand wave and pronounce the resulting trends statistically indurable without doing the work to actually find out.
  4. Return to step 1.

Please point out any false statements in this link.

https://www.carbonbrief.org/explainer-how-data-adjustments-affect-global-temperature-records

Carlo, Monte
Reply to  bigoilbob
February 26, 2022 6:30 am

The fact that the “adjustments” continually change over time (as has been well-documented by others) reveals the claim that they are known is false.

And if, as you claim, they have no effect on the Holy Trends, why bother with them?

Reply to  bigoilbob
February 26, 2022 1:06 pm

I never claimed that there were no systemic errors.

Evasive. You claimed I “repeatedly fail to present any.” And were insulting in doing so.

And yet, the paper does present direct evidence and cites the H&L calibration experiment.

The MMTS calibration error is a valid global lower limit. All these things were and are available for investigation by any competent investigator. (inference alert)

Just that they have been – yes – adjusted, based on our knowledge of them.

Systematic errors due to solar irradiance and wind speed are both deterministic and variable in time and space.

The past systematic measurement errors that arose from these uncontrolled environmental variables cannot be known.

The reason is because the magnitudes and signs of unrecorded past measurement errors are forever lost.

Errors unknown in sign or magnitude cannot be adjusted away. Ever.

Knowing this is not rocket science.

Your link doesn’t have one word about irradiance or wind speed sensor measurement errors. It is oblivious to the greatest source of systematic error in air temperature measurements.

And for good reason. To recognize the existence of these errors is to toss out any climatological significance of the purported trend.

And with loss of significance goes loss of climatological employment income. Which likely explains the widespread professional reticence to operate with professional integrity.

Your whole link is beyond false, bob. It’s misleading, it’s a lie by omission, and it’s a monument to incompetence. It’s a scientific farce.

bigoilbob
Reply to  Pat Frank
February 26, 2022 1:41 pm

“To recognize the existence of these errors is to toss out any climatological significance of the purported trend.”

180 out. To recognize these errors and to correct them in a timely manner increases the significance of those trends. But it’s all moot anyhow. Whether you use “adjusted” or raw data, the concerning trends are still there and are still statistically/physically undeniable. It’s why, when Nick Stokes and others commits truth, and asks you all to compare them, you deflect big time.

I believe you know that as well, since, even with your inflated uncertainties, you still get statistically/physically durable trends. Which is why, in your 2010 paper you just threw up your hands and declared victory instead of actually evaluating those trends. with your concomitant uncertainties.

AGAIN, Dan Kahan, redux….

Reply to  bigoilbob
February 26, 2022 5:23 pm

To recognize these errors and to correct them in a timely manner increases the significance of those trends.

More blather. As noted, the systematic measurement errors corrupting past temperature records cannot be “corrected.”

Whether you use “adjusted” or raw data, the concerning trends are still there and are still statistically/physically undeniable.

So says bob about a net 0.8 C trend with a lower limit 95% physical uncertainty of ±1 C. Ever so statistically/physically great thinking. No doubt about that. (is an irony alert necessary here?)

It’s why, when Nick Stokes and others commits truth, and asks you all to compare them, you deflect big time.

You’ve obviously never been present at any of those debates. In this one, for example, Nick and I debated his claim that thermometers have perfect accuracy and infinite precision.

I even went to Nick’s site to debate him there. He deleted my comment.

With your continued side-stepping of the impossibility of raising past measurement errors from the dead, you’re not a guy to accuse others of deflection.

“even with your inflated uncertainties …”

Pejorative dismissal. The empirical field calibration uncertainties are analytical and entirely defensible.

“… you still get statistically/physically durable trends.”

No you don’t. Evident just by inspection.

The graphic below is the 2010 Figure, but with a 2σ lower limit uncertainty bound stemming from the MMTS field calibration.

Where is your, “statistically/physically durable trend,” bob?

Try squinting. Maybe that will help.

Figure03-2 sigma.png
Tim Gorman
Reply to  Pat Frank
February 27, 2022 5:24 am

my guess is that bob won’t even understand what the graph shows!

bigoilbob
Reply to  Pat Frank
February 27, 2022 6:48 am

“Where is your “statistically/physically durable trend,” bob?”

As the nerds told Al in “Married With Children”, when they were lording their riches over him at their HS reunion, “You should’a done your homework, Al”.

https://wattsupwiththat.com/2022/01/14/2021-tied-for-6th-warmest-year-in-continued-trend-nasa-analysis-shows/#comment-3436608

Tim Gorman
Reply to  bigoilbob
February 27, 2022 6:51 am

ROFL! The NASA trend doesn’t properly evaluate uncertainty! At best they use the resolution of the measurement device as the uncertainty, at worst they just ignore the uncertainty!

bigoilbob
Reply to  bigoilbob
February 27, 2022 7:37 am

A little expansion:

  1. The trends were evaluated on my antique freeware, available at the publishing date of the scantily cited 2010 paper.
  2. The standard trend errors agree to with a few pptt, whether or not they were done analytically – taking the uncertainties that no one else can match, using them to form an additional variance which is then included in the standard trend error arithmetic, or by merely bootstrapping a couple thousand realizations. I used the (slightly) higher analytical results as a Dr. Frank bone throw.

All easily replicable to non Rorschachian evaluators with a few minutes to spare….

Carlo, Monte
Reply to  bigoilbob
February 27, 2022 8:18 am

Free clue, blob: the standard practice in climastrology of “evaluating and comparing trends” is not an uncertainty analysis.

Reply to  bigoilbob
February 27, 2022 8:41 am

“The standard trend errors agree to with a few pptt,” because the inconvenient published field calibration experiments have been self-servingly ignored.

Your work is right up to the standards of consensus climatology, bob — namely ignore field calibrations, assume all the measurement errors are small and random, and all the means are perfectly accurate.

Hopeless.

Reply to  bigoilbob
February 27, 2022 8:35 am

Your prior comment completely missed the boat, bob. The 1980 through 2005 air temperature measurements increasingly relied on MMTS sensors.

The lower limit of uncertainty is based exclusively on field calibrations of MMTS sensors.

That makes the MMTS lower limit of uncertainty exactly applicable to the 1980-2010 temperature record.

And applicable to the prior record as well, which relied upon LiG thermometers in Stevenson screens that are less accurate than the MMTS.

Your supposed “statistically/physically durable trend” is submerged beneath the uncertainty bounds.

These are obvious analytical points. And are game, set, and match, against your position.

bigoilbob
Reply to  Pat Frank
February 27, 2022 9:14 am

“Your supposed “statistically/physically durable trend” is submerged beneath the uncertainty bounds.”

I already demonstrated that they are not. They are accurately evaluated by me. I think you missed the part about how I used your uncertainties. And true to form, you’re just deflecting into more hand waving instead of actually evaluating the trends, with your uncertainties. Bone throw – you’re obviously capable of doing so, but, for whatever reason, would rather not.

Folks, more Dan Kahan avoidance, or just garden variety wanna be rightism? You be the judge….

Carlo, Monte
Reply to  bigoilbob
February 27, 2022 10:22 am

Who are these “folks” you have a need to appeal to?

Reply to  bigoilbob
February 27, 2022 12:58 pm

I already demonstrated that they are not.

You merely demonstrated the common self-serving neglect of sensor calibration error.

“I used your uncertainties.” Actually, Folland’s assigned uncertainty combined with the MMTS calibration error in H&L.

If you’d used those, as you claim, you’d have produced my result. Because the derived uncertainty can’t average away.

“Folks, more Dan Kahan avoidance...” Projection. A personal failure common among demagogues.

Jim Gorman
Reply to  Pat Frank
February 26, 2022 3:09 pm

It is why records should be stopped and a new one started when a break occurs. One cannot know what the microclimate was, so any “correction” is pure guesswork. If the belief is that the record is not fit for purpose, discard it. To claim that one can use statistics to correct a record when you do not have access to the phenomenon to remeasure it is a false confidence in statistics.

Reply to  Jim Gorman
February 26, 2022 5:26 pm

Agreed, Jim. AGW pseudoscience lives and breathes on false precision.

Tim Gorman
Reply to  Pat Frank
February 27, 2022 5:23 am

Even MMTS measurement devices have uncertainty because of the tolerance of the electronic components used, e.g. 1% resistors. The more 1% components you have from the same production batch the higher the uncertainty will grow since not all random effects will cancel.
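The same-batch point can be illustrated with a toy simulation (a sketch under assumed numbers: 30 nominal components with a 1% tolerance). A fully shared batch offset behaves as a systematic error and sums coherently, while independent part-to-part scatter partially cancels:

```python
import random

random.seed(0)
n_parts, tol = 30, 0.01   # 30 components, 1% tolerance (assumed values)
trials = 1000

def mean_abs_total_error(batch_fraction):
    """Average |sum of component errors| when batch_fraction of the
    tolerance is a shared batch offset and the rest is independent."""
    acc = 0.0
    for _ in range(trials):
        common = random.uniform(-tol, tol) * batch_fraction
        errs = [common + random.uniform(-tol, tol) * (1 - batch_fraction)
                for _ in range(n_parts)]
        acc += abs(sum(errs))
    return acc / trials

independent = mean_abs_total_error(0.0)  # uncorrelated: partial cancellation
same_batch = mean_abs_total_error(1.0)   # correlated: errors add coherently
print(independent < same_batch)
```

The shared-batch case comes out roughly six times larger on average, which is the "not all random effects will cancel" point in numbers.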

Carlo, Monte
Reply to  Tim Gorman
February 27, 2022 7:28 am

Don’t forget “the greatest enemy of the electrical engineer” (as I still remember a professor saying) — temperature. All those little components have temperature coefficients, which are never specified as plus-% or minus-%, but rather plus/minus-%. Meaning you don’t even know the direction in which component values change, and it is impossible to claim they cancel as “random”.

It is all uncertainty.

Reply to  Tim Gorman
February 27, 2022 1:10 pm

You’re dead-on right, Tim.

Lin & Hubbard discuss electronic sensor errors in detail, in their (2004) Sensor and Electronic Biases/Errors in Air Temperature Measurements in Common Weather Station Networks,
doi: 10.1175/1520-0426(2004)021<1025:SAEEIA>2.0.CO;2.

From their abstract, “For the temperature sensor in the U.S. Climate Reference Networks (USCRN), the [RMS] error was found to be 0.2° to 0.33°C over the range −25° to 50°C. The results presented here are applicable when data from these sensors are applied to climate studies and should be considered in determining air temperature data continuity and climate data adjustment models.”

The final sentence further obliterates all of b.o.b.’s fondest misrepresentations.

Tim Gorman
Reply to  bigoilbob
February 27, 2022 5:18 am

“Just that they have been – yes – adjusted, based on our knowledge of them. And it has also been obvious for some time that those “adjustments”, while valid, have nada relevance to the trends now under discussion.”

Hubbard and Lin in 2006 demonstrated that regional adjustments introduce bias.

“Thus, the constant Quayle surface temperature adjustment factors are not applicable for extreme temperature bias adjustments”

This may be an inconvenient truth for you but it is the truth nonetheless. Microclimate conditions vary widely from station to station. That doesn’t make their readings any less accurate. What it means is that trying to force them into a perceived “more accurate” reading is nothing more than adding a subjective bias into the temperature record.

This is what uncertainty is meant to address. You simply cannot reduce uncertainty in the temperature record by adding a subjective bias into the record. You have to *objectively* evaluate the uncertainty and take it for what it is!

“But we hate the known, proven methods of “adjusting” for any systemic errors.”

You can’t adjust for systemic errors if you don’t know what they are. And you simply don’t know on a station-by-station basis what the systemic error is in 1929. All you can do is make a judgement on what the uncertainty is for those stations and then propagate that uncertainty throughout your data set.

“We hand wave and pronounce the resulting trends statistically indurable without doing the work to actually find out.”

Statistics simply can’t tell you what you don’t know and can never know. Pretending you can is nothing more than adding subjective bias into the data record.

Reply to  bdgwx
February 25, 2022 11:31 am

“… the uncertainty of an average is less than the total uncertainty of the individual measurements that went into it.”

Nothing to do with instrumental resolution or with deterministic non-iid systematic error.

Second, if you disagree then why are you using him as a source?

Because my paper is about systematic measurement error. Not about instrumental resolution.

Bevington is entirely appropriate for assessing that error.

I don’t disagree with Bevington at al. Acknowledging that he doesn’t deal with instrumental resolution isn’t disagreement.

bdgwx
Reply to  Pat Frank
February 25, 2022 12:30 pm

PF: “Bevington is entirely appropriate for assessing that error.”

Then why not use the procedure defined in Bevington section 3.2 exactly the way it is written? Why use the formula sqrt[N * σ / (N-1)], which is inconsistent with that procedure and with any other method in Bevington for propagating uncertainty?

BTW…I still have no idea where you got your equation 9. It doesn’t show up anywhere in Bevington that I can see.

Carlo, Monte
Reply to  bdgwx
February 25, 2022 12:46 pm

10 GOTO 10

Tim Gorman
Reply to  bdgwx
February 25, 2022 1:03 pm

You are your own worst enemy!

sqrt[N * σ / (N-1)]  is nothing more than the root-sum-square of the individual uncertainties. It is *NOT* the standard deviation of the mean.

In fact N/(N-1) is GREATER THAN 1. So multiplying σ by a value greater than 1 makes it larger!

And exactly what do you think the sqrt(σ) gives you? You wind up with the sqrt of the sqrt of the variance. Huh?

bdgwx
Reply to  Tim Gorman
February 25, 2022 2:19 pm

TG said: “sqrt[N * σ / (N-1)] is nothing more than the root-sum-square of the individual uncertainties.”

First…typo. My bad. That should have been sqrt[N * σ^2 / (N-1)].

But, no that’s not even the root-sum-square formula. The RSS formula when the individual uncertainties are the same is sqrt[N * σ^2].

Tim Gorman
Reply to  bdgwx
February 25, 2022 3:43 pm

So what? N/(N-1) is *still* greater than 1!!

The uncertainty is going to grow whether you have σ or σ^2.

Thus you do *NOT* decrease average or total uncertainty with more observations!

Reply to  bdgwx
February 25, 2022 10:30 pm

Paper eqn 9 is Bevington eqn 1.9 with (x_i – x_bar)² = (σ_bar)’² because Folland’s measurement uncertainty is a constant across every instance.

Recognize that (x_i – x_bar)² is the variance of each measurement about a mean. So is Folland’s (σ_bar)’² as purported.

Bevington eqn 3.13 reduces to eqn 1.9 with two combined uncertainties and with the correlation term excluded because the uncertainty is a constant guesstimate.

bigoilbob
Reply to  bdgwx
February 25, 2022 5:59 am

Total sidebar. If you want “boot heel values” in St. Charles, vote for Bob Onder for your executive. I know nada about you or your pol views, but I’m guessing that you don’t want a Hawley clone (not) running the show.

Do the right thing, and I’m willing to consider any carbon based life form you recommend who is willing to primary Cori Bush. Her reply to my letter to her on the 45q give aways championed by David Middleton was depressingly anodyne…

Tim Gorman
Reply to  bdgwx
February 25, 2022 9:55 am

“But…the uncertainty of the average of several measurements can be less than the resolution of the instrument used to provide those measurements and is less than the total uncertainty of the individual measurements.”

Malarky! Like Dr. Frank said not *all* random error will cancel, even when you are measuring the same thing with the same instrument.

When you divide the  σ by sqrt(N) you are determining the AVERAGE UNCERTAINTY VALUE FOR EACH ELEMENT IN THE DATASET. That is *NOT* the same thing as the total uncertainty obtained when the measurements are combined into a total.

If you average 20 +/- 0.2 and 40 +/- 0.4 then the average uncertainty becomes +/- 0.3. But that is *NOT* the total uncertainty of the combined measurements. That total uncertainty is +/- 0.6. You can get that by adding 0.2 and 0.4 = 0.6 or by multiplying the average uncertainty by two -> 2 * 0.3 = 0.6.

Assume those are two boards you are putting together to form a beam to span a foundation. The uncertainty of the total length is *NOT* +/- 0.3. It is +/- 0.6. If you put enough boards together you might do a root-sum-square instead of direct addition but the overall uncertainty *will* still GROW, just at a slower rate!

As for Bevington, his equation 4.12 is the same thing you can’t seem to understand from Taylor. The standard error of the mean is how precisely you have calculated the estimate of the mean. It is *NOT* the same thing as the total uncertainty propagated from combining all of the data elements.

If you have a pile of 2’x4′ boards in your backyard you can pull some out of the pile (your sample) and use them to estimate the average length of all of the boards. The more samples you pull the more precisely you can calculate the average length of the boards, i.e. the smaller the standard deviation of the calculated mean. BUT! When you are combining some of the boards to build a beam spanning a distance you cannot use that standard deviation of the mean to determine your uncertainty in what the final length of the beam will be. The uncertainty of the final length is the *sum* of the uncertainties for the boards. It is *not* the uncertainty of the mean!
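Tim’s two-board example in numbers (a sketch restating the figures from the comment; the root-sum-square line is the alternative the comment itself mentions for many boards):

```python
# Two board-measurement uncertainties from the example above
u1, u2 = 0.2, 0.4

avg_u = (u1 + u2) / 2               # per-board average uncertainty
total_direct = u1 + u2              # direct addition for the two-board beam
total_rss = (u1**2 + u2**2) ** 0.5  # root-sum-square alternative

print(round(avg_u, 1), round(total_direct, 1), round(total_rss, 3))
# 0.3 0.6 0.447
```

Either way the combined uncertainty of the beam exceeds the per-board average, which is the point being argued.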

bdgwx
Reply to  Tim Gorman
February 25, 2022 11:31 am

TG said: “When you divide the  σ by sqrt(N) you are determining the AVERAGE UNCERTAINTY VALUE FOR EACH ELEMENT IN THE DATASET.”

Bevington derives that formula from the general propagation of error formula 3.14.

TG said: “As for Bevington, his equation 4.12 is the same thing you can’t seem to understand from Taylor.”

Bevington 3.14 and Taylor 3.47 are the exact same partial differential method. Bevington 4.12 can be derived from either Bevington 3.14, Taylor 3.47, or GUM 10. All statistics texts agree that when you run a bunch of measurements through a function that computes the average, the shortcut formula is σ_avg = σ / sqrt(N). You can also derive this shortcut formula via Taylor 3.18, which you said was your preferred method. The reason you don’t get σ_avg = σ / sqrt(N) when you do it is because you keep making arithmetic mistakes.

Maybe you can do me a favor. Use Taylor 3.18 (your preferred method) and convince Pat Frank that the propagation of uncertainty through an averaging function is σ_avg = σ / sqrt(N). Just make sure you do the arithmetic correctly this time.
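The σ / sqrt(N) shortcut can at least be checked by simulation for the idealized case it assumes — purely random, independent, identically distributed errors (a sketch; whether real temperature errors meet those assumptions is exactly what is in dispute in this thread):

```python
import random
import statistics

random.seed(42)
sigma, n, trials = 0.5, 25, 20000  # illustrative values, not thread data

# Spread of the average of n independent, purely random errors
avgs = [statistics.fmean(random.gauss(0, sigma) for _ in range(n))
        for _ in range(trials)]
observed = statistics.stdev(avgs)
expected = sigma / n ** 0.5  # the sigma/sqrt(N) shortcut: 0.1

print(round(observed, 2), expected)
```

The simulated spread lands on 0.1, matching the shortcut — but only because every error here is independent and random by construction.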

Carlo, Monte
Reply to  bdgwx
February 25, 2022 12:29 pm

Hypocrite! When confronted with applying your fav formulas to the temperature sampling problem, you run away under the cover of some vague claims about “correlation” that supposedly invalidate your fav formulas.

Tim Gorman
Reply to  bdgwx
February 25, 2022 1:19 pm

“Bevington derives that formula from the general propagation of error formula 3.14.”

So what? It is *still* the average value of all the uncertainties!

“Bevington 3.14 and Taylor 3.47 are the exact same partial differential method.”

And you don’t understand either one of them!

(σ_x)^2 = (σ_u)^2 (ẟx/ẟu)^2 + (σ_v)^2 (ẟx/ẟv)^2 + …

In this case u and v represent two different measurements.

If these measurements are linear in nature then (ẟx/ẟu) and (ẟx/ẟv) will equal 1. (e.g. x = u + v)

Then (σ_x)^2 = (σ_u)^2 + (σ_v)^2

The standard root-sum-square version of adding uncertainties.

Do it for multiple measurements, u,v,w,y,z,….

you’ll still get

(σ_x)^2 = (σ_u)^2 + (σ_v)^2 + (σ_w)^2 + (σ_y)^2 + (σ_z)^2

Still root-sum-square for the total uncertainty — NOT THE STANDARD DEVIATION OF THE MEAN and NOT THE AVERAGE UNCERTAINTY OF EACH INDIVIDUAL ELEMENT!

σ_avg = σ / sqrt(N) gives you the AVERAGE uncertainty per data element. It does *NOT* give you total uncertainty for the entire data set. It’s nothing more than mental masturbation.

If you already know all the uncertainty values, which you must know if you are going to calculate their average, then you’ve already pretty much calculated the final uncertainty! Either use the direct sum or take its square root, your choice.

Jim Gorman
Reply to  Tim Gorman
February 25, 2022 2:01 pm

Excellent! Getting tiresome having to explain physical measurements to statisticians who have no experience in making physical measurements.

I’ve recently done all the finish carpentry in the house. You think all walls have 90° corners, ha. All walls are vertical, ha. Trying to miter the cuts with minimal gaps makes you understand uncertainty.

Tim Gorman
Reply to  Jim Gorman
February 25, 2022 2:32 pm

But if you know the *average* value of all the ceiling-to-floor heights then shouldn’t all the heights be equal to the average? If you know the average value of the uncertainty of all your ceiling-to-floor heights then shouldn’t the average of all those heights be the uncertainty in all your measurements? You should be able to cut all the trim boards to the same length and have them all fit, right?

Jim Gorman
Reply to  Tim Gorman
February 25, 2022 2:39 pm

Ha, bwaaa, ha, ha!

bdgwx
Reply to  Tim Gorman
February 25, 2022 6:59 pm

TG said: “If these measurements are linear in nature then (ẟx/ẟu) and (ẟx/ẟv) will equal 1. (e.g. x = u + v)”

And if x = (u + v) / 2 then (ẟx/ẟu) and (ẟx/ẟv) will be 1/2.

Remember x = u + v is the sum while x = (u + v) / 2 is the average. I really don’t mean for this to be condescending but since you have conflated the two before it is important for me to point this out again.

Carlo, Monte
Reply to  bdgwx
February 25, 2022 9:00 pm

I really don’t mean for this to be condescending

Liar. Can someone that makes up fake data be trusted?

Tim Gorman
Reply to  bdgwx
February 26, 2022 1:49 pm

I simply don’t know how to get this through to you. The average uncertainty is merely an artificial way to equally distribute the total uncertainty across all the data elements making up the data set.

If you have just two data points, one with 0.4 uncertainty and one with 0.6 uncertainty then the average uncertainty is 0.5.

0.5 * 2 = 1.0. The exact same thing as 0.4 + 0.6 = 1.0

The total uncertainty in both cases is 1.0. All you have done is mask the variance of the uncertainty by making each element have the same uncertainty value. Mathematical masturbation. The average uncertainty is useless when it masks the characteristics of the population such as variance.

Let’s say you are the engineer designing the big o-ring on the space shuttle keeping the fuel and oxidizer separated. You have several technicians dispatched to different places around the country to measure the thickness of the various o-rings and get the uncertainty data set

0.2, 0.3, 0.6, 0.7, 0.8

The average uncertainty is 0.52.

You run to your boss and say the average uncertainty of these o-rings is 0.52, well within safety requirements.

One day your boss calls you in, says you’re fired, and armed guards escort you off the premises.

Do you have even an inkling as to what happened?
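The numbers in the example (a sketch; note the mean of those five values works out to 0.52, and the mean is not what matters to the o-ring that fails):

```python
import statistics

# The five technicians' o-ring uncertainties from the example
u = [0.2, 0.3, 0.6, 0.7, 0.8]

print(round(statistics.fmean(u), 2))  # 0.52 -- the average hides the spread
print(max(u))                         # 0.8 -- the worst o-ring, which fails
```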

bdgwx
Reply to  Tim Gorman
February 26, 2022 8:40 pm

TG said: “I simply don’t know how to get this through to you.”

The best way to convince me is to demonstrate that Bevington, Taylor, GUM, NIST, Monte Carlo simulations, and decades of texts and experiments concerning propagation of uncertainty are all wrong.

Tim Gorman
Reply to  bdgwx
February 27, 2022 5:35 am

I’ve demonstrated what Bevington shows. Taylor is no different.

See the attached photo of Bevington. There is no division by “n” anywhere in the equation.

Again, the “average” uncertainty merely spreads the total uncertainty across all data elements making the variance of the uncertainty zero.

The GUM is exactly the same. The GUM has the same equation that Bevington and Taylor have. Nor are there any decades of texts and experiments that prove any differently.

What there *IS* is just you misunderstanding basic physical metrology. Statistical masturbation that changes the basic characteristics of the underlying data is either due to ignorance or fraud; take your pick as to which applies to you.

bev1.jpg
Carlo, Monte
Reply to  bdgwx
February 23, 2022 7:31 am

Idiot.

Reply to  Carlo, Monte
February 23, 2022 4:22 pm

Please give bdgwx the respect he deserves in the future !
Call him “Mister Idiot”

Carlo, Monte
Reply to  Richard Greene
February 23, 2022 6:38 pm

Noted.

Reply to  lee
February 23, 2022 3:56 pm

Anyone who has taken a science course
knows three decimal places or more is real science.
Anything less is baloney.

Paul Penrose
Reply to  Richard Greene
February 24, 2022 9:57 am

What’s really sad is how many “scientists” there are that don’t realize you were being sarcastic.

Steve Case
February 22, 2022 10:30 pm

But here are some things that can be done:
__________________________________

I realize the scope of this particular edition of “The Greatest Scientific Fraud Of All Time — Part XXX” only covers the temperature record, but there’s a lot more than just that. Trump pulling out of the Paris Agreement was a very good step in the right direction.
How ’bout cancelling subsidies and funding for the windmills and solar panels, and forcing NPR and PBS to do a 180 on the climate nonsense?
Resurrect Harry and Louise who did the series on Hillary’s health care plan dissect the climate charade bit by bit; polar bears, hurricanes, tornadoes, floods, fires, heat waves, glaciers, sea ice, ice caps, sea level rise, methane, ocean pH, scientific corruption — feel free to add to the list.

AndyHce
Reply to  Steve Case
February 22, 2022 10:39 pm

Your suggestions are akin to asking that some non-believer outsiders be brought in to audit, analyze, and reformulate the Pope’s decrees to better … whatever. Chances are nil.

Tom Abbott
Reply to  Steve Case
February 23, 2022 2:00 am

They could ask James Hansen why he once said 1934 was the hottest year, yet now says it is not. Was Hansen lying then, or now? (Hint: now.)

Derg
Reply to  Steve Case
February 23, 2022 3:36 am

“ How ’bout cancelling subsidies and funding for the wind mills and solar panels – Forcing NPR and PBS to do a 180 on the climate nonsense. ”

For all the good that Trump did this is one area he let us all down on. But sadly it is very difficult to get rid of subsidies. 😣

joe
Reply to  Steve Case
February 23, 2022 6:21 am

Great list, and here is one more: stop diluting gasoline with ethanol.

Drake
Reply to  joe
February 23, 2022 8:11 am

I would like to require all gas pumps to show, based on a standard vehicle MPG EPA estimate, a listing for MPG of each octane value with and without ethanol.

Then require the pump to provide $ per mile for the fuels provided, you know, like the feds require stores to show $ per oz. lb. etc. for foodstuffs.

The same for diesel and biodiesel. ALL diesel MAY have up to 5% biodiesel, but require the individual pumps to list the actual volume of biodiesel in the mix.

I usually buy my diesel at Costco, but this last go-round I filled up at a Smith’s. I “seemed” to get a higher MPG from the Smith’s fuel over a 250-mile drive. Hard to base anything on this since it is winter. With the politics of Costco management, I would assume their fuel has the most “good stuff” possible. I KNOW that I get MUCH WORSE mileage with biodiesel #5 than standard D #2. I will never put that crap in my truck again. The winter after I mistakenly put #5 in my truck, I had to remove the fuel filter, clean the gunk out, and add cold weather additive to the fuel to get the truck running right. I have not had that problem since, and it is around 0°F now where I am. So: 2 winters, no problem; 1 winter after biodiesel, problem.

Clyde Spencer
Reply to  Drake
February 23, 2022 1:07 pm

Longer chain hydrocarbon molecules inherently contain more energy. Adding shorter chain ethanol reduces the average energy content of the fuel and hence gas mileage. But, I suspect the difference with available grades might be less than the effect of ambient temperature, especially in the Winter.

Mike Jonas(@egrey1)
Editor
February 22, 2022 10:41 pm

There’s a serious risk that uninvolved people might think that the ONLY scientific fraud is the manipulation of the temperatures, when the simple fact is that the whole of climate science is the real fraud, and the temperature manipulation is, by comparison, a tiny fringe issue. Yes, it’s massively important and probably one of the worst scientific frauds in its own right, but it is a mere minnow (bacterium?) beside the whole of climate science.

Reply to  Mike Jonas
February 23, 2022 12:21 am

I remember a previous occasion when WUWT got excited about this “worst scientific fraud”. GWPF was really going to do something about it. The Telegraph was onto it too. They announced:
“An international team of eminent climatologists, physicists and statisticians has been assembled under the chairmanship of Professor Terence Kealey, the former vice-chancellor of the University of Buckingham. Questions have been raised about the reliability of the surface temperature data and the extent to which apparent warming trends may be artefacts of adjustments made after the data are collected.”

Of course, nothing ever happened.

Reply to  Nick Stokes
February 23, 2022 1:21 am

….indicated evidence of a human ancestor living 500,000 years ago. They announced their discovery at a Geological Society meeting in 1912. For the most part, their story was accepted in good faith.

However, in 1949 new dating technology arrived that changed scientific opinion on the age of the remains. Using fluorine tests, Dr Kenneth Oakley, a geologist at the Natural History Museum, discovered that the Piltdown remains were only 50,000 years old….

That fraud lasted 37 years…while ‘nothing ever happened’ …

“From 1934 to 1940, under Lysenko’s admonitions and with Stalin’s approval, many geneticists were executed (including Izrail Agol, Solomon Levit, Grigorii Levitskii, Georgii Karpechenko and Georgii Nadson) or sent to labor camps. The famous Soviet geneticist and president of the Agriculture Academy, Nikolai Vavilov, was arrested in 1940 and died in prison in 1943.

In 1936, the American geneticist Hermann Joseph Muller, who had moved to the Leningrad Institute of Genetics with his Drosophila fruit flies, was criticized as a bourgeois, capitalist, imperialist, and promoter of fascism, so he left the USSR, returning to America via Republican Spain. In 1948, genetics was officially declared “a bourgeois pseudoscience”. Over 3,000 biologists were imprisoned, fired, or executed for attempting to oppose Lysenkoism, and genetics research was effectively destroyed until the death of Stalin in 1953. Due to Lysenkoism, crop yields in the USSR actually declined.”

That fraud lasted 20 years while ‘nothing ever happened’.

Note the typical Marxist use of cancel culture to stifle the opposition…

In the 1700s, French naturalist Georges-Louis Leclerc, Comte de Buffon estimated an Earth age of about 75,000 years, while acknowledging it might be much older. And geologists of the 19th century believed it to be older still — hundreds of millions of years or more — in order to account for the observation of layer after layer of Earth’s buried history. After 1860, Charles Darwin’s new theory of evolution also implied a very old Earth, to provide time for the diversity of species to evolve. But a supposedly definitive ruling against such an old Earth came from a physicist who calculated how long it would take an originally molten planet to cool. He applied an age limit of about 100 million years, and later suggested that the actual age might even be much less than that. His calculations were in error, however — not because he was bad at math, but because he didn’t know about radioactivity.

Radioactive decay of elements in the Earth added a lot of heat into the mix, prolonging the cooling time. Eventually estimates of the Earth’s age based on rates of radioactive decay (especially in meteorites that formed around the same time as the Earth) provided the correct current age estimate of 4.5 billion years or so.

So that was a pretty long time while the accepted scientific answer’ was in fact wrong.

And nothing ever happened.

Models are only as good as their assumptions, and the assumption that the only significant parameters were the molten core and radiative cooling provided an answer that was out by 4.4 billion years, or almost 98%…

Maybe Nick, history is doomed to repeat itself for those who do not learn it, and maybe the reason why ‘nothing ever happened’ was that people were paid handsomely not to report it.

We all know that the whole IPCC/ClimateChange hypothesis hinges on one particular assumption, namely that all late 20th century climate variation whose ’cause’ cannot be otherwise established by simple linear models, is down to the increase in anthropogenic CO2, and if the radiation equations don’t match the rather steeper rise from 1980 to 2000, then positive feedback must be happening.

But they went looking for that years ago, and ‘nothing ever happened’

The alternative hypothesis that, like Lord Kelvin’s age of the Earth estimate, something else was going on they didn’t know anything about, was entirely dismissed from any consideration.

Why?

Cui Bono, Nick Stokes, Cui Bono?

Reply to  Leo Smith
February 23, 2022 2:30 am

“We all know that the whole IPCC/ClimateChange hypothesis hinges on one particular assumption, namely that all late 20th century climate variation whose ’cause’ cannot be otherwise established by simple linear models, is down to the increase in anthropogenic CO2, and if the radiation equations don’t match the rather steeper rise from 1980 to 2000, then positive feedback must be happening.”

Leo, the notion that something must be so because we can’t think of anything else should be like a red rag to a bull for anyone with the least amount of scientific curiosity. I have a piece on the TCW Defending Freedom blog which suggests a warming mechanism which is anthropogenic, would have begun to make a major difference from the early 20th century, explains Tom Wigley’s blip and shows how the mechanism is currently causing anomalies in temperatures around the world.

Briefly: some areas are warming very much faster than climate science can explain — for example, certain lakes and seas are warming at two to three times the global average. There are ways to stop those from continuing to heat up, but for TCW I’ve approached it from the opposite direction, looking for ways that we could warm the world if a cooling trend resumed and got out of hand, then pointing out that we are already carrying out those warming measures inadvertently.

Look at

http://www.youtube.com/watch?v=J0FRqrI__D0

www.conservativewoman.co.uk/cold-comfort/

Briefly, oil/surfactant/lipid smoothing of water surfaces warms by lowering albedo and reducing low level stratus — other reasons like reduced evaporation etc. Lipid increases because we are dumping nutrients into the water from sewage and agricultural run-off. The latter also feeds in the dissolved silica that diatoms, a very efficient lipid producer, need.

Do you still have my email? it’s the same.

JF
Unlike most of climate science this is testable. Lord Rayleigh and Benjamin Franklin are on my side.

commieBob
Reply to  Leo Smith
February 23, 2022 5:30 am

Excellent.

When I was a pup, we were taught that there were scientific frauds and fallacies in other places and in the past. The implication was that we were so much wiser, and it couldn’t happen here.

But it was happening here. Ancel Keys’ focus on dietary fat as the cause of an epidemic of heart disease, and his suppression of science pointing to sugar as the true villain, probably killed millions of people.

Cui bono? Evidence points to the sugar industry.

Then we have the statement, by a former editor of the British Medical Journal that it’s …

time to assume that health research is fraudulent until proven otherwise.

link

I’m guessing that the pandemic has resulted in enough fraudulent research to sink a battleship.

So, when it comes to fraudulent science, there’s nothing special about us.

When you have something like climate science, which is highly politicized, many scientists will yield to the temptation to cook up results. So, overwhelmingly, we have to assume fraud.

Frank from NoVA
Reply to  Leo Smith
February 23, 2022 9:20 am

Excellent!

And the physicist referenced here…

“But a supposedly definite ruling against such an old Earth came from a physicist who calculated how long it would take an originally molten planet to cool.”

…was no other than Lord Kelvin – the classic ‘appeal to authority’, and now the primary ‘go to’ tactic of alarmists of all stripes (climate, public health, etc.).

Clyde Spencer
Reply to  Leo Smith
February 23, 2022 2:34 pm

The lesson seems to be that when politicians, or even scientists with unassailable credentials, get involved, real science loses out. That is sort of the opposite of ad hominem attacks, or ‘Defenders of the Faith’ insisting that only ‘real’ climatologists publishing in pay-walled journals have anything to say worth listening to.

Reply to  Clyde Spencer
February 23, 2022 4:32 pm

Science + politics = politics

LdB
Reply to  Nick Stokes
February 23, 2022 5:16 am

It’s climate science: things like truth and accuracy are not important; as long as everything supports emission control, it gets the nod. The fact that emission control is never going to happen adds to the humour of the field.

jeffery p
Reply to  Nick Stokes
February 23, 2022 6:49 am

“Nothing ever happened”

Nice deflection.

I suppose you should pat yourself on the back for nothing happening.

Graemethecat
Reply to  jeffery p
February 23, 2022 9:04 am

Nothing has happened YET.

Truth is the daughter of time.

Reply to  Nick Stokes
February 23, 2022 4:30 pm

A lot of numbers in the average temperature statistic are wild guesses.
Infilled numbers that can never be verified.
Still true today.
But especially true before 1920.

william Johnston
Reply to  Mike Jonas
February 23, 2022 7:20 am

It is called distraction. Illusionists do it all the time.

Reply to  Mike Jonas
February 23, 2022 4:27 pm

Your comment is baloney.
There is real climate science
It involves data
The study of the present and past climates

The always wrong wild guess predictions
of the future climate are not science
They are data free.
There are no data for the future.
Just unproven theories
and wild guess speculation.
But that IS NOT SCIENCE.

Eric Worrall(@eworrall1)
Admin
February 22, 2022 10:41 pm

I wonder if insisting that all original measurements be included inside the error bars would work? That way, if the adjustments get really out of hand, you’d get an embarrassingly large error bar. The estimated error of the original measurement should also be included, of course.
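Worrall’s proposal can be stated as a simple rule: the error bar on each adjusted value must be wide enough to contain the original raw reading plus that reading’s own estimated error. A minimal sketch; the function name and signature are my own invention:

```python
def error_bar(adjusted, raw, raw_uncertainty):
    """Half-width of the error bar on an adjusted value, per the rule that
    the bar must always cover the raw reading plus its estimated error."""
    return abs(adjusted - raw) + raw_uncertainty

# A large adjustment forces a correspondingly large bar: shifting a 14.0 C
# reading to 15.0 C with 0.5 C raw uncertainty yields a 1.5 C half-width,
# versus 0.5 C when nothing was adjusted.
```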

Reply to  Eric Worrall
February 23, 2022 12:38 am

Adjustments are not usually made because measurements were in error. They are made because something changed which was not a reflection of temperature change in the region.

A classic case is that of Cape Town, frequently bandied about here. There was a very long record at the former Royal Observatory, in town by the sea. But in 1961, the government decreed that the new airport would be used for official CT temperatures, and GHCN followed suit. But the Observatory continued to report as a GHCN station.

The situation is shown in this graph (from here).

[graph: Cape Town GHCN V3 record — unadjusted (red), adjusted (green), Observatory (blue)]

The red curve is the GHCN V3 unadjusted, the green is adjusted. There is a big deviation in 1961. But up to that year, the red is readings from the Observatory, and that continued as the blue curve. So you can see that the airport really is consistently cooler, as measured. The change in 1961 did not come from a climate change, but from an administrative change.

The GHCN adjusted series corrected for this by extending back, allowing for this 1.5°C difference in observed temperature. The convention with adjustments is that you take the current situation as given and make any adjustments backwards. As you can see, the adjusted series tracks the continuous Observatory record, but 1.5°C cooler. Either is a valid representative of the region, which is what is really wanted. Anomalies sort out the difference.
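The backward-adjustment convention Stokes describes (take the current station as given, estimate the offset from the overlap period, and shift the earlier segment) can be sketched as follows. The series and the 1.5 °C offset are invented flat values for illustration; this is not the actual Cape Town data:

```python
import numpy as np

# Invented annual means, deg C -- NOT the actual Cape Town record.
years_obs = np.arange(1950, 1971)           # Observatory record, 1950-1970
obs = np.full(years_obs.size, 17.0)         # warm coastal site
years_air = np.arange(1961, 1971)           # airport record starts 1961
air = np.full(years_air.size, 15.5)         # consistently cooler site

# Estimate the station offset from the overlap period (1961-1970)
overlap = np.isin(years_obs, years_air)
offset = obs[overlap].mean() - air.mean()   # 1.5 C here

# Take the current station as given; shift the pre-1961 segment down
adjusted = np.concatenate([obs[~overlap] - offset, air])
```

The spliced series preserves the shape of the continuous record while sitting at the current station’s level, which is why anomalies from either series come out the same.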

Eric Worrall(@eworrall1)
Admin
Reply to  Nick Stokes
February 23, 2022 12:51 am

I understand the rationale, nevertheless when you see an adjustment vs raw series which looks like a hockey stick it naturally raises suspicions about whether the adjustment process is flawed in some way.

[graph: USHCN final vs. raw temperature differences]

https://wattsupwiththat.com/2020/11/03/recent-ushcn-final-v-raw-temperature-differences/

I think including the raw temperatures inside the scope of the error bars seems a rational way of resolving the situation.

Reply to  Eric Worrall
February 23, 2022 1:11 am

Another post that doesn’t seem to realise that USHCN was replaced (by ClimDiv) eight years ago. These averages were not calculated by NOAA.

In the old USHCN, the main difference between raw and final was TOBS, based on actual records of time of observation. ClimDiv does it differently, but probably with similar results. Time of observation matters; it is known to have changed, and you have to adjust for that.

Derg
Reply to  Nick Stokes
February 23, 2022 3:40 am

Why is there no hockey stick with sea measurements 🤔

Clyde Spencer
Reply to  Nick Stokes
February 23, 2022 2:56 pm

You don’t have to adjust for it. You could just work with what you have and acknowledge that the data have limitations. How are temperatures handled that are in immediate proximity, but on opposite sides of a time zone boundary? How are temperatures handled that are on opposite sides of a time zone? Are all readings converted to solar time? How is the situation of a cold front passing through a region between readings handled when the stations used for homogenization aren’t impacted by the cold front? Are such transient events just ignored in correcting what seems to be bad data?

esalil
Reply to  Nick Stokes
February 23, 2022 3:06 am

At the end of the 19th century the difference is much more than 1.5C. So, the past has been made cooler. How come?

Reply to  esalil
February 23, 2022 4:50 pm

Outside of the US, Europe, and the east coast of Australia,
there were few land weather stations pre-1900.
Ocean surface measurements were even worse.
Little data from the Southern Hemisphere.
No useful temperature numbers for science until the use of
satellite data in 1979, with only 5% infilling required
over both poles.

DaveS
Reply to  Nick Stokes
February 23, 2022 5:11 am

Or we could just accept that we don’t have a continuous record and stop trying to knit discontinuous data sets together.

Reply to  DaveS
February 23, 2022 7:49 am

We did have a continuous record, at the Observatory.

Jim Gorman
Reply to  DaveS
February 23, 2022 8:08 pm

Yes. The nail went in with one swing!

LdB
Reply to  Nick Stokes
February 23, 2022 5:30 am

That is actually a very very funny graph, I cracked up at it and it is very instructive of the problem with doing that junk and why in real sciences you can’t do that shit.

So Nick, what caused the warming between what looks like 1850 and 1950, well before the hockey stick takeoff of CAGW 🙂

I will give you a much, much more likely answer: the land use around the site was changing heavily over the years. That is the problem with altering historic data; you have absolutely no idea what is going on, and you are making adjustments based on guesses. But hey, it’s the norm for Climate Science™.

4 Eyes
Reply to  LdB
February 23, 2022 12:57 pm

Why the warming rate from 1900 to 1940 is the same as the rate from the late 1970s to the present is a question that has been asked many times and needs answering. No one has yet offered a rigorous explanation, just waffle. I might even soften my stance on AGW by CO2 if a proper answer were forthcoming, but in following this for 20 years nothing remotely resembling a physical explanation has been put forward.

Smart Rock
Reply to  Nick Stokes
February 23, 2022 9:43 am

Nick: I get what you said. But why have the pre-1900 readings been adjusted downwards?

From rough scaling on your graph, the raw numbers from about 1900 to 1961 are all adjusted downwards by about 1.1°, to correct for the 1961 discontinuity. Fine. But the downward adjustment increases progressively going back from 1900 to about 1858, when it’s a whopping 2.0° down.

It looks like a bad example to use as a demonstration of how rigorous and objective the adjustments have been.

You brought up the Cape Town record. Do you know something about 19th century Cape Town that would have led to introducing a 0.22°/decade warming trend that wasn’t in the original record?

The cynic in me says that was just helping to cool the pre-industrial past in order to further dramatize anthropogenic warming. But, if you do know something definitive, please share it with us.

Reply to  Smart Rock
February 23, 2022 11:36 am

“Do you know something about 19th century Cape Town…”

No. But the fact that I don’t know it and you don’t doesn’t mean that the adjustment isn’t justified. 

In fact a lot of adjustments at about that time came from the introduction of Stevenson screens, which cut out spurious radiant warming.

Carlo, Monte
Reply to  Nick Stokes
February 23, 2022 12:34 pm

What are the uncertainties of these “adjustments”?

Smart Rock
Reply to  Nick Stokes
February 23, 2022 1:33 pm

Very true, and I only raise questions rather than make accusations.

I don’t think the introduction of a Stevenson screen at Cape Town was something phased in over 40 years – it would have surely led to a visible discontinuity in the series. So there’s probably some other reason, that should be documented somewhere, and if it’s not, that would be a bit suspicious in itself. After all, I’m pre-disposed to be suspicious about anything coming from the climate orthodoxy; there’s too much politics and too much money involved.

Nick, your comments at WUWT are unfailingly polite, literate and usually accompanied by some evidence. I hope to see you here more often in the future.

Clyde Spencer
Reply to  Smart Rock
February 23, 2022 2:59 pm

He has the patience of a saint, and the ethics of a sophist.

Clyde Spencer
Reply to  Nick Stokes
February 23, 2022 2:45 pm

If an original measurement is not correct, then it is in error. If it is a random error, such as from transcription of field notes, then it probably should be dealt with in the same manner as extreme outliers — delete it. If it is a systematic error, which is repeated numerous times, and is easy to miss, then it probably should be incorporated into the formal uncertainty assessment for the entire data series.

A good test is whether the adjusted numbers show a very different shape or slope from the raw data. That should be a red flag that the data are of poor quality.
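Spencer’s red-flag test is easy to automate: fit a straight line to the raw and adjusted series and compare slopes. A sketch with invented numbers and an arbitrary threshold:

```python
import numpy as np

years = np.arange(1900, 2000)
raw = 15.0 + 0.002 * (years - 1900)       # invented raw series: 0.02 C/decade
adjusted = 14.5 + 0.010 * (years - 1900)  # invented adjusted: 0.10 C/decade

slope_raw = np.polyfit(years, raw, 1)[0]  # deg C per year
slope_adj = np.polyfit(years, adjusted, 1)[0]

# Red flag when adjustment materially changes the long-term trend
TOLERANCE = 0.001                          # deg C/yr, arbitrary for illustration
red_flag = abs(slope_adj - slope_raw) > TOLERANCE
```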

Reply to  Clyde Spencer
February 23, 2022 4:53 pm

The character of the government bureaucrats INVOLVED tells you whether or not you should trust their data. YOU SHOULD NOT TRUST SURFACE TEMPERATURE DATA !

Clyde Spencer
Reply to  Richard Greene
February 24, 2022 1:15 pm

I would rather have poor quality data — even just order of magnitude — than no data at all. The issue is that those using the data should acknowledge the true uncertainty of the raw data and not pretend that their largely subjective manipulations actually improve the accuracy or precision of the raw empirical measurements.

Reply to  Nick Stokes
February 23, 2022 4:46 pm

A total dingbat comment.
If the measurements are not in error,
then there is no need for adjustments.

It would be an error to change the location
of a specific weather station without
a period of time to see if the old location
and new location were both
registering the same temperatures.
If the new station location fails to replicate
measurements at the old station location,
it will be creating an error in the global
average temperature dataset.
That error should be corrected.

Although there should have been
a very good reason to move the
weather station to begin with.
The first question is whether the new location
will provide more accurate measurements
than the old location. If not, why the move?

Reply to  Richard Greene
February 23, 2022 4:59 pm

“That error should be corrected.”
Exactly. And that is just what they are doing.

In the Cape Town example, there is no dispute about the accuracy of measurement at the respective stations, nor about their suitability as representative of the region. What is not representative is the drop of 1.5°C in going from one to the other in 1961. That is an artefact of an administrative decision, and needs to be corrected.

As to “why the move”, historically temperatures were not measured for the needs of climate science. Choices were made for other reasons. In this case, the airport station was likely to be more reliably maintained (and it was). That doesn’t change the fact that it was a cooler location.

Jim Gorman
Reply to  Richard Greene
February 23, 2022 8:16 pm

All temperature stations measure the microclimate surrounding them. Microclimates can change for many different reasons. A windbreak a half mile away may grow higher and lower the prevailing wind speed. The grass underneath can change. On and on. The criterion is whether the data is usable as is or not. If it is unfit, discard it.

The rationale of needing LONG DATA RECORDS is anathema to well known and followed scientific methods. It leads one to do things like made up “corrections”.

Tim Gorman
Reply to  Richard Greene
February 24, 2022 10:43 am

With measurement uncertainty of 0.5C or more for each station how do you determine which one is right? The old station or the new station? You are assuming that the old station has to be the one that is incorrect. On what basis do you justify that assumption?

Measurement stations, even new ones, are affected by the ground cover below the station. If the new station is over gravel/concrete it will read differently than an old one sited over bermuda grass. Which one is giving the “correct” reading?

Jim Gorman
Reply to  Richard Greene
February 24, 2022 5:50 pm

Thermometers measure the microclimate surrounding them, nothing more and nothing less. If a station is moved, then it has a new microclimate. Comparing the new station to the record from the original location tells you nothing. All kinds of things can change the temperature even within mere feet of the old station.

If a station is moved, the old record should be terminated and a new data record started.

Jim Gorman
Reply to  Nick Stokes
February 23, 2022 8:06 pm

What a load. This is what Anthony is looking for in another thread.

Why was the station record reduced for the whole preceding time of its existence? In real science the record would have been stopped and an entirely new one started. Replacing DATA with MADE UP information is simply not done in science. You can’t justify doing so.

Would the FAA let Boeing replace recorded data with calculated “corrections”? Would the EPA allow previously recorded data to be replaced with calculated “corrections”? Could Pfizer tell the FDA that they want to replace previous data with calculated “corrections”? The answer is a big NO. Any “correction” would also need to be NEW MEASUREMENTS and would replace the old in its entirety. That is a far cry from replacing data with MADE UP “corrections”.

LdB
Reply to  Jim Gorman
February 23, 2022 11:32 pm

As per above, even the original data has some issues, because it is warming heavily pre-1950. There is clearly a lot going on at the site, and yes, it needs to be tossed out, or at least some real investigation made as to what was happening.

About all you can see is why the CAGW faithful want to keep the site in: it is going the right way, even if likely for very wrong reasons.

Reply to  Eric Worrall
February 23, 2022 4:37 pm

Admiral Worrall, I like your many articles here. You and WE are the kings of WUWT.

There are original raw measurements, and also wild guesses called infilling

How can one estimate error bars on infilled numbers that can never be verified?

And the rest of the numbers in the global average are “adjusted” numbers,
not raw data. Adjustments are personal opinions on what the raw measurements
would have been if measured correctly in the first place.

How can one state margins of error for adjusted and infilled numbers?

mal
February 22, 2022 11:09 pm

Anyone who changes data should be fired or lose their job. You can’t “fix” data; it is what it is. If your guess does not align with the data, the guess is wrong, not the data.

Measurement of temperature in a given place at a given point in time is just that; what you can get out of it only applies to that place and the time the measurement was made. It is not a reflection of the next station or the world; it is a reflection of what is happening at that place in a given time period.

The thought that you can come up with a “world temperature” is insanity. If you claim to understand what is going on, you had best explain why some stations are going up while others are going down. Adjusting those going down to up is fraud.

I also don’t give a whit that the adjustments are approved and well understood and peer reviewed. That was also true of the mafia’s books.

Mike McMillan
Reply to  mal
February 23, 2022 2:24 am

In 2009, they did “fix” the data, the original, raw data. They made it better, but they didn’t tell anyone that they fixed it, and they left the raw (pure, unadulterated data) label on it, so that any scientist looking for global warming would see that, by golly, it’s right there in the raw data.

The poster child was Olney, Illinois.
[graph: Olney, Illinois station record, before and after revision]

I blinked every station in Illinois, Iowa, and Wisconsin.
https://www.rockyhigh66.org/stuff/USHCN_revisions.htm

Scissor
Reply to  Mike McMillan
February 23, 2022 5:21 am

Blinking awesome!

bdgwx
Reply to  Mike McMillan
February 23, 2022 6:59 am

Are you sure your November 09 raw images are correct? I’m asking because I looked at several of the stations on that website including Olney that you posted here and I’m seeing raw data that is consistent with the July 09 raw images when using the station data provided by GISTEMP here.

Carlo, Monte
Reply to  bdgwx
February 23, 2022 7:35 am

Don’t you have data you should be busy corrupting rather than spending time on WUWT?

Mike McMillan
Reply to  bdgwx
February 23, 2022 2:00 pm

The charts were absolutely correct at the time I posted the blinks in 2009, when they went internationally viral before viral was an internet word. I had 130,000 downloads when I finally took the counter off the pages.

With their fraud exposed, they went through and calmed down the obviously egregious alterations, and re-labeled everything as USHCN version 2. So you won’t get the same charts today that I discovered in 2009.

The currently downloadable “raw” data are still not the original numbers. The only way to get that is to download the original B-91 forms that the observers filled out when they read the thermometers.

Old B-91 scans are downloadable here:
https://www.ncdc.noaa.gov/IPS/coop/coop.html
Many are unreadable. Of course.

E.M. Smith’s web site helped spread the word in 2010. The similar French and Scandinavian web site pages aren’t online anymore.
https://chiefio.wordpress.com/2010/01/15/ushcn-vs-ushcn-version-2-more-induced-warmth/

bdgwx
Reply to  Mike McMillan
February 24, 2022 8:03 am

Thanks for the response. I wonder what happened in November of 2009 then and why we don’t see it today?

Clyde Spencer
Reply to  Mike McMillan
February 23, 2022 3:05 pm

Something to note is that by ‘correcting’ the data, rationalized as improving it, the standard deviation has increased significantly because of the trend introduced. That means that the uncertainty for the entire data set has increased.
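Spencer’s point can be seen directly: adding a trend to a stationary series widens its spread about the mean, since the variance of the combined series picks up the variance contributed by the trend. A toy sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
stationary = rng.normal(0.0, 0.3, n)  # noise-only series, sd ~0.3
trend = 0.01 * np.arange(n)           # imposed warming trend per step

sd_raw = stationary.std()
sd_adjusted = (stationary + trend).std()  # spread now includes the trend
```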

Tim Gorman
Reply to  mal
February 24, 2022 10:56 am

The “world temperature” is a joke from the beginning. At any point in time the average temp in Kansas may be 80F = 27C and in South America it may be 20F = -7C.

The average of the two averages is 50F. Does that average of averages tell you *anything* about the climate in Kansas and South America? It certainly doesn’t tell me squat.

The global average temperature is no different. Does a change in the GAT happen because of Tmax changes, Tmin changes, or a combination? You simply cannot tell. If the change is because of Tmin going up, is that somehow a bad thing? If it’s not a bad thing, then why is so much money being spent on it and quality of life being degraded so badly?

Thomas
February 22, 2022 11:43 pm

How about just throw the whole record in the trash bin? Temperature isn’t a measure of the heat content of an atmosphere that includes water vapor. Enthalpy is.

Enthalpy is a measure of the heat content of atmospheric air plus water vapor. It’s the temperature plus the “latent heat” that is associated with water vapor. It takes energy to evaporate water, and energy is released when water vapor condenses. Enthalpy accounts for that energy. Its units are BTUs per pound of dry air and associated water vapor (or kJ/kg).

The enthalpy (total heat content) of air at 115 °F with 5% relative humidity, is the same as air at 70 °F and 80% relative humidity, but the difference in temperature is 45 °F. Temperature has nothing to do with the heat content of Earth’s atmosphere.

A global average temperature is a metric without meaning.

Burbank, California, 20 September 2021 at 4:53 PM, 87 °F, 45 °F dew point = total heat content of 28.2 BTU/lb.

Burbank, California, 12 February 2022 at 1:53 PM, 87 °F, 8 °F dew point = total heat content of 22.3 BTU/lb.

The temperature is the same, in the same location, but the total heat content is 25% greater in September, because there is more moisture in the air.
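Thomas's Burbank figures can be reproduced to within a few tenths with a standard psychrometric calculation. The sketch below assumes sea-level pressure and uses the Magnus approximation for saturation vapor pressure, so the results differ slightly from the quoted values:

```python
import math

def sat_vapor_pressure_hpa(t_c):
    # Magnus approximation for saturation vapor pressure over water,
    # t_c in deg C, result in hPa
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def enthalpy_btu_per_lb(temp_f, dewpoint_f, pressure_hpa=1013.25):
    """Moist-air enthalpy in BTU per lb of dry air.

    Uses the common Imperial psychrometric formula
    h = 0.240*T + W*(1061 + 0.444*T), with T in deg F and
    W the humidity ratio in lb water per lb dry air.
    """
    dewpoint_c = (dewpoint_f - 32.0) * 5.0 / 9.0
    vapor_pressure = sat_vapor_pressure_hpa(dewpoint_c)  # vapor pressure at the dew point
    w = 0.622 * vapor_pressure / (pressure_hpa - vapor_pressure)
    return 0.240 * temp_f + w * (1061.0 + 0.444 * temp_f)

# Burbank: same 87 F dry-bulb, very different dew points
print(enthalpy_btu_per_lb(87, 45))  # ~27.8 BTU/lb (Thomas quotes 28.2)
print(enthalpy_btu_per_lb(87, 8))   # ~22.4 BTU/lb (Thomas quotes 22.3)
```

The dry-bulb temperature is identical in both cases; the difference in heat content comes entirely from the humidity ratio.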

Or consider this: during an El Niño, warm water that was stored in the Pacific warm pool sloshes back across the Pacific, causing the atmosphere to warm. But that heat is just on its way from the warm pool to deep space. It passes through the atmosphere, causing warming, but the planet is actually cooling.

A global average temperature is a metric without meaning.

Last edited 3 months ago by Thomas
Peta of Newark
Reply to  Thomas
February 23, 2022 12:18 am

Yup..
A warming atmosphere is a cooling Earth. Entropy and The Entire Universe is absolutely screaming that fact – why does nobody hear it?

And while they’re sorting that AND getting a handle on it AND because it is supposedly all about the surface of the Earth, let’s include the energy contained inside the top 1 metre of soil/dirt ##

To get a true reflection of Earth’s actual average temperature (and energy content) use the most sensitive and accurate thermometer there ever could be.
It’s a gas thermometer.
Such instruments work from first principles and are what scientists of centuries ago could and did use to read temperatures down to 0.001 Celsius. Repeatable and accurate.

It’s called the stratosphere.
Modern science, including the satellites, tells us it is cooling. Apparently – but is it actually? Because the insane theory of the GHGE says that heat is trapped in the troposphere – hence the stratosphere must be cooling.
Is that another ‘adjustment’?

## My pet rave since forever, and why I have a temperature data-logger buried 50cm deep in my garden, directly 2 metres below an identical logger.
They do not follow any sort of identical trajectory over any 24-hour period, nor over any period of 365 days.

Michael S. Kelly
Reply to  Thomas
February 23, 2022 3:56 pm

“Temperature isn’t a measure of the heat content of an atmosphere that includes water vapor. Enthalpy is.”

Ditto. Well said, and well illustrated. It’s a point I’ve made more than once in these pages, only to be rebuffed almost every time.

The rebuffs make no sense. People cite their favorite TOA radiation imbalance number, then do all sorts of calculations to demonstrate the effect on atmospheric temperature. But a radiation imbalance is a difference between energy in and energy out, so what we are really talking about is a change in the Earth’s total energy. For the atmosphere, that is enthalpy – nothing more nor less. And the water vapor content at the points of measurement has an enormous influence on the enthalpy.

bdgwx
Reply to  Thomas
February 23, 2022 8:15 pm

Song et al. 2022 provide the equivalent potential temperature (theta-e) data. Since 1980 the dry-bulb temperature has increased about 0.75 C while theta-e increased about 1.3 C.

Jim Gorman
Reply to  Thomas
February 24, 2022 6:01 pm

The unanswered question is just what CO2 adds to the enthalpy at that point in the atmosphere. We can easily calculate what H2O does to the enthalpy. Do you ever wonder why no scientist has worked out how CO2 adds heat to the atmosphere, and how much? Maybe it is too small to measure when compared to water vapor?

Nick Stokes
February 22, 2022 11:50 pm

“Stop reporting the results of the USHCN/GHCN temperature series to the hundredth of a degree C.”
People who cite USHCN are those who get their knowledge of temperature measurement from Tony Heller. In the real world, USHCN was replaced by ClimDiv eight years ago. The authors of this MDPI paper also seem to be unaware of the change. They must be getting their updates from Heller.

I would have thought that with this galaxy of talent, they might have done what I have never seen a “sceptic” do, which is to calculate the global average with and without adjustments, to see what is the real effect of homogenisation. I regularly calculate the average using unadjusted data, but I check with adjusted data too. I described the result of one comparison here.

I showed the result as a graph of trend to present, calculated either way. The x-axis year is the starting point of the period, so recent years give silly results. The purple line used adjusted GHCN data, the blue unadjusted. The difference to trend is small, and has a cooling effect for trends since about 1970. Here is the graph:

[graph image]
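For readers who want to try this themselves, the "trend to present" calculation is just an ordinary least-squares slope from each candidate start year to the end of the record. A minimal sketch, with a synthetic anomaly series standing in for the real GHCN data:

```python
def ols_slope(xs, ys):
    # Ordinary least-squares slope of ys regressed on xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1900, 2021))
# Synthetic anomaly series: flat until 1970, then warming at 0.015 C/yr
anoms = [0.0 if y < 1970 else 0.015 * (y - 1970) for y in years]

# Trend from each start year to the end of the record, in C per century
trend_to_present = {
    y0: 100 * ols_slope(years[i:], anoms[i:])
    for i, y0 in enumerate(years[:-20])  # skip very short recent periods
}
print(trend_to_present[1900])  # diluted by the flat early decades
print(trend_to_present[1980])  # 1.5 C/century: the full post-1970 rate
```

Running the same calculation twice, once on adjusted and once on unadjusted data, and differencing the two curves, gives the kind of comparison plotted above.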

I see BTW that there is an indication of the seriousness of MDPI peer review. The paper was submitted 13 Jan, accepted 6 Feb.

Thomas
Reply to  Nick Stokes
February 23, 2022 12:19 am

Thanks Nick. And welcome back.

Klem
Reply to  Nick Stokes
February 23, 2022 2:12 am

Ooh that annoying Tony Heller, doesn’t he just get under your skin, Nick?

Sebastian Magee
Reply to  Nick Stokes
February 23, 2022 2:45 am

So your graph proves the correction introduces an acceleration in the temperatures. The trends are lowered before 1970 and increased after. Since this occurs in the trends, it implies an acceleration in the temperatures.
This is a very misleading graph.

Mark BLR
Reply to  Nick Stokes
February 23, 2022 5:34 am

I would have thought that with this galaxy of talent, they might have done what I have never seen a “sceptic” do

I showed the result as a graph of trend to present

One problem with trends is that they are, to use an English colloquialism, “tricky buggers”. They very often do “unexpected” things.

An alternative to “trend from start / to end” graphs is “sliding window” versions, which have a set of fixed-length trends plotted instead.

The standard integration time for “climate” is 30 years, but in AR6 (Section 1.4.1, page 1-54) they state:
“In AR6, 20-year reference periods are considered long enough to show future changes in many variables when averaging over ensemble members of multiple models, and short enough to enable the time dependence of changes to be shown throughout the 21st century.”

Extending “N-year reference period anomaly offset calculations” to “N-year trend calculations” I plotted the 20-year (trailing) trends for various IPCC “projections” and the main instrumental GMST records (see graph below).

NB : For the RCP (CMIP5 / AR5) model runs, the common “Historical Data” inputs went to 2005, with individual “pathway” inputs used from 2006.
For the SSP (CMIP6 / AR6) model runs, the “Historical Data” was updated and extended to 2014, individual “pathways” only split off from 2015 onwards.

Note how the NCEI and GISS datasets deviated from the other (relatively ?) “tightly grouped” instrumental datasets for the set of 20-year periods from (roughly) 1890-1910 to 1955-1975. These differences would have been masked using your “trend to present” methodology.

Note also how badly the (trailing) trends of the model reconstructions for the section of the “Historical” period from 1935 to 1965 compare to those of the instrumental datasets, as well as during the most recent (since 2000) part of the “Modern Warming” period.

[graph: RCP-SSP-Instrumental_20-yr-Trends_1870-2022.png]
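The sliding-window calculation described above can be sketched in a few lines. Synthetic anomalies stand in for the actual datasets, and the 20-year window follows the AR6 convention quoted earlier:

```python
def ols_slope(xs, ys):
    # Ordinary least-squares slope of ys regressed on xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

years = list(range(1950, 2021))
# Synthetic anomalies: steady 0.01 C/yr warming plus a 0.2 C step in 2000
anoms = [0.01 * (y - 1950) + (0.2 if y >= 2000 else 0.0) for y in years]

WINDOW = 20  # trailing window length, per the AR6 20-year reference periods
trailing = {
    years[i + WINDOW - 1]: 100 * ols_slope(years[i:i + WINDOW],
                                           anoms[i:i + WINDOW])
    for i in range(len(years) - WINDOW + 1)
}
# trailing[1990] is the 1971-1990 trend in C per century, and so on
```

Unlike the trend-to-present curve, each point here covers the same span of years, so a local feature (like the step in this toy series) shows up only in the windows that straddle it.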
Last edited 3 months ago by Mark BLR
Nick Stokes
Reply to  Mark BLR
February 23, 2022 2:32 pm

“These differences would have been masked using your “trend to present” methodology.”

Well, I was focussed on the difference between adjusted and unadjusted, and people normally criticise adjustment in terms of its effect on trend to present.

But hey, I can do 20-year trends too. Here are the results from my program TempLS using adjusted and unadjusted data, compared with Had5, GISS and NOAA (NCEI). I haven’t done the HAD relatives, so now HAD looks like the outlier. Otherwise it looks like yours.

[graph image]

It’s a bit crowded, so I plotted the same data in terms of difference from TempLS (unadjusted):

[graph image]

Now it’s clear that the difference due to adjustment is less than the difference between different indices.

Last edited 3 months ago by Nick Stokes
Mark BLR
Reply to  Nick Stokes
February 24, 2022 4:04 am

Well, I was focussed on the difference between adjusted and unadjusted …

Yes, my post is slightly “off topic”.

Probably due to associative memory, I saw “Trends” and “°C per century” and thought “Hey ! Didn’t I do something along those lines a month or so ago ?”.

I haven’t done the HAD relatives, so now HAD looks like the outlier.

I compared :
1) GISS (NASA)
2) NCEI (NOAA)
3 + 4) The “old” HadCRUT4 and its “kriged” version (Cowtan & Way)
5 + 6) The “new” HadCRUT5, in both “Infilled” and “Non-infilled” versions
7 + 8) Both version of the BEST reconstruction (“Air” and “Water”)
9 to 12) The 4 main CMIP5 (RCP) “ensemble means”
13 to 16) 4 of the 5 main CMIP6 (SSP) “ensemble means”

I noted that among the (8) “instrumental / measurement” datasets “GISS + NCEI” were similar, but differed from the other “group” of six options.

Here are my program TempLS using adjusted and unadjusted data, compared with Had5, GISS and NOAA (NCEI).

You only compared GISS, NCEI and “Had5” (which version ???). Given the similarity between GISS and NCEI it is not surprising that you ended up with “now HAD looks like the outlier”.

From your “A guide to the global temperature program TempLS” [ now using GHCN V4 ] webpage :

TempLS comes in various flavours, depending on the integration scheme. Most prominent in reports is “mesh”, based on a global triangular mesh. It most closely resembles GISS. The other regularly reported is the older “grid”, which works like HADCRUT and NOAA.

Each month, on about the 8th, when all the main regions have reported, I post a discussion of results, including a map based on a spherical harmonics fit. This map is designed to match that of GISS, and when this comes out a week or so later, I report on that with comparison of numbers and maps. I have been doing this since July 2011.

It is not surprising that your “TempLS program” has results closer to my “GISS + NCEI pair” than “the group of 6 others”.

Now it’s clear that the difference due to adjustment is less than the difference between different indices.

Looking at your graph, if you squint a bit it looks like “TempLS (unadjusted) – TempLS (adjusted)” (the smooth purple line) and “TempLS (unadjusted) – GISS” (the jagged lime-green line) are very similar from 1920 to 1970 or so …

I can’t visualise the result in my head (especially post 2005-ish), please could you add “NOAAlo – GISSlo” to your (second) graph ?

Nick Stokes
Reply to  Mark BLR
February 24, 2022 1:22 pm

Here is another graph showing differences from GISS. NOAA-GISS is the greenish curve.

[graph image]

The Had5 is the infilled version. I think infilling was always the right thing to do, so I think the other version is just to save face for a while, and will disappear.

bdgwx
Reply to  Nick Stokes
February 24, 2022 2:28 pm

Infilling has to be the right thing to do. Otherwise, anyone who takes the spatial average of 85% of the grid cells and treats it as if it covered 100% is implicitly “infilling” the remaining 15% of the cells with the average of the 85%, whether they realize it or not. At least with HadCRUTv5’s kriging-like method the infilling uses a locally weighted strategy, instead of just assuming the empty cells follow the average of the entire globe.
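The arithmetic behind this is easy to verify with a toy grid: averaging only the observed cells gives exactly the same number as explicitly filling each missing cell with the mean of the observed ones.

```python
# A toy grid of anomalies; None marks cells with no observations
cells = [0.2, 0.5, None, 0.1, None, 0.4, 0.3, None, 0.6, 0.2]

observed = [c for c in cells if c is not None]
obs_mean = sum(observed) / len(observed)

# "Ignore the gaps": average only the observed cells
avg_ignoring_gaps = obs_mean

# Explicit infill: fill each gap with the observed mean, then average all cells
filled = [c if c is not None else obs_mean for c in cells]
avg_after_infill = sum(filled) / len(filled)

# Identical: leaving cells empty IS infilling them with the global mean
assert abs(avg_ignoring_gaps - avg_after_infill) < 1e-12
```

A locally weighted infill (kriging, nearest-neighbor weighting, etc.) would instead fill each gap from its neighbors, which is the difference bdgwx is pointing at.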