Hidden Behind Climate Policies, Data From Nonexistent Temperature Stations

Hundreds of ‘ghost’ climate stations are no longer operational; instead they are assigned temperatures from surrounding stations.

This article in The Epoch Times is partially paywalled, depending on whether or not you’ve used up your free subscription. It is a summation of issues we’ve talked about on this site for years.

“Earth’s issuing a distress call,” said United Nations secretary-general António Guterres on March 19. “The latest State of the Global Climate report shows a planet on the brink.

“Fossil fuel pollution is sending climate chaos off the charts. Sirens are blaring across all major indicators: Last year saw record heat, record sea levels, and record ocean surface temperatures. … Some records aren’t just chart-topping, they’re chart-busting.”

President Joe Biden called the climate “an existential threat” in his 2023 State of the Union address. “Let’s face reality. The climate crisis doesn’t care if you’re in a red or a blue state.”

In his 2024 address he said, “I don’t think any of you think there’s no longer a climate crisis. At least, I hope you don’t.”

When recalling past temperatures to make comparisons to the present, and, more importantly, inform future climate policy, officials such as Mr. Guterres and President Biden rely in part on temperature readings from the United States Historical Climatology Network (USHCN).

https://www.theepochtimes.com/article/hidden-behind-climate-policies-data-from-nonexistent-temperature-stations-5622782?welcomeuser=1

However…

The problem, say experts, is that an increasing number of USHCN’s stations don’t exist anymore.

One of those experts, John Shewchuk, goes on to describe the problem.

“NOAA fabricates temperature data for more than 30 percent of the 1,218 USHCN reporting stations that no longer exist.”

He calls them “ghost” stations.

Mr. Shewchuk said the USHCN reached a maximum of 1,218 stations in 1957, but after 1990 the number of active stations began declining due to aging equipment and personnel retirements.

NOAA still records data from these ghost stations by taking the temperature readings from surrounding stations, and recording their average for the ghost station, followed by an “E,” for estimate.
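
To make the mechanics concrete, here is a minimal sketch of that kind of infilling, with invented station readings; NOAA’s actual homogenization and infilling software is far more involved, so treat this only as an illustration of the idea.

# Sketch only: a "ghost" station's monthly value estimated as the average of the
# readings from surrounding stations, flagged with an "E" for estimate.
def estimate_ghost_value(neighbor_readings):
    valid = [r for r in neighbor_readings if r is not None]
    if not valid:
        return None, ""                      # nothing to estimate from
    return sum(valid) / len(valid), "E"      # estimated value plus the "E" flag

neighbors = [71.2, 69.8, None, 70.5]         # hypothetical neighboring-station means, deg F
value, flag = estimate_ghost_value(neighbors)
print(f"{value:.1f}{flag}")                  # 70.5E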

The addition of the ghost station data means NOAA’s “monthly and yearly reports are not representative of reality,” said Anthony Watts, a meteorologist and senior fellow for environment and climate at the Heartland Institute.

“If this kind of process were used in a court of law, then the evidence would be thrown out as being polluted.”

Mr. Shewchuk said the USHCN data is the only long-term historical temperature data the United States has.

“In these days of apparent ‘climate crisis,’ you would think that maintaining actual temperature reporting stations would be a top priority—but they instead manufacture data for hundreds of non-existent stations. This is a bizarre way of monitoring a climate claimed to be an existential threat,” he said.


Emphasis mine in the following quotes from the article.

“For various reasons, NOAA feels the need to alter this data instead of fixing equipment problems they think exist,” Mr. Shewchuk said.

“Fixing temperature reporting stations is not rocket science. If we can go up to space to fix the Hubble telescope, we can surely come down to earth to fix a few thermometers.”

NOAA’s use of ghost temperature stations isn’t a recent phenomenon. In 2014, Mr. Watts raised the issue of ghost stations and bad data with NOAA’s chief scientist at the National Climatic Data Center, Tom Peterson, and Texas’ state climatologist, John Nielsen-Gammon, who confirmed there was an issue.

“Anthony – I just did a check of all Texas USHCN stations. Thirteen had estimates in place of apparently good data,” Mr. Nielsen-Gammon wrote in an email to Mr. Watts, according to a report on the latter’s website.

“It’s a bug, a big one. And as Zeke [Hausfather] did a cursory analysis Thursday night, he discovered it was systemic to the entire record, and up to 10 percent of stations have ‘estimated’ data spanning over a century.”

At the time, Mr. Watts reported on his climate website, “Watts Up With That,” that NOAA was taking the issue seriously, and he expected a fix to be issued shortly.

That fix never materialized. “They’re still doing it, and it’s even worse,” he said.

Anthony is quoted further in the article.

NOAA’s Cooperative Observer Program, which includes the USHCN stations, is a network of daily weather observations taken by more than 8,500 volunteers, its webpage states.

Mr. Watts said the process for volunteers is “labor intensive.”

[Photo caption: (L–R) Philippe Papin, hurricane specialist at the National Hurricane Center, and Richard Pasch, senior hurricane specialist, work on tracking unsettled weather over the eastern Gulf of Mexico in Miami on May 31, 2023. (Joe Raedle/Getty Images)]

“It requires people to record high and low temperature, rainfall, the temperature at the time of observation, and do it at a very specific time, every day. And this has to then be recorded and sent to the National Climatic Data Center in Asheville, now known as the National Centers for Environmental Information,” he said.

“Some of it’s still done on paper, some of it’s still done with touchtone over the telephone. It requires a lot of dedication and effort on the part of the observer. It’s a thankless job. And as a result, observers have been disappearing. A lot of them have left due to attrition by death. And then there’s no one to take on that job.”

Mr. Watts explained that when that happens, instead of subtracting the unmanned station from the overall number of USHCN stations, NOAA creates a number from surrounding stations.

“As a result, we end up with this milkshake of data that is basically a hot mess, and isn’t real in most cases,” Mr. Watts said.

And further down.

The Bigger Issue

According to Mr. Watts, ghost stations are problematic but are only part of a much bigger problem.

He explained that several different entities—such as the European Commission’s Copernicus, NASA’s Goddard Institute for Space Studies (GISS), Berkeley Earth Surface Temperatures (BEST), and NOAA—publish monthly and yearly climate data and advertise themselves as having “independent data.”

“That is a lie,” Mr. Watts said about the independent data claim.

“The USHCN data set and the [new] nClimDiv climate division data set [which uses the same stations and has the same problems] comes from the Cooperative Observer [Program] in the United States.

“Similarly, in the rest of the world, there is a Cooperative Observer [Program] that suffers from the same problems of attrition and incompetence. And it’s called the GHCN; the Global Historical Climatology Network.

“All these different entities out there, like NOAA, GISS, BEST, all the entities I listed, use the same data from GHCN. And they all apply their own set of ‘special sauce’ adjustments to create what they believe is true.

“It’s almost like each of these entities is creating their version of the real, true God. You know, it’s like a religion. They’re using different mathematical and statistical techniques to produce their version of climate reality.

“And it all goes back to the same original, badly-sited, badly-maintained ghost station dataset around the world. USHCN and GHCN are the same stuff. So, there is no independent temperature dataset. It’s bogus that anyone claims this.”

586 Comments
Tom Halla
April 12, 2024 6:10 am

NOAA has fallen into the class of Sir Cyril Burt, and just makes sh!t up.

James Cole
Reply to  Tom Halla
April 12, 2024 7:12 am

Tony Heller has been documenting this statistical fudgery at realclimatescience.com for over a dozen years. The adjustments have been subtly incremental but massively biased over time. NOAA uses the “ghost” infill process to cool the past and warm the present, adding well over 1.0 °F to the trends over the last 100 years.
The 1920s were famously hot (widespread Arctic melting and glacial retreat), but NOAA’s temperature history chart would have you believe this was one of the coldest periods since the 1880s.
Just one of many examples of how utterly worthless/deceptive the NOAA ghost station adjustments have been.

observa
Reply to  Tom Halla
April 12, 2024 8:58 am

Unlike the superior scientific Oz BoM-
Winter set to be one of warmest on record, BoM says (msn.com)

ozspeaksup
Reply to  observa
April 13, 2024 4:04 am

BOM did NOT record rainfall at all for my town for the first 24 days of Jan; meanwhile we had good rains, up to 70mm. They started Jan 24th and so far show about 12mm for the yr to date. NOT good enough.

Ed Zuiderwijk
Reply to  Tom Halla
April 12, 2024 9:56 am

You perpetuate the myth that Burt fabricated his data. He didn’t.

Tom Halla
Reply to  Ed Zuiderwijk
April 12, 2024 11:13 am

The “defense” in “The Bell Curve” was singularly unpersuasive. Having the same correlation coefficient to three decimal places as a study expanded some threefold in group size is, to put it lightly, indicative of fraud.

Ed Zuiderwijk
Reply to  Tom Halla
April 12, 2024 12:10 pm

Look at the kurtosis and skewness of the distribution, which, incidentally, is quite non-Gaussian. His accusers therefore claim that not only did he get the correlation that later studies confirmed, he also had ‘guessed’ the correct shape of the distribution. Pure genius if he did that.

Why was Burt vilified? First, he was dead so could not defend his work. Second, he showed that intelligence has a strong genetic component to it, and that was anathema for the lefty liberals who claimed it was all due to environment. Therefore, they attacked the messenger and anything that could be found would do. Someone could not find the two assistants who worked with Burt, as reported by him, in the census data and claimed therefore they did not exist. Proof of fraud, surely. Only the ladies concerned had married, taken the names of their husbands, and emigrated to Australia.

Reply to  Tom Halla
April 12, 2024 11:42 am

More Fake Data, what a surprise.

Reply to  karlomonte
April 12, 2024 1:34 pm

NASA/NOAA treat the ability to just-make-up-numbers

… as a FEATURE, not a problem to be fixed.

Mr.
April 12, 2024 6:25 am

I asked in a comment 2 days ago if there was an ISO (International Organization for Standardization) issued Standard that specifies the methodology and verification for constructing global average temperature values.

Apparently not.

And yet, there is a Standard that specifies and checks the Quality of an organization’s production methods and practices.

Maybe a new International Standard is required for “Irrelevant, Sloppy Work”?

Reply to  Mr.
April 12, 2024 11:58 am

Sounds like the ISO 9000 quality system.

Mr.
Reply to  karlomonte
April 12, 2024 12:24 pm

Yes, 9001 if I remember correctly.

Reply to  Mr.
April 12, 2024 12:36 pm

9000 through 9004 when I was subject to them.

Kevin R.
Reply to  karlomonte
April 12, 2024 5:51 pm

The ISO 9000 is very dedicated to the mission.

Reply to  karlomonte
April 13, 2024 6:17 am

You would think with the global trillions of dollars being given to the renewable energy elites, there would be a bottom up analysis from temperature/enthalpy to energy provision. An ISO 9000 process should identify some of the questionable “facts” being tossed around.

Reply to  Jim Gorman
April 13, 2024 7:30 am

Yes, plus the modelers would have to bring all their software into compliance. I could imagine it being a huge task.

Reply to  karlomonte
April 13, 2024 11:46 am

How about the word impossible!

Reply to  Mr.
April 13, 2024 3:43 pm

Dear Mr.

The World Meteorological Organisation publishes standards relating to homogenisation, instruments, layout of met-enclosures … all there, searchable on their website. Don’t go near the GUM tho…

Cheers,
Dr Bill Johnston
http://www.bomwatch.com.au

April 12, 2024 6:28 am

Using fabricated temperature data from a USHCN ghost station is no different than withdrawing money from an imaginary bank account.

SteveZ56
Reply to  John Shewchuk
April 12, 2024 11:51 am

The US Treasury does it all the time.

JamesB_684
Reply to  SteveZ56
April 12, 2024 2:21 pm

More accurately, the Federal Reserve fabricates currency out of thin air using computers, and uses those funds to purchase Treasuries. So … it’s a bit of sleight of hand involving indirect imaginary bank accounts.

Editor
April 12, 2024 6:32 am

This is a very welcome update to the issue, I appreciated the non-strident tone, as will people I share this with.

Nick Stokes
Reply to  Ric Werme
April 12, 2024 2:16 pm

It’s WUWT that needs updating. USHCN has not been used for 10 years. NOAA now uses nClimDiv.

I’ve no idea where they even get these numbers of “ghost stations”. Because it did not use anomalies, the USHCN average had a structure with a fixed number of stations and interpolated missing data. That was a good idea initially, but needed to be changed when the mostly volunteer stations dropped out. That change happened 10 years ago.

sherro01
Reply to  Nick Stokes
April 12, 2024 2:52 pm

So, Nick,
What is the best, approved text book for mathematical methods to calculate the errors and uncertainties of these “guesses” that are not observed data?
Geoff S

Nick Stokes
Reply to  sherro01
April 12, 2024 4:17 pm

Geoff,
Like most people here, you don’t seem to be able to process the simple fact that USHCN was replaced by nClimDiv in March 2014. nClimDiv is anomaly based and does not need to do that.

Of course, we will never have more than a finite number of samples (measurements), so there is always spatial sampling uncertainty. Morice et al 2012 is the standard reference for that.
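
For readers unfamiliar with the distinction Nick is drawing, here is a minimal sketch (with made-up numbers, not nClimDiv’s actual gridding) of why an average of anomalies tolerates station dropout better than an average of absolute temperatures:

# Made-up illustration: two stations with very different climatologies, both 0.5 C
# above their own base-period means. Averaging absolute temperatures jumps when the
# cold station drops out; averaging anomalies does not.
import statistics

base_means = {"valley": 15.0, "mountain": 5.0}   # base-period means, deg C (invented)
this_year  = {"valley": 15.5, "mountain": 5.5}   # current-year means, deg C (invented)

def abs_average(stations):
    return statistics.mean(this_year[s] for s in stations)

def anom_average(stations):
    return statistics.mean(this_year[s] - base_means[s] for s in stations)

print(abs_average(["valley", "mountain"]), abs_average(["valley"]))    # 10.5  15.5
print(anom_average(["valley", "mountain"]), anom_average(["valley"]))  # 0.5   0.5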

Reply to  Nick Stokes
April 12, 2024 4:31 pm

All ClimDiv has to do is match USCRN reasonably well!

Reply to  bnice2000
April 12, 2024 5:09 pm

All ClimDiv has to do is match USCRN reasonably well!

How many times does it have to be pointed out that Climdiv and USCRN are statistically indistinguishable over their joint period of measurement and that, rather amusingly, the so-called ‘adjusted’ data are warming slower than the ‘pristine’ ones.

You really couldn’t make it up!

Reply to  TheFinalNail
April 13, 2024 9:19 am

Two wrongs don’t make a right!

sherro01
Reply to  Nick Stokes
April 12, 2024 5:34 pm

Nick,
Please answer my question.
I wrote nothing about USHCN vs. nClimDiv.
I asked for a text with methodology to assign uncertainty to guesses.
Geoff S

Nick Stokes
Reply to  sherro01
April 12, 2024 6:43 pm

Loaded term, guesses, Geoff. Give me an example of what you are talking about.

AW and John Shewchuk are complaining about a specific issue with USHCN, which they call a guess. It’s actually a sound interpolation procedure, but whatever, it hasn’t been in use for 10 years.

Reply to  Nick Stokes
April 12, 2024 10:02 pm

The same issues exist in any ClimDiv before (or after) 2014 as in USHCN.

The reason they changed it was because USCRN was making a nonsense of the farcical USHCN.

ClimDiv is just a variant of the USHCN, re-adjusted to make sure it matches USCRN closely.

sherro01
Reply to  Nick Stokes
April 13, 2024 4:41 am

Nick,
Here is the example you seek. Others have mentioned data that are not observed but are calculated, whether by interpolation or pair matching; it matters not, because they are not observed. Extending, you maintain a global average temperature database that includes historic sea surface temperature data from times when there were no observing stations near enough for good relevance, to help create made-up numbers that for shorthand I call “guesses”.
How do you estimate the uncertainty of such guesses, either used alone or as infills in sets of multiple observations? Geoff S

Reply to  Nick Stokes
April 12, 2024 9:49 pm

Please explain how using anomalies doesn’t require an estimate for missing station data.

Reply to  Nick Stokes
April 13, 2024 9:19 am

“nClimDiv is anomaly based and does not need to do that.”

What an utter load of crap!

The measurement instruments used to create the nClimDiv data DO HAVE MEASUREMENT UNCERTAINTIES. You simply cannot eliminate these through the use of anomalies.

If I have a base of 5deg +/-0.1deg and a measurement of 5.1deg +/-0.1deg, I CAN NOT eliminate the measurement uncertainty by subtracting the two!!

The base uncertainty interval is at least 4.9deg to 5.1deg. That of the measurement is 5deg to 5.2deg. Therefore the difference (measurement – base) could range from 5deg – 5.1deg = -0.1deg to 5.2deg – 4.9deg = +0.3deg. It could even be 0 (zero).

You didn’t eliminate the uncertainty of the anomaly; in fact, it’s larger than the uncertainty of the elements used to calculate it. Nor did you isolate the trend between the two points; it could be positive, negative, or zero. The actual anomaly, the true value, is part of the GREAT UNKNOWN.

As a check, exactly what is the variance of the data used to find the baseline? What is the variance of the actual measurement used to find the anomaly? If you don’t know then you have to assume it is the measurement uncertainty of the instrument, probably something like +/- 0.5C.

You’ve been asked for these variances before and failed to answer. That failure means you don’t know. Which means you don’t care what the variance actually is. A distribution CAN NOT be described solely by the average value. You *must* know the variance as well, and a complete description would include the skew and kurtosis. A 5-number statistical description would be even more valuable. But my guess is that you don’t even know the values that go into the 5-number description.
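
As a numerical cross-check of the worst-case arithmetic in the comment above (the ±0.1 deg figures are the commenter’s hypothetical values, not instrument specifications), here is a small sketch, including the root-sum-square combination the GUM would give for comparison:

import math

base, u_base = 5.0, 0.1      # hypothetical base value and uncertainty, deg
meas, u_meas = 5.1, 0.1      # hypothetical measurement and uncertainty, deg

anomaly    = meas - base                               # nominal: +0.1
worst_low  = (meas - u_meas) - (base + u_base)         # 5.0 - 5.1 = -0.1
worst_high = (meas + u_meas) - (base - u_base)         # 5.2 - 4.9 = +0.3
u_rss      = math.sqrt(u_meas**2 + u_base**2)          # ~0.14, quadrature combination

print(round(anomaly, 2), round(worst_low, 2), round(worst_high, 2), round(u_rss, 2))
# 0.1 -0.1 0.3 0.14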

Reply to  Tim Gorman
April 13, 2024 4:13 pm

Dear Tim Gorman,

You are still totally confused. The slope of a least squares line minimises the squared differences between the fitted line and the Y-values. It also passes through the X, Y grand means. It does not and cannot ‘choose’ where it goes between the CIs of the data points. I think your statistical ideas are all GUM-ed up.

If you specifically want to examine upper-ranges and lower ranges, you could use percentile regression, if you are worried about variation in the Xs as well as the Ys, you could use reduced Major Axis or Major Axis methods. However, for time-domain analysis X has no error.

Instead of ruminating about stats, you could grab some data, analyse them and talk about the outcome. I have recently completed an interesting study of climate trend and change at Tennant Creek (https://www.bomwatch.com.au/bureau-of-meterology/8-tennant-creek-nt/).

Yours sincerely,
Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
April 13, 2024 8:26 pm

You are still totally confused.

Not worth the time needed to type a reply.

Reply to  karlomonte
April 13, 2024 11:11 pm

Oh Dear Karlo,

What don’t you understand about least squares regression?

Tim’s theory only works if there are just two data-points each with an error bar.

Oh wait, you can’t get an error bar for single points, you need a sampling strategy, so each is an average with a valid SEM.

Ask Tim it’s his theory, maybe he can make them up.

(You could also check with Jim at https://statisticsbyjim.com/.)

Cheers,

Bill

Reply to  Bill Johnston
April 14, 2024 6:14 am

If you want to remain ignorant, no one is stopping you.

Reply to  Bill Johnston
April 14, 2024 6:21 am

No, my theory works for *ANY* number of measurement data points, each with a measurement uncertainty interval giving all the reasonably possible values that could be assigned to the data point. It just requires you to do multiple runs using all the possible values of each data point, both singularly and in all possible combinations.

You *will* wind up with a large range of possible linear regression trend lines and no way to determine which one is the “true” one!

Such is the world of measurement uncertainty!

Reply to  Tim Gorman
April 14, 2024 6:50 am

Bill denies that systematic uncertainty exists.

Reply to  karlomonte
April 14, 2024 1:40 pm

I have never claimed systematic uncertainty does not exist Karlo.

Reply to  Bill Johnston
April 14, 2024 3:28 pm

Yes you do, in effect.

Reply to  karlomonte
April 15, 2024 6:15 am

Yep.

Reply to  Tim Gorman
April 15, 2024 7:15 am

“Systematic bias is not uncertainty!” — Bill J

Reply to  Tim Gorman
April 14, 2024 1:36 pm

Dear Tim,

For all your cracking-on about this, you have not provided a single worked example showing why you are the only person in the universe who truly understands the uncertainty components in least-squares linear regression.

You are also the only person in the world who fully understands all the ins and outs of taking weather observations even though you have never made any. Mr Funny man, the expert!

To reiterate: The slope of a least squares line minimises the squared differences between the fitted line and the Y-values. It also passes through the X, Y grand means. It does not and cannot ‘choose’ where it goes based on the CIs of the data points.

Finally, confidence intervals about the line are all you need to establish where the ‘TRUE’ line lies.

If you disagree provide a worked example. If you can’t do that but need someone to talk to, then talk to Karlo.

All the very best,

Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
April 14, 2024 4:54 pm

I have changed one of your statements slightly. Think about it and how you get a regression space when analyzing all the possible values that each Y value could have.

The slope of a least squares line minimises the squared differences between the fitted line and the ±Y-values

What this means is that your data looks like:

Y1±u, Y2±u, Y3±u, …, Yn±u

You can not simply throw away the fact that each data point can have different values defined by the uncertainty. You can’t just input Y1, Y2, Y3, …, Yn and say you’re done.

I can’t post graphs or I would post one.

You can do it simply in a spreadsheet yourself. Pick 30 monthly temps and use ±1°C values in different combinations. Of course the end points will probably have the biggest impact. That is where I would start.

Here are some examples, where uncertainty is ±1

1 2 3 4 5 6 7 8 9
2 2 3 4 5 6 7 8 9
0 2 3 4 5 6 7 8 9
1 3 3 4 5 6 7 8 9
1 1 3 4 5 6 7 8 9

1 2 3 4 5 6 7 8 10
1 2 3 4 5 6 7 8 8
1 2 3 4 5 6 7 9 10
1 2 3 4 5 6 7 7 8

2 3 4 4 5 6 7 8 9
0 1 2 4 5 6 7 8 9
1 2 3 4 5 6 8 9 10
1 2 3 4 5 6 6 7 8

These will all give you different regression lines and all are possible because of the uncertainty in the data values.
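
A sketch of the spreadsheet exercise described above, taking x as 1–9 and using numpy’s polyfit in place of a spreadsheet trend function (the rows are a selection of the ones listed in the comment):

# Fit a straight line to several of the perturbed series listed above; each perturbed
# series (points shifted within an assumed +/-1 uncertainty) yields a different slope.
import numpy as np

x = np.arange(1, 10)
rows = [
    [1, 2, 3, 4, 5, 6, 7, 8, 9],    # stated values: slope exactly 1.000
    [2, 2, 3, 4, 5, 6, 7, 8, 9],    # first point +1: slope ~0.933
    [0, 2, 3, 4, 5, 6, 7, 8, 9],    # first point -1: slope ~1.067
    [1, 2, 3, 4, 5, 6, 7, 8, 10],   # last point +1:  slope ~1.067
    [1, 2, 3, 4, 5, 6, 7, 8, 8],    # last point -1:  slope ~0.933
    [2, 3, 4, 4, 5, 6, 7, 8, 9],    # first three shifted up
    [0, 1, 2, 4, 5, 6, 7, 8, 9],    # first three shifted down
]

for y in rows:
    slope, intercept = np.polyfit(x, y, 1)
    print(f"{y} -> slope {slope:.3f}")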

Reply to  Jim Gorman
April 15, 2024 2:37 am

Dear oh dear Jim,

Fact is, if you want to regress on the upper OR lower CIs of the input (y) data, they become the ‘means’ in the model (you could call them “upper” or “lower” and test it for yourself).

Remember tho that the slope of a least squares line minimises the squared differences between the fitted line and the Y-values. It also passes through the X, Y grand means. It does not and cannot ‘choose’ where it goes based on the CIs of the data points.

I think, but I don’t know for sure [because I can’t be bothered chasing your rabbit down your GUMed-up drain] that resulting regressions would be parallel and approximate the 95% CI of regression through the means.

As it is uniquely your hypothesis, you could do that all by yourself, without me holding your little hand. Then talk to Karlo. After all, Karlo knows everything, and if it does not work out he can call you a liar, which gets me off the hook.

However, if you want to regress on a randomly selected bunch of EITHER upper or lower CIs (which despite all the crap you go-on with is easily done), your analysis would make no sense at all! Which is precisely what you seem to have found!

So good luck with your unique take on statistics 1.01. Why don’t you busy yourself by writing a book? You could ask Karlo to be co-author; best-seller I’d say.

All the very best,

Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
April 15, 2024 3:22 am

Another rant?

You could ask Karlo to be co-author; best-seller I’d say.

Yer obsessed, johnson, get help.

old cocky
Reply to  Bill Johnston
April 15, 2024 4:27 am

I don’t know that any of you blokes have done much sensitivity analysis 🙁

Reply to  old cocky
April 15, 2024 5:48 am

Yep, all data is perfect and 100% accurate.

old cocky
Reply to  Jim Gorman
April 15, 2024 2:59 pm

It’s also worth discarding the largest residuals and rerunning the regression, amongst many other techniques.

old cocky
Reply to  old cocky
April 15, 2024 3:29 pm

Being downvoted for a totally unexceptionable statement always intrigues me.
If it’s wrong, it’s much better to know why it’s wrong.

Reply to  old cocky
April 15, 2024 3:43 pm

The trendologists don’t like you, oc, nothing more than this.

old cocky
Reply to  karlomonte
April 15, 2024 3:50 pm

The funny thing is that my longest “arguments” have been with Tim 🙂

Reply to  Bill Johnston
April 15, 2024 5:43 am

It does not and cannot ‘choose’ where it goes based on the CIs of the data points.

That is the whole point I’ve been trying to make with you.

You have to do the sensitivity analysis yourself.

It illustrates very well that statisticians are not trained at all in handling data that have what you call CIs. If they were, software would have been developed that allows inputting the uncertainty intervals of each data point.

Reply to  Jim Gorman
April 15, 2024 6:06 am

His statement above is quite revealing:

“Systematic bias is not uncertainty!”

In other words, systematic bias cannot be unknown (two negatives, yes).

In other words, he really has no knowledge of basic uncertainty concepts and metrology.

Reply to  Bill Johnston
April 14, 2024 8:42 pm

 If you can’t do that but need someone to talk to, then talk to Karlo.

Yer just another trendology a$$hat, johnson.

Reply to  karlomonte
April 15, 2024 2:55 am

I’m sure you two could have some long-winded but fruitless discussions.

Reply to  karlomonte
April 15, 2024 3:16 am

You lack empathy Karlo!

Reply to  Bill Johnston
April 15, 2024 3:23 am

???

Reply to  Bill Johnston
April 15, 2024 6:29 am

“For all your cracking-on about this, you have not provided a single worked example showing why you are the only person in the universe who truly understands the uncertainty components in least-squares linear regression.”

Example after example exists in Taylor, Bevington, and Possolo.

My guess is that you have never studied any of their texts or papers.

“To reiterate: The slope of a least squares line minimises the squared differences between the fitted line and the Y-values. It also passes through the X, Y grand means. It does not and cannot ‘choose’ where it goes based on the CIs of the data points.”

Taylor covers this in his Chapter 8.

Taylor: “Let us now return to the question of finding the best straight line y = A + Bx to fit a set of measured points (x1,y1), …, (xN, yN). To simplify our discussion we will suppose that, although our measurements of y suffer appreciable uncertainty, the uncertainty in our measurement of x is negligible. This assumption is often reasonable, because the uncertainties in one variable often are much larger than those in the other, which we can then safely ignore. We will further assume that the uncertainties in y all have the same magnitude. (This assumption is also reasonable in many experiments, but if the uncertainties are different, then our analysis can be generalized to weight the measurements appropriately; see Problem 8.9).” (bolding mine, tpg)

See the last sentence and study its implications. *YOU* want to assume that all measurement uncertainties are equal, random, and Gaussian so they all cancel. But that is *NOT* reality. Different stations have different microclimates, different calibration drift, and different data variance. Their measurements MUST be weighted appropriately before combining them.

What weighting scheme do *YOU* use?
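
As a sketch of the weighting Taylor’s passage points to (per-point uncertainties invented for illustration; note that numpy’s polyfit expects weights of 1/sigma applied to the y-values, not 1/sigma²):

import numpy as np

x     = np.arange(10, dtype=float)
y     = np.array([0.1, 1.2, 1.9, 3.2, 3.8, 5.1, 6.2, 6.8, 8.3, 8.9])
sigma = np.array([0.2, 0.2, 0.2, 0.2, 0.2, 1.5, 1.5, 1.5, 1.5, 1.5])  # hypothetical u(y)

slope_unweighted = np.polyfit(x, y, 1)[0]
slope_weighted   = np.polyfit(x, y, 1, w=1.0 / sigma)[0]   # points with larger u count less

print(round(slope_unweighted, 3), round(slope_weighted, 3))
# The slopes differ; treating every point as equally certain is only justified when
# the uncertainties really are equal, which is the assumption Taylor flags.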

Reply to  Bill Johnston
April 15, 2024 7:17 am

Finally, confidence intervals about the line are all you need to establish where the ‘TRUE’ line lies.

Another indication you have no understanding of these subjects — true values are unknowable.

Reply to  Bill Johnston
April 14, 2024 6:17 am

Bill

All you have done here is confirm you only use the stated values to develop your trend line while ignoring the measurement uncertainty of the data.

The issue isn’t the X’s, it’s the possible range of values of the Y’s.

To do a full analysis would require you to do possibly thousands of iterations. Do a run inputting all the possible y-values for x1 while holding all the other points equal to their stated value. Then do the same for x2 … xn. Then start over doing combinations, rerun the entire data set while varying x1 and x2 through all their possible combination while holding the others to their stated values. Then run all the other possible combinations x2/x3, x3/x4, etc while holding all the other values to their stated values.

Then run all the possible combinations of x1/x2/x3 with the other held constant. Then x2/x3/x4, etc.

What you will develop is a set of trend lines that incorporate all of the measurement uncertainties. And the final result? YOU STILL WON’T KNOW WHICH ONE OF THE TREND LINES IS THE “TRUE” ONE!

Such is the nature of measurement uncertainty.

Unless the Y-delta between consecutive points is outside the measurement uncertainty intervals you simply won’t know, CAN *NOT* know, what is actually happening. It all becomes part of the GREAT UNKNOWN.

The uncertainty interval defines *all* the reasonably possible Y-values attributable to the measurand. Which one is the “true” value? It’s part of the GREAT UNKNOWN.

I’ve analyzed LOTS of temperature data for lots of stations over the past three years. I’ve followed standard metrology protocols for propagating measurement uncertainty, for using significant figures and rounding, and for calculating averages and VARIANCE of the data sets.

Bottom line: The measurement uncertainty and variances of the data lead to not even being certain of the UNITS digit in the averages, let alone the hundredths digit! And you know what? That means the answer to what is going on is UNKNOWN. There is absolutely nothing wrong with stating the fact that you simply can’t tell what is happening based on the available data.

Just using stated values as 100% accurate and pretending that the standard deviation of the sample means is a substitute for knowing the accuracy of the calculated average is only fooling yourself as well as others.

There *is* a reason why the accepted experts in measurement uncertainty like Taylor, Bevington, and Possolo always, ALWAYS, make the assumption that there is no systematic bias in measurements and that the measurements are always of the same thing in the same environment in all of their texts and examples. E.g. Possolo in TN1900.

Reply to  Tim Gorman
April 14, 2024 6:52 am

He’s back to repeating his lie that the GUM tells you to calculate the “standard deviation” of a single number.

Reply to  karlomonte
April 14, 2024 10:56 am

I suspect they have no clue what JCGM 100:2008 4.1.4 means when discussing Xᵢ,ₖ!

Reply to  Jim Gorman
April 14, 2024 11:50 am

He rejects the GUM entirely as political dictatorship, but acknowledges that a Type A is standard statistics. What gets him up on his high-horse are Type B uncertainties—his stats bias told him that calculating a Type B is in effect the same as calculating the standard deviation for a single number.

Which is a farce, of course, but he rants about this over and over.

He really doesn’t grasp the concept of uncertainty at all. That it might be possible to consider the magnitude of the limit of knowledge for a single measurement in a time series is completely outside of the framework he operates in.

Reply to  karlomonte
April 14, 2024 12:25 pm

Every document I read, GUM, EURACHEM / CITAC Guide CG, NIST, etc., all discuss these uncertainty types. It is an internationally accepted practice with well defined methodology.

Reply to  Jim Gorman
April 15, 2024 3:00 am

Good for you. But have you analysed any data to back your case (whatever it is)?

Reply to  Bill Johnston
April 15, 2024 4:35 am

Why yes I have. Using the procedure used in NIST TN 1900.

Funny how so many sites come out with a confidence interval similar to what that document shows. From high tenths to low integers for monthly Tmax and Tmin both.

Tavg has a measurement uncertainty so high that it isn’t funny. Take 70 and 50 as representative Tmax and Tmin:

x̅ = 60
s² = 200
s = 14.1
sx̄ = s/√n = 10
expanded uncertainty = 127 @ 95% with DOF = 1 and k = 12.7
expanded uncertainty = 19.6 @ k = 1.96

In interval form, the shortest 95% coverage interval is t̄ ± ks∕√n = (-67°F, 187°F).

Or, using the lower k factor, the shortest 95% coverage interval is t̄ ± ks∕√n = (40°F, 80°F).

That makes Tavg indefensible as a valid metric.

Using Kelvin gives

s² = 60.5
s = 7.8
sx̄ = s/√N = 5.5
expanded uncertainty = 69.9 @ 95% with DOF = 1 and k = 12.7
expanded uncertainty = 10.8 @ k = 1.96

The only other option is to ignore measurement uncertainty entirely, which is what climate science does.

There is one other way to calculate uncertainty.

u/60 = √[(1.8/50)² + (1.8/70)²]

u = 60 ∙ 0.044 ≈ 2.7

Not good either.
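
For anyone who wants to check the arithmetic above, a short sketch of the same TN 1900-style calculation (not NIST’s code); with n = 2 there is one degree of freedom, so the 95% coverage factor is the 97.5th percentile of Student’s t, about 12.7:

import math
from scipy import stats

obs  = [70.0, 50.0]                          # representative Tmax and Tmin, deg F
n    = len(obs)
mean = sum(obs) / n                          # 60.0
s    = math.sqrt(sum((v - mean) ** 2 for v in obs) / (n - 1))   # 14.1
sem  = s / math.sqrt(n)                      # 10.0

k = stats.t.ppf(0.975, df=n - 1)             # 12.71
U = k * sem                                  # ~127
print(f"({mean - U:.0f} F, {mean + U:.0f} F)")   # about (-67 F, 187 F)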

Reply to  karlomonte
April 15, 2024 2:59 am

No I said it was re-branded mark-it-up as they go along woke-sh*t. Have you done your quota of measurands today?

Reply to  Bill Johnston
April 15, 2024 3:24 am

What are you ranting about, clown?

Reply to  Tim Gorman
April 14, 2024 2:03 pm

Instead of blowing-off about your pet-theory Tim, why don’t you do a basic stats course, or provide a worked example.

Uncertainty CAN occur in X.

Systematic bias is NOT measurement uncertainty.

No one claims a single value is 100% accurate.

It is also not a LIE that The slope of a least squares line minimises the squared differences between the fitted line and the Y-values. It also passes through the X, Y grand means.

Do you still not understand what the standard deviation is? Standard error?, Confidence intervals? The spread factor k …. woops, the t-value? and on and on.

You say: To do a full analysis would require you to do possibly thousands of iterations. Have you not heard of bootstrapping – several thousand iterations in a few seconds?

Have you done any measurands today? Can you do stats and do GUM at the same time?

Can you provide a worked example?

All the best,

Bill

Reply to  Bill Johnston
April 14, 2024 3:29 pm

The Bill Clown Show is back.

Nick Stokes
Reply to  Bill Johnston
April 14, 2024 4:03 pm

Bill is of course right and the peanut gallery is wrong. Linear regression does a least squares fit, leaving residuals which are a bunch of numbers with zero mean and a standard deviation you can calculate. The uncertainty of the trend is just that standard deviation, as propagated in the normal way through the trend calculation.
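
For readers following the exchange, here is a sketch of the textbook calculation Nick appears to mean: the residual standard deviation propagated into a standard error for the fitted slope, SE(b) = s_resid / √Σ(x − x̄)². The data are invented, and note this quantity reflects only the scatter about the fit, which is exactly what the other commenters dispute.

import numpy as np

rng = np.random.default_rng(0)
x = np.arange(20, dtype=float)
y = 0.5 * x + rng.normal(0.0, 1.0, size=x.size)      # invented series: trend 0.5 plus noise

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
s_resid  = np.sqrt(np.sum(residuals**2) / (x.size - 2))       # n - 2 fitted parameters
se_slope = s_resid / np.sqrt(np.sum((x - x.mean())**2))

print(f"slope = {slope:.3f} +/- {se_slope:.3f} (1-sigma, from residual scatter only)")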

ducky2
Reply to  Nick Stokes
April 14, 2024 4:38 pm

The standard deviation of the slope is not the same as the uncertainty of the individual data points. Bill isn’t even taking into account the uncertainty of the thermometers.

Nick Stokes
Reply to  ducky2
April 14, 2024 4:55 pm

Of course. I said “as propagated in the normal way through the trend calculation”

Trend is a weighted linear sum of the data. The uncertainty is calculated accordingly.

The uncertainty of thermometers is part of the observed variability, which goes into this calculation.

Reply to  Nick Stokes
April 14, 2024 6:03 pm

Trend is a weighted linear sum of the data. The uncertainty is calculated accordingly.

Show us the methodology used to calculate the uncertainty. “Calculated accordingly” is a word salad with no meaning whatsoever.

The uncertainty of thermometers is part of the observed variability, which goes into this calculation.

This is so ridiculous it isn’t funny. If it is built into the observed values, how do you recognize what it is? It sounds like you are trying to dismiss the need for determining measurement uncertainty because it is built in.

Heck, we can do away with NIST and give the funding to climate science. If uncertainty is built into the variability in observations, why worry about having all these requirements shown in the GUM or NIST TN’s for any measured item, right?

Nick Stokes
Reply to  Jim Gorman
April 14, 2024 7:46 pm

“Heck, we can do away with NIST and give the funding to climate science.”

No, leave it with NIST. Here is their E2, with Possolo calculating the mean of 22 days measured in May. His standard uncertainty of the mean is just σ/sqrt(22), where σ is the standard deviation of those days. Nothing else. Nothing about a separate term for instrumental error. It all adds in to the observed variability.

[image: excerpt from NIST TN 1900, Example E2]

Reply to  Nick Stokes
April 14, 2024 8:46 pm

Why isn’t the NIST number 50 milli-Kelvin, Gaslighter?

Reply to  Nick Stokes
April 15, 2024 6:12 am

What Nitpick Nick the Gaslighter failed to mention here:

The NIST uncertainty calculation was for a single station, and the instrumental uncertainty was assumed to be zero.

That no climatology trendologist uses the standard deviations of any of the averages upon averages they calculate. It is all thrown away and ignored.

Reply to  karlomonte
April 15, 2024 1:34 pm

Uncertainty is not skewed, but is expressed as +/-; meaning it is the same each side of the mean.

Trend is determined by the means. The larger the individual uncertainties (which do not need to be known), the larger the data spread, the wider the CIs around the line, and the less likely the trend coefficient is to be significant. Those are some of the reasons why least squares is such a robust method of data fitting.

Go read a book Carlo or do a course, or analyse some data for yourself.

Cheers,

Bill

Reply to  Bill Johnston
April 15, 2024 3:50 pm

You don’t even understand what the word means; whatever caused you to dredge up this bizarre comment: “Uncertainty is not skewed, but is expressed as +/-; meaning it is the same each side of the mean.”

An uncertainty interval most certainty can be asymmetric.

The world has passed you by.

Go read a book Carlo or do a course, or analyse some data for yourself.

GFY

Ask me if I care you took temperature measurements for James Cook in the 16th century.

Reply to  Nick Stokes
April 15, 2024 6:39 am

You didn’t read the whole example nor any of the text prior that explains much of this. Or maybe, you just did as usual and ignored it.

Nothing about a separate term for instrumental error. It all adds in to the observed variability.

You are correct about no term being used for instrumental error. However, error is a term that is no longer used. You might search for uncertainty rather than error next time.

Let’s see what Example 2 says about other uncertainty.

Assuming that the calibration uncertainty is negligible by comparison with the other uncertainty components, and that no other significant sources of uncertainty are in play, then the common end-point of several alternative analyses is a scaled and shifted Student’s t distribution as full characterization of the uncertainty associated with r.

Remember, this is an example. It is not meant to be all encompassing from a measurement uncertainty standpoint. They even go on to mention another method that gives an even larger value. Calibration uncertainty would certainly add.

The procedure developed by Frank Wilcoxon in 1945 produces an interval ranging from 23.6 ◦C to 27.6 ◦C (Wilcoxon, 1945; Hollander and Wolfe, 1999). The wider interval is the price one pays for no longer relying on any specific assumption about the distribution of the data.

You might want to make up an uncertainty budget. And remember, each item although small adds to the total uncertainty. Things like:

  • shading
  • wind speed
  • shelter reflectivity
  • humidity
  • nearby sources of heat
  • land use changes
  • ground cover under screen

Lastly, tell us how this monthly uncertainty is propagated into the monthly anomaly.
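
A minimal sketch of how a budget like the one above would be combined under the GUM (in quadrature) and then carried into a monthly anomaly; every component value here is an invented placeholder, not a measured figure:

import math

budget = {                              # hypothetical standard uncertainties, deg C
    "instrument/calibration": 0.3,
    "shading and siting":     0.2,
    "shelter reflectivity":   0.1,
    "time of observation":    0.2,
}

u_month = math.sqrt(sum(u**2 for u in budget.values()))   # combined standard uncertainty
u_base  = 0.15                                            # hypothetical baseline uncertainty
u_anom  = math.sqrt(u_month**2 + u_base**2)               # anomaly = month - baseline

print(f"u(month) = {u_month:.2f} C, u(anomaly) = {u_anom:.2f} C")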

Reply to  Jim Gorman
April 15, 2024 2:01 pm

But you have no experience in observing the weather. Aside from pictures, you have probably not even seen a meteorological thermometer, let alone thought about instrument uncertainty versus error – error is definable as a factor whether you like it or not.

Given time as the predictor, the net effect of all of what you list above potentially results in variability in Y, which is estimated by least squares regression, which minimises the squared differences between the fitted line and the Y-values. The trend-line also passes through the X, Y grand means.

While you waffle-on interminably, you have not supported your claims by undertaking analysis yourself.

Cheers,

Dr Bill

Reply to  Bill Johnston
April 15, 2024 3:52 pm

But you have no experience in observing the weather. 

Not this tripe again, please Lord have mercy.

Aside from pictures, you have probably not even seen a meteorological thermometer let alone thought about instrument uncertainty verses error – error is definable as a factor whether you like it or not.

And you know this, how exactly?

What a liar.

While you waffle-on interminably, you have not supported your claims by undertaking analysis yourself.

That is a mighty big hat size you have there, johnson.

Reply to  Nick Stokes
April 15, 2024 6:58 am

“No, leave it with NIST. Here is their E2, with Possolo calculating the mean of 22 days measured in May.”

You didn’t even bother to read the assumptions Possolo had to make in this example.

Just a small list:

  1. No systematic uncertainty
  2. the same measurand each day
  3. no measurement uncertainty for any reading

Thus his 22 data points become multiple measurements of the same thing in the same environment using the same device. It allowed the assumption that the average was the “true value”, thus the standard deviation of the sample means, incorrectly called the “standard error of the mean”, becomes the measurement uncertainty of the mean.

In essence the example does exactly what climate science does: all measurement uncertainty is random, Gaussian, and cancels.

That simply isn’t reality. But then climate science isn’t either!

Reply to  Tim Gorman
April 15, 2024 2:33 pm

Dear Tim,

You seem to have forgotten that the SE (sigma^2) is not the same as the SD (sigma), also the mean of the 22 data is the ‘TRUE MEAN’ by definition.

If, as I recall, he is estimating the monthly mean from the 22 samples, it is appropriate to use sample statistics. However, if he is estimating just for the 22 points he should use population statistics. For small sample sizes (<about 60) the difference is important.

In Excel these are referred to as VAR or VARP, for variance, or STDEV and STDEVP for SD. Remember I worked through this example for you ages ago. The confusing thing is that he used the 97.5% t-value from the 1-sided table to calculate the 2-sided 95% CI (see Nick’s excerpt of the paper). I’m truly amazed, but not surprised that Karlo did not pick-up on this. Need I say again: see Nick’s excerpt of the paper.

Aside from that, all the goings-on at the GUM are exactly the same calculations as you probably ignored in Stats 1.01, but re-branded by GUM into woke-speak.

The rest is all noise inside your head.

Yours sincerely,

Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
April 15, 2024 3:55 pm

, but re-branded by GUM into woke-speak.

You are a clown who doesn’t know of what you yap, but you cram it down others’ throats as if you do regardless.

The rest is all noise inside your head.

The noise is from your bizarre and insane ideas about metrology.

Yours sincerely,

Liar.

Reply to  Bill Johnston
April 16, 2024 3:07 am

I got that rong and you did not even pick me up on it.

Sigma^2 is the variance from which is derived the SD which is sigma. Using the GUM formula (and the t-value, which in woke-speak is their ‘coverage factor (k)’), you can work out the rest. Usually called the standard error derived CI, now in woke-speak it is Uncertainty Type A.

Type B uncertainty is determined at a seance involving approved scientists, data-holders, policy-makers, WWF and indigenous elders.

b.

Reply to  Bill Johnston
April 16, 2024 5:32 am

You’re a liar, johnson,

The GUM predates the racial woke insanity by at least two decades.

Reply to  karlomonte
April 16, 2024 12:16 pm

This Guide establishes general rules for evaluating and expressing uncertainty in measurement that are intended to be applicable to a broad spectrum of measurements. The basis of the Guide is Recommendation 1 (CI-1981) of the Comité International des Poids et Mesures (CIPM) and Recommendation INC-1 (1980) of the Working Group on the Statement of Uncertainties. The Working Group was convened by the Bureau International des Poids et Mesures (BIPM) in response to a request of the CIPM. The ClPM Recommendation is the only recommendation concerning the expression of uncertainty in measurement adopted by an intergovernmental organization.

This Guide was prepared by a joint working group consisting of experts nominated by the BIPM, the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO), and the International Organization of Legal Metrology (OIML).

Reply to  Jim Gorman
April 16, 2024 12:56 pm

Four decades…

Reply to  karlomonte
April 16, 2024 1:22 pm

Time flies.

Reply to  Bill Johnston
April 18, 2024 3:52 pm

“You seem to have forgotten that the SE (sigma^2) is not the same as the SD (sigma), also the mean of the 22 data is the ‘TRUE MEAN’ by definition.”

What in Pete’s name are you talking about? The SE (standard error) is *NOT* (sigma^2). It is the sqrt(sigma^2), i.e. the square root of the variance of the sample means! The variance is a metric for the range of the sample means.

“Aside from that, all the goings-on at the GUM are exactly the same calculations as you probably ignored in Stats 1.01, but re-branded by GUM into woke-speak.”

NO KIDDING! Except the GUM is working with the MEASUREMENT UNCERTAINTY, not just the stated values. You have totally missed the assumptions Possolo made. In essence, they are the same ones you make – no measurement uncertainty, each stated value is 100% accurate and the uncertainty of the average is, therefore, the variance of the data only. EXCEPT the fact that Possolo expanded the variance because he did not assume a Gaussian distribution.

You are trying to teach an old dog how to suck eggs. The problem is that you don’t know how!

Reply to  Tim Gorman
April 18, 2024 11:52 pm

What in Pete’s name are you talking about?

You say, “The SE (standard error) is ….. the sqrt(sigma^2), i.e. the square root of the variance of the sample means!”

I’m getting confused as you are, and I usually use a stats package not EXCEL, and like you, I don’t have to work everything out longhand.

Sigma^2 is the variance (VAR or VARP in Excel), which is the sum of the squared differences from the mean, divided by N-1 in the case of a sample from a population, and by N if it is the whole population.

SD = Sigma = (Sqrt(Sigma^2)) (STDEV or STDEVP in Excel)

SE or SEM is Sigma/SQRT(N) for a sample from a population

or

Sigma/SQRT(N+1) if it applies to the whole population

95% CI = t(.05) * SEM

Because the divisors are N or N-1, differences converge as N increases.

Somebody referred to the standard deviation of a regression coefficient, which except for EXCEL is usually stated as the SE of the regression coefficient.

No one said anywhere that data are 100% accurate. That is an assumption that you are projecting, not an assumption that I have ever stated. And because I have never claimed data are 100% accurate, does not imply that it is true.

All the best,

Bill Johnston
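
For reference, the textbook versions of the quantities defined in the comment above, in numpy terms (made-up data; ddof=1 gives the N−1 sample divisor, ddof=0 the N population divisor):

import numpy as np

data = np.array([21.3, 22.1, 20.8, 23.0, 21.7])   # made-up observations

var_sample = np.var(data, ddof=1)    # sum of squared deviations / (N - 1)
var_pop    = np.var(data, ddof=0)    # sum of squared deviations / N
sd_sample  = np.std(data, ddof=1)
sem        = sd_sample / np.sqrt(data.size)       # standard error of the mean

print(var_sample, var_pop, sd_sample, sem)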

Reply to  Nick Stokes
April 15, 2024 1:38 pm

All been pointed out to Jim before, Nick. But he goes on … and on.

Cheers,

Bill

Reply to  Bill Johnston
April 15, 2024 3:55 pm

Sucking up to Nitpick Nick the Gaslighter, how appropriate for you.

Reply to  Jim Gorman
April 14, 2024 8:39 pm

Yep, Stokes is gaslighting and arm waving at the same time.

Reply to  Nick Stokes
April 15, 2024 6:50 am

“Trend is a weighted linear sum of the data. The uncertainty is calculated accordingly.”

NO, it isn’t. A residual is a single number calculated using the single value, i.e. the stated value of the measurement, compared to the trend line.

From your earlier message: “leaving residuals which are a bunch of numbers with zero mean and a standard deviation you can calculate.”

The problem is that the trend line should be re-calculated for each possible combination of data, not just calculated once using the stated values of the measurement and then the residual distribution at each point calculated based on subtracting the possible values of y from the fixed trend line.

If you can’t determine a “true value” for slope of the line between each individual segment, y_i to y_(i+1) then you can’t know a “true value” for the trend line either!

Reply to  ducky2
April 15, 2024 1:41 pm

As far as I know only Excel mentions “standard deviation of the slope”. Most or all other Stats packages refer to it as the standard error. Even my 1960s Stats book says in regression analysis the two terms are synonymous.

b.

Reply to  Nick Stokes
April 14, 2024 6:15 pm

If I add “1” to each of the first 3 data points and subtract “1” from the last 3 will I get the same regression line? How about the residuals, will they stay the same?

What if I add 1’to the first half of the data and subtract 1 from all of the last half? Will I get the same regression line? How about the residuals, will they stay the same?

Reply to  Jim Gorman
April 14, 2024 7:15 pm

Where does the GUM recommend that???

Absolutely stoopid meaningless question!!

b

Reply to  Bill Johnston
April 14, 2024 8:47 pm

Absolutely stoopid meaningless question!!

This from the dood with the big red bulbous nose.

Reply to  Bill Johnston
April 15, 2024 7:00 am

The GUM recommends propagating the uncertainty. It doesn’t recommend assuming all uncertainty is random, Gaussian, and cancels.

Nick Stokes
Reply to  Jim Gorman
April 14, 2024 7:35 pm

This is elementary regression stuff and has been around for a century+. Go and learn some properly. It has nothing to do with climate science.

Reply to  Nick Stokes
April 14, 2024 8:46 pm

Just stop with the irony already, Nitpick.

Reply to  Nick Stokes
April 15, 2024 7:08 am

It is elementary regression stuff that does *NOT* include measurement uncertainty!

We were doing this with capital expenditure projects clear back in the early 70’s. We called it sensitivity studies. We would change the y values for various factors and see what it did to the trend lines for ROI. If labor costs were uncertain we would calculate a new trend line for the lowest possible value and the highest possible value as part of the ROI determination.

We didn’t just recalculate the residuals to the original trend line. We determined a NEW TREND LINE based on the uncertainty of the component factors.

It was *NOT* unusual to see the trend line for the ROI go from positive to negative or vice versa. The uncertainty *does* impact the trend line, not just the residuals.

Reply to  Tim Gorman
April 15, 2024 2:37 pm

Artificially changing values does not mean anything for natural data that is independent. Did you check assumptions for linear regression?

b

Reply to  Bill Johnston
April 15, 2024 3:57 pm

More noise.

Reply to  Nick Stokes
April 15, 2024 7:48 am

I learned how to do sensitivity analysis clear back in the early 70’s doing ROI calculations on capital expenditure projects.

The residuals are *NOT* the same thing as the propagated measurement uncertainty. When you calculate ONE trend line from the stated values and then compare that trend line to the possible data values you are still only measuring the fit of the data to that one trend line.

For measurement uncertainty you have to calculate MULTIPLE trend lines based on the possible data values and then compare the data values to that recalculated trend line.

Suppose labor cost is $100 per hour today and I plot a trend line of ROI based on that value. Then I change the value to $125 per hour four years from now and compare that value to the original ROI trend line calculated using $100 per hour.

What is that residual going to do? Go up probably. So what? What does *that* tell you? It *should* tell you that your trend line now doesn’t match as well and you need to redo the calculation of the trend line!

I.E. YOU WIND UP WITH MULTIPLE TREND LINES BASED ON THE UNCERTAINTY.

It’s no different with measurement uncertainty. You *need* to look at all the possible trend lines that the uncertainty could cause! The issue is that those possible trend lines form a picture of the GREAT UNKNOWN.

Climate science wants us to believe that they can *know* differences down to the hundredths digit with perfect accuracy. Just by assuming everything is calculated with 100% accuracy. What a freaking joke!

Reply to  Nick Stokes
April 15, 2024 9:32 am

You refuse to understand because it would upset your apple cart.

I have never said a linear regression of stated values can’t have residuals nor a standard error associated with those residuals.

What you and most other statisticians will not admit is that those stated values have uncertainty that end up making different trend lines. In essence the trend line has an uncertainty value also.

Reply to  Jim Gorman
April 16, 2024 6:07 am

As for the data, its error bands and its systematic errors: in the evaluations under discussion, those data errors add almost nothing to the standard error of the resulting trends.

Reply to  Nick Stokes
April 14, 2024 8:38 pm

Ah yes, Nick Stokes, trendologist gaslighter extraordinaire and uncertainty crank — who doesn’t understand that error is not uncertainty and that uncertainty increases.

Reply to  Nick Stokes
April 14, 2024 8:45 pm

Stokes the gaslighter: “Bill is of course right and the peanut gallery is wrong.”

Johnston the clown: “Systematic bias is NOT measurement uncertainty.”

What a joke, both of you know nothing about which you yap.

Reply to  Nick Stokes
April 15, 2024 6:38 am

“Bill is of course right and the peanut gallery is wrong”

Nick, you simply don’t have a clue. As I just posted to Bill from Taylor’s textbook:

“… but if the uncertainties are different, then our analysis can be generalized to weight the measurements appropriately; see Problem 8.9.” (bolding mine, tpg)

Temperatures have different variances in different hemispheres. Temperatures have different variances in different months.

Variance is a metric for the accuracy of the data, i.e. its measurement uncertainty. Yet NO ONE in climate science ever bothers to weight the measurements appropriately.

“leaving residuals which are a bunch of numbers with zero mean and a standard deviation you can calculate.”

The issue isn’t the residuals. The issue is the measurement uncertainty of the data points used to calculate those residuals. Your assertion is a non sequitur. It only shows that you have no idea what you are even speaking on.

Reply to  Tim Gorman
April 15, 2024 2:56 pm

Ha, haa, haa … you don’t even understand what you read Tim. You say “but if the uncertainties are different, then our analysis can be generalized to weight the measurements appropriately“.

For the case where the uncertainties are different it is appropriate to use weighted linear regression, the weights being something that differentiates the differences between data classes.

If you had read about linear regression assumptions, you would have seen that the issue IS residuals. It is they that indicate lack-of-fit: due to outliers, unequal variance, lack of independence and normality, linearity etc.

You really do need to brush-up Tim. After all you only have two feet to stick-in.

You are a turn-off and a time-waster, so please don’t reply.

Cheers,

Bill

Reply to  Bill Johnston
April 15, 2024 3:58 pm

You really do need to brush-up Tim. After all you only have two feet to stick-in.

You are a turn-off and a time-waster, so please don’t reply.

Can you swell your head any larger?

I think you can.

Reply to  Bill Johnston
April 14, 2024 4:33 pm

Uncertainty CAN occur in X.

Tim said:

  • The issue isn’t the X’s, it’s the possible range of values of the Y’s.

BJ – We are discussing temperature readings taken on identified days. Unless your perspective gives you an uncertainty on which days, weeks, months and years that readings were taken, there is no variation in X.

It is also not a LIE that the slope of a least squares line minimises the squared differences between the fitted line and the Y-values. It also passes through the X, Y grand means.

That is exactly true. But you get ONE regression line and equation only if all the data points have no uncertainty, i.e., are 100% accurate.

When you have uncertainty in your data, each data point can vary within its uncertainty interval. You need to do a sensitivity analysis to find all the regression lines that can occur as various combinations of the data points are moved to their maximum and minimum values using the largest ± values. You WILL get a different regression line for each combination.

When done and plotted, you will have a space that defines the uncertainty of where the actual regression line truly lies. Consider it a ±interval based upon the uncertainty interval of each piece of data. This has NOTHING to do with residuals. Although each sensitivity run will have a residual error, that is not what is being examined.

No statistics are needed to do this. The uncertainty is already determined; only the amount of uncertainty is at issue.
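
A minimal R sketch of the procedure being described, with made-up numbers (the eight-point series and the ±0.5 uncertainty are purely illustrative, not station data): push every y-value to one end of its ± interval, refit the least-squares line for each combination, and collect the envelope of slopes.

# Sketch of a sensitivity analysis on the regression under stated-value uncertainty.
x <- 0:7                                          # e.g. eight years, re-indexed
y <- c(0.3, -0.1, 0.4, 0.2, 0.7, 0.1, 0.6, 0.9)   # hypothetical stated values
u <- 0.5                                          # assumed +/- uncertainty on every y

signs  <- expand.grid(rep(list(c(-1, 1)), length(y)))   # all 2^8 up/down combinations
slopes <- apply(signs, 1, function(s) {
  yy <- y + u * as.numeric(s)                     # push each point to one end of its interval
  coef(lm(yy ~ x))[2]
})

range(slopes)        # envelope of possible slopes given the stated uncertainty
coef(lm(y ~ x))[2]   # slope fitted to the stated values alone, for comparison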

Reply to  Jim Gorman
April 14, 2024 7:57 pm

In the post referred to by Tim (https://wattsupwiththat.com/2024/04/12/hidden-behind-climate-policies-data-from-nonexistent-temperature-stations/#comment-3895971), I was referring to some general possibilities, including error in X.

A pile of Ys tells you nothing unless there is some other structure such as y by treatments. In regression structure is provided by x. If you have no x there is no slope and therefore Tim’s assertions go down the gurgler.

While error CAN occur in X, you are all trying to infer that the regression line which minimises squared differences in Y (over the range of x) simply floats around, when for a least-squares fit, it does not and cannot.

To reiterate: The slope of a least squares line minimises the squared differences between the fitted line and the Y-values. It also passes through the X, Y grand means. It does not and cannot ‘choose’ where it goes based on the CIs of individual data points.

Tim and Karlo’s pile-on over something as fundamental as OLS is ridiculous. Furthermore, as they have not provided any worked examples to support their cases it is fair to conclude they are all blather and no substance. Maybe they are too busy doing measurands and have been besotted by the GUM, by GUM!

All the talk about thermometers and other instruments, of which few here have any field experience, is irrelevant. Poor data (or the wrong hypothesis) show up as P values that are not significant, low R^2 values and wide 95% confidence intervals.

Here is an example.

Maximum sea surface temperature anomalies at Cape Ferguson (Qld) versus time. The relationship is significant (P = 0.03), but the R^2(adj) is only 0.01 – only 1% of SST variation is explained by DeciYear. Residuals are reasonably normal in their distribution. The relationship is in fact useless; there is no time effect that is not explained by a few outliers.

Call:
lm(formula = FerMxSSTAnom ~ DeciYear, data = Dataset)
 
Residuals:
   Min     1Q Median     3Q    Max
-3.7849 -0.4913 -0.0255 0.5331 3.8014
 
Coefficients:
             Estimate Std. Error t value Pr(>|t|) 
(Intercept) -24.874062 12.026597 -2.068  0.0394 *
DeciYear     0.012400  0.005996  2.068  0.0394 *

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ‘ 1
 
Residual standard error: 0.8652 on 325 degrees of freedom
 (28 observations deleted due to missingness)
Multiple R-squared: 0.01299,           Adjusted R-squared: 0.009953
F-statistic: 4.277 on 1 and 325 DF, p-value: 0.03941

All the best,

Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
April 14, 2024 8:48 pm

Shuttup.

Reply to  Bill Johnston
April 15, 2024 7:27 am

“A pile of Ys tells you nothing unless there is some other structure such as y by treatments. In regression structure is provided by x. If you have no x there is no slope and therefore Tim’s assertions go down the gurgler.”

This is total and utter malarky. Meaningless word salad!

The slope of the line is determined by the y-values, not the x values.

slope = dy/dx. An infinitely small x-interval versus the change in the y-value.

“You are all trying to infer that the regression line which minimises squared differences in Y (over the range of x) simply floats around, when for a least-squares fit, it does not and cannot.”

If y floats around due to uncertainty then the slope of the trend line does as well! Again, the slope of the line is based on delta-y with delta-x held constant. For a derivative, delta-x is infinitely small.

” Residuals are reasonably normal in their distribution.”

Once again, the issue isn’t the residuals to a fixed trend line calculated from the stated values. It is the value of the residuals when the trend line is different due to different y-values – and the measurement uncertainty of the data points determines the different y-values.

I posted this before. Here it is again. When determining ROI on a capital expenditure project we would start with a labor value at present. Because of uncertainty in future labor costs, i.e. similar to measurement uncertainty, we would run new trend lines for ROI based on that uncertainty in labor costs. We didn’t just recalculate the residuals between the various labor costs and the original trend line. The ROI on the capital expenditure could change from positive to negative – i.e. the trend line could actually change based on the uncertainty of the y-values. With a limited capital budget you would look for those projects where the trend line would stay positive even with the uncertainty in labor costs changing the actual slope of the trend line.

The residuals, i.e. the best-fit metric, would be recalculated based on each individual possible trend line. All the residuals tell you is how well the trend line fits the data points you are using.

If you were running a business and just depended on calculating one single trend line based on current stated values, you would probably wind up owing more on capital investments than you could afford.

Reply to  Tim Gorman
April 15, 2024 3:08 pm

You are making it up using your own words Tim.

I said “A pile of Ys tells you nothing unless there is some other structure such as y by treatments. In regression structure is provided by x. If you have no x there is no slope and therefore Tim’s assertions go down the gurgler”.

Contradicting yourself, you say “The slope of the line is determined by the y-values, not the x values” then that “slope = dy/dx”.

I can see an X there – without x “there is no slope and therefore Tim’s assertions go down the gurgler”.

How can you have a slope if there is no X? Take away dx in your dy/dx and you are left with a “pile of Ys (that) tells you nothing”.

Are you for real?? Does the GUM have a method of creating slope without an X?

Don’t reply to me, go talk to Karlo.

Bill

Reply to  Bill Johnston
April 15, 2024 3:59 pm

You are making it up using your own words Tim.

Don’t reply to me, go talk to Karlo.

And the Bill Kook-Clown Show continues…

Reply to  Bill Johnston
April 15, 2024 9:09 am

This is all fine, but it still does not assess uncertainty.

If uncertainty were considered, the five-number summary of the residuals would show:

-3.7849±u -0.4913±u -0.0255±u 0.5331±u 3.8014±u

Measurements have uncertainty, always. If you see a stated value without an uncertainty associated with it, the assumption is that the number is 100% accurate.

Show the numbers in your dataset. Do they have uncertainty values associated with them?

Reply to  Jim Gorman
April 14, 2024 8:07 pm

You are embarrassing yourself.

There is a 95% likelihood that the linear least-squares line will be inside the 95% confidence intervals for the line (look up “95% CI regression“). While you are at it, look-up linear regression assumptions.

Bill
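
For readers who want to see what that confidence interval looks like in practice, a minimal R sketch with made-up data (the point is the predict() and confint() calls, not the numbers):

# 95% confidence band for a fitted least-squares line (hypothetical data).
set.seed(42)
x <- 1:30
y <- 0.02 * x + rnorm(30, sd = 0.5)   # invented anomaly-like series

fit  <- lm(y ~ x)
band <- predict(fit, interval = "confidence", level = 0.95)
head(band)      # columns fit, lwr, upr: the 95% CI for the fitted line at each x
confint(fit)    # 95% CI for the intercept and slope themselves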

Reply to  Bill Johnston
April 14, 2024 8:48 pm

Um, no. You and Gaslighter Stokes have a monopoly in this crop, clown.

Reply to  Bill Johnston
April 15, 2024 8:15 am

“There is a 95% likelihood that the linear least-squares line will be inside the 95% confidence intervals for the line (look up “95% CI regression“). While you are at it, look-up linear regression assumptions.”

So what? That still doesn’t mean you know the true trend line. If the best-fit metric changes shouldn’t you also recalculate the trend line based on the different data points?

Reply to  Bill Johnston
April 15, 2024 1:20 pm

There is a 95% likelihood that the linear least-squares line will be inside the 95% confidence intervals for the line (look up “95% CI regression“). While you are at it, look-up linear regression assumptions.

That is only true when your values are essentially constants.

You won’t address this because it doesn’t fit your paradigm, but that doesn’t make it less true.

Here is y-data where all “y” values carry an uncertainty of ±1.

Data set 1 (stated values)
(0,1), (1,-2), (2,3), (3,-5), (4,7), (5,-1), (6,3), (7,6)
y = 0.714286x – 1, r² = 0.1847, r = 0.4298

Data set 2
(0,0), (1,-3), (2,4), (3,-6), (4,7), (5,0), (6,2), (7,7)
y = 0.892857x – 1.75, r² = 0.2264, r = 0.4758

Data set 3
(0,2), (1,-1), (2,3), (3,-6), (4,6), (5,-2), (6,4), (7,7)
y = 0.678571x – 0.75, r² = 0.1445, r = 0.3801

As you can see these are three different lines with different qualities. Yet they are made with values within the uncertainty intervals of the stated values. And remember there are many other combinations within these eight points.

Which one is the correct one?
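
For anyone who wants to check those three fits, a short R sketch that refits the data sets exactly as quoted above and prints the regression statistics:

# Refit the three data sets quoted above and print slope, intercept and r^2.
x  <- 0:7
y1 <- c(1, -2, 3, -5, 7, -1, 3, 6)   # stated values
y2 <- c(0, -3, 4, -6, 7,  0, 2, 7)   # one combination of +/-1 shifts
y3 <- c(2, -1, 3, -6, 6, -2, 4, 7)   # another combination of +/-1 shifts

for (y in list(y1, y2, y3)) {
  fit <- lm(y ~ x)
  cat(sprintf("slope = %.6f  intercept = %.4f  r^2 = %.4f\n",
              coef(fit)[2], coef(fit)[1], summary(fit)$r.squared))
}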

Reply to  Jim Gorman
April 15, 2024 3:23 pm

These are bullshit data plucked out of your head for what precisely?

There is also no X-data, so whatever trend you calculated is nonsense. (Is it trend per second, minute, year; or the SQRT of the age of your neighbour’s grandmother multiplied by something from the GUM?)

You forget that in order to calculate trend, you need X data. You should also check your residuals.

Your assertion that Y-values are also constant is wrong.

Off fishing …

Bill

Reply to  Bill Johnston
April 15, 2024 4:02 pm

These are bullshit data plucked out of your head for what precisely?

For you to whine about.

Off fishing …

From the deck of the HMS Endeavour, no doubt.

old cocky
Reply to  Bill Johnston
April 15, 2024 4:04 pm

There is also no X-data

They’re sets of data points expressed as bog standard (x, y) pairs.

Not how I’d do sensitivity analysis, but to each his or her own.

Reply to  old cocky
April 15, 2024 4:24 pm

Thanks Old Cocky,

I see that now. I thought they were y (+/- something).

Using Cook’s distance and influence plots, which are objective, one can do sensitivity analysis on T-data by progressively ignoring obvious outliers, which are usually due to low sample numbers/yr (missing data), or spikes.

All the best,

Bill

old cocky
Reply to  Bill Johnston
April 16, 2024 12:44 am

Using Cook’s distance

Crikey, that dates me.
It’s an old technique, but I don’t remember the formalisation being taught when I was at Uni.

Reply to  old cocky
April 16, 2024 2:30 am

Dear Old Cocky,

Me too…

You can have outliers that are influential on trend (which are detected by Cook’s distance) and you can have outliers that are just extremes – they appear to be high/low but they do not leverage trend and could be ‘real’.

Influence plots are bubble plots that highlight individual data points (as residuals from the fit) graphically in both dimensions, with ‘error magnitude’ calculated in SD units and Cook’s distance as the bubble size. Unless they are strongly influential on trend, a rule of thumb is that residuals within ±2 SD of the fit are probably OK.

Influence plots greatly assist interpretation of least-squares trend where, for no reason that is obvious, some influential data can be bad data and some datasets are simply crappy.

Having highlighted such data, additional tests are possible, including checking whether suspect data fall outside the 95% prediction intervals for those points when predicted by a second analysis from which they are omitted. Individual data (particularly averages) can also be investigated for problems such as missing observations/yr, spikes etc.
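
A rough base-R sketch of those diagnostics, on simulated data with one deliberately influential point (the data and the flagging thresholds are common rules of thumb for illustration, not anything specific to the scripts described here):

# Influence diagnostics in base R on simulated data with one planted outlier.
set.seed(7)
x <- 1:40
y <- 0.03 * x + rnorm(40, sd = 0.4)
y[40] <- y[40] + 4                     # an end-point outlier that leverages the trend

fit <- lm(y ~ x)
cd  <- cooks.distance(fit)             # influence of each point on the fitted line
rs  <- rstandard(fit)                  # standardised residuals (the +/- 2 SD rule of thumb)

which(cd > 4 / length(x))              # a commonly used flagging threshold for Cook's distance
which(abs(rs) > 2)                     # points more than ~2 SD from the fit

# A crude "influence plot": residual vs leverage, bubble size scaled by Cook's distance.
plot(hatvalues(fit), rs, cex = 1 + 5 * cd,
     xlab = "leverage", ylab = "standardised residual")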

Depending on the data, I have made a range of scripts (instruction lists) that I run in R, which extract a large number of data attributes/yr from high-frequency raw data and assist that process.

(Excel can only handle 1,048,576 rows of input data. Some of the base datasets that I download and work with are two to three times that (six-minute data = 10 samples/hour, 24 hours/day, 365.25 days/yr over 30 years). Some datasets are more frequent than six-minute.)

The flippant, childish commentary that heads my way from some at this site, ignores that checking and analysing data (which they don’t have a clue about) is a serious business that takes a lot of time, effort and number-crunching.

All the best,

Dr Bill Johnston
http://www.bomwatch.com.au

old cocky
Reply to  Bill Johnston
April 16, 2024 3:41 am

Excel can only handle 1,048,576 rows of input data, 

Excel and Open Office/Libre Office are okay for preliminary work.

I did a fair whack of performance analysis using Perl, but R seems to be a better bet. I’ve never used R, but I gather it’s excellent.

Reply to  old cocky
April 16, 2024 4:04 am

No, R is not straightforward, at least for me. It is stringent in its coding and not intuitive. Some call it primitive.

While I rely on R to summarise large data files (only because it evolved from S and I bought a copy of S from CSIRO (for almost A$2,000) way back when, to analyse my PhD data), it is a pain in the bum to use. I also use a desktop application for on-the-run stuff.

For analysis I rely on a GUI interface (Rcmdr) that does the background work. I use R and another package for drawing maps (ggplot2), and another program for graphs.

Cheers,

Bill

old cocky
Reply to  Bill Johnston
April 16, 2024 4:28 am

R is not straightforward, at least for me. It is stringent in its coding and not intuitive. Some call it primitive.

It can’t be worse than FORTRAN.

Being stringent sounds good. I really should look into it.

Nick Stokes
Reply to  old cocky
April 16, 2024 1:50 pm

I use R for all calculation and graphing nowadays. It is very good, fast, and not hard to program.

Reply to  Bill Johnston
April 16, 2024 5:33 am

The flippant, childish commentary that heads my way from some at this site, ignores that checking and analysing data (which they don’t have a clue about) is a serious business that takes a lot of time, effort and number-crunching.

Poor baby bill doesn’t get the respect his huge ego thinks he deserves.

Reply to  Bill Johnston
April 16, 2024 8:41 am

The flippant, childish commentary that heads my way from some at this site, ignores that checking and analysing data (which they don’t have a clue about) is a serious business that takes a lot of time, effort and number-crunching.

You don’t have a clue about my experience in data analysis. I was involved for 30 years in a Bell System telephone company. I engineered equipment for multiple multimillion-dollar central offices and operator systems. I did budgets for the same. These weren’t simple one-variable single stations. They were all interconnected, and what happened at one could affect a number of others.

This was business, not academia. Have you ever sat in a meeting having to justify budgets in the millions of dollars based on the data you have? Most of the senior execs had advanced business degrees and bullsh*tting wasn’t accepted. How well you predicted DIRECTLY affected your pay and promotions.

I HAVE been there and done that. Have you?

You probably don’t even know about queuing theory in a multivariate interconnected system. Read this for an introduction and then ask yourself where the input data to a sequential queuing system comes from. And remember, this had to be done over 24/7/365 periods.

https://en.m.wikipedia.org/wiki/Queueing_theory

Reply to  Jim Gorman
April 16, 2024 3:49 pm

I don’t care, and why are you so allergic to academia? I was never an academic, and just as your background is no business of mine, mine is no business of yours.

Despite all your experience and waddling-around like an expert or an overpaid administrator, you have yet to support anything you say with a worked example of your own.

Go and talk to Karlo.

All the best,

Bill

Reply to  Bill Johnston
April 16, 2024 4:06 pm

Despite all your experience and waddling-around like an expert or an overpaid administrator, you have yet to support anything you say with a worked example of your own.

Another lie. And pure projection.

Reply to  Bill Johnston
April 16, 2024 4:45 pm

I don’t care, and why are you so allergic to academia?

Because few academics in climate science have skin in the game. The models run hotter and hotter and nobody gets canned. If I had missed my projections as badly as they have over the last two decades, I would have been walking the streets looking for work.

Reply to  Jim Gorman
April 16, 2024 11:04 pm

So how does all this argy-bargy about the GUM, the circular arguments and associated crap get you closer to whatever objective you have in mind? If you don’t understand or accept the basic tenets underpinning least-squares regression, all you end up doing is running around in circles.

Least-squares is the most widely used statistical framework in every science discipline across the world. Trend analysis does not just apply to climate science, it applies to any investigation where an independent variable(s) (x .. x(n)) is hypothesised to have an effect on a dependent one (y). Your weather station probes would not work if they were not calibrated using least-squares methods.

Instead of going down this path have you thought about what it is you are trying to achieve and joined some dots on the path to getting there?

All the best,

Bill

Reply to  Bill Johnston
April 17, 2024 6:18 am

Instead of going down this path have you thought about what it is you are trying to achieve and joined some dots on the path to getting there?

What I am trying to accomplish is awareness that a singular “global” temperature is a farce. Measurements have uncertainty. The very first measurement you take has uncertainty. That uncertainty propagates through the following calculations and it grows; it doesn’t get smaller. Every “global temperature” should ALWAYS be quoted with an uncertainty range so people can make up their own minds about what is actually occurring.

Climate scientists making pronouncements of knowing a temperature to one-thousandth of a degree is not only unscientific but should be considered a joke. Only mathematicians and programmers can look into a crystal ball and tease out decimals that were never measured. Sure, programmers can fill up a floating-point number to the maximum with irrational numbers, but that doesn’t make them measured numbers. Sure, you can use an extra decimal or two for interim calculations so rounding error is minimized, but in the end, those decimals do not express the limits of the information provided by the actual measurements.

I learned in business that trends are only accurate for the data you already have. Extrapolating is fraught with danger. Simply extending a trend into the future is assuming that nothing changes. Everything DOES change, even nature. Unless you have a good handle on all the variables, how they interrelate and how they change, you will quickly learn about uncertainty. Climate science does not have this knowledge. We are in an interglacial; it WILL change into a glaciation at some point in time. Have you seen climate science or its models predict when this will occur? Why not? The models could surely be tweaked to determine the factors leading to glaciation, right?

Ultimately, I think climate science is at the stage of alchemy. Making prognostications at this stage is more guessing than science. Spending my tax dollars willy-nilly and raising my energy costs by government fiat is not something I think should be happening!

Reply to  Jim Gorman
April 17, 2024 7:03 am

I learned in business that trends are only accurate for the data you already have. Extrapolating is fraught with danger. Simply extending a trend into the future is assuming that nothing changes.

Which is exactly what introductory statistics texts tell you.

Ultimately, I think climate science is at the stage of alchemy. Making prognostications at this stage is more guessing than science. Spending my tax dollars willy-nilly and raising my energy costs by government fiat is not something I think should be happening!

Absolutely, and it is this point that all the air temperature trendologists studiously avoid at all costs.

Reply to  Jim Gorman
April 17, 2024 2:58 pm

Dear Jim,

Most people contributing to this site would agree there is no “global temperature” as such, and that (with a few prominent exceptions, including Judith Curry and Roger Pielke) there is no warming. Also, that the concept of a ‘global temperature’ is nonsense.

But then you go off the track into the GREAT UNKNOWN by being distracted by the GUM, which is Stats 1.01 in drag!

There cannot be, nor is there, any error propagation between independent observations. By definition they are independent values, determined from one day to the next, that describe changes in the weather, not the instrument.
 
By way of example, when measured by the nurse at your clinic, your blood pressure does not include error propagated from all the previous patients whose BP was measured by the same nurse using the same gear. You could not be taken seriously if you insisted that it was. Chances are the BP machine is regularly calibrated by accredited technicians who travel around, which is their job. Provided correct protocols were used by the nurse, the clinic would defer any legal challenge about your BP measurement to those technicians.
 
That is why in all sorts of fields, accreditation is such big business – the accreditors (or auditors) are contracted to carry the can.
 
Autocorrelation is not the same as error. It is the propensity of the property being observed to reflect some of yesterday’s signal. If it rained yesterday, there is a heightened chance it will be rainy/cloudy/humid/cool today. That is autocorrelation, not error, and it leads to a propagated signal. A seasonal cycle is a predictable propagated (autocorrelated) signal, which must be allowed for in statistical inferences about the data. Annual averaged data are rarely autocorrelated. For data that are autocorrelated, one way around the problem is to sub-sample at sufficiently wide intervals that samples are independent, and make inferences on those. (Slope (trend) is not an inference, it is a property.)
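
A small R illustration of that sub-sampling point, using a simulated AR(1) series as a stand-in for day-to-day weather persistence (the 0.7 coefficient and ten-year length are arbitrary choices, not real data):

# Simulated AR(1) series standing in for day-to-day persistence in the weather.
set.seed(3)
daily <- arima.sim(model = list(ar = 0.7), n = 3650)

acf(daily, lag.max = 10, plot = FALSE)$acf[2]    # lag-1 autocorrelation, ~0.7

# Sub-sample at wide enough intervals and the retained values are close to independent.
weekly <- as.numeric(daily)[seq(1, length(daily), by = 7)]
acf(weekly, lag.max = 10, plot = FALSE)$acf[2]   # ~0.7^7, i.e. near zero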
 
Regardless of the GUM, and irrespective of what the data are intended to be used for, you are fundamentally incorrect in claiming error propagates from one, independent measurement, to the next. You can say it does, insist that it does, yell about it, but you are wrong.
 
I undertook weather observations, and like the nurse in the clinic, observers follow protocols with the aim of reducing operator error and therefore maximising accuracy. However, the degree to which operator error contributes noise to observations cannot be known. Instrument error is a property of the instrument – thermometer, anemometer … etc. and can only be assessed by re-calibration, or by comparing between instruments, or a standard certified by an accredited lab or technician.
 
Error components are by and large factors smaller than the variation in the medium being observed and can mostly be disregarded. You can insist that error is a big deal, yell about it and get completely bogged down in the GUM, but you are wrong.
 
Instrument error, which is ½ the interval range, is known, and generally exceeds error contributed by observers. I do not have any calibration certificates but calibration error is points of decimals less than the observable instrument error, which is ½ the interval range. You can insist that such errors are important, and yell about it, but you are wrong.     
 
Systematic error by definition is a non-random change. If it is important, systematic error (an instrument reading high or low) is knowable. An instrument can be suspected of being biased, in which case it may be re-calibrated or replaced; it can be seen to be biased (the mercury or alcohol column may be broken); or the bias shows up in the data as a cumulative trend or a step-change in a property relative to the mean. It is important that you understand the difference between random and systematic error and their possible causes.
 
My final point is that scientists must have some idea of the sample size, and the number of samples, required to obtain a stable mean relating to the property being observed. Noisy data require more samples. It is a fundamental requirement of many research grant applications (particularly in medicine) that researchers are confident their sampling strategy addresses the problem without over-sampling. Under-sampling is a cause of failure; over-sampling vastly increases the expense and workload. It is a simple fact that a precise estimate requires a large sample size.
 
All the best,

Bill Johnston

Reply to  Bill Johnston
April 18, 2024 4:21 am

“But then you go off the track into the GREAT UNKNOWN by being distracted by the GUM, which is Stats 1.01 in drag!”

Assuming all measurement uncertainty is random, Gaussian, and cancels is an absolute misunderstanding of basic statistics as well as of the GUM.

It doesn’t matter if you don’t realize you are applying that meme. You *are* applying it.

“There cannot be, nor is there, any error propagation between independent observations.”

Pure and utter malarky. If I measure two boards with two different devices the measurements are independent observations. Yet, when I put the measurements of the boards together in a data set and calculate their average, that average is conditioned by the measurement uncertainties of each. The measurement uncertainty is either a direct addition of the uncertainties (worst case) or the quadrature addition of the uncertainties (best case).

But those uncertainties certainly do not totally cancel. And they can’t just be ignored – except by climate science.
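
In symbols, for two independent measurements with uncertainties u1 and u2, a minimal sketch of the two cases being contrasted (the board lengths and uncertainties are invented):

# Two boards measured with two different devices (hypothetical values, metres).
L1 <- 2.44; u1 <- 0.02
L2 <- 2.41; u2 <- 0.05

u_direct     <- u1 + u2             # worst case: direct addition of the uncertainties
u_quadrature <- sqrt(u1^2 + u2^2)   # best case: addition in quadrature (independent uncertainties)

c(sum = L1 + L2, u_worst = u_direct, u_best = u_quadrature)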

“By way of example, when measured by the nurse at your clinic, your blood pressure does not include error propagated from all the previous patients whose BP was measured by the same nurse using the same gear. You could not be taken seriously if you insisted that it was.”

No, but your blood pressure *IS* conditioned by the measurement uncertainty of that device. If the doctor comes in after the nurse and retakes your blood pressure, then the two measurements, added together and averaged, will have their average conditioned by the measurement error of each measurement. That measurement uncertainty will have multiple elements, including things like your arm position, whether you moved after the nurse took the measurement, whether their hearing is different, whether the device is held in the same way, etc.

To actually make your situation apply to an average temperature suppose you are trying to find the average blood pressure of all the patients that day. That average *will* be conditioned by the measurement uncertainty of the device used — just like the average temperature should be conditioned by the measurement uncertainty of the measuring device.

You are still stuck on the old meme of “true value +/- error”. UNCERTAINTY IS NOT ERROR. It must be treated differently.

Reply to  Tim Gorman
April 18, 2024 4:42 am

No, but your blood pressure *IS* conditioned by the measurement uncertainty of that device.

Blood pressure data is of course a favorite of stats texts.

You are still stuck on the old meme of “true value +/- error”. UNCERTAINTY IS NOT ERROR. It must be treated differently.

He won’t understand, and will refuse to even try to understand.

Reply to  Bill Johnston
April 18, 2024 4:33 am

“Regardless of the GUM, and irrespective of what the data are intended to be used for, you are fundamentally incorrect in claiming error propagates from one, independent measurement, to the next. You can say it does, insist that it does, yell about it, but you are wrong.”

You are arguing something no one is claiming; it’s a red herring. When you COMBINE those different measurements of independent things in order to find an average, then the measurement uncertainty of each *does* condition that average.

Systematic bias *is* a conditioning of each independent observation of different things by the same device. When temps from different devices with different systematic bias are combined the uncertainty of their average is conditioned by the measurement uncertainty of each.

It doesn’t matter what average you are trying to find. Even the daily mid-range value is conditioned by the measurement uncertainty of the device doing the measuring. When combined with measurements from other devices the measurement uncertainty of the total is some kind of a sum of the measurement uncertainty of each component.

Face it, you are trying to justify the meme of “all measurement uncertainty is random, Gaussian, and cancels” so you can ignore the measurement uncertainty propagation in everything you do.



Reply to  Tim Gorman
April 18, 2024 4:48 am

He has zero comprehension of subjects like instrumentation and calibration as well.

I recall him claiming that “measurement uncertainty” was solely some calculation of the etchings on an LIG thermometer.

Then he climbed onto this trick pony of saying the GUM tells you to find the standard deviation of a single number.

Reply to  karlomonte
April 19, 2024 3:00 pm

What I actually said, Dumbo, is that the measurement uncertainty of an LIG thermometer is 1/2 the interval range.

I have also consistently said that there is no such thing as an SD for a single number, and I challenged you and your good friend Tim, who insisted there was, to give me the SD of the number 47, or failing that, 135. Instead of trying to remember complex things, you ISO-thingy manager you, perhaps you should rest and let the past catch you up.

The sun has also finally risen and thrown some light into the dark recesses inside Tim’s head, so he can finally see that too.

b.

Reply to  Bill Johnston
April 19, 2024 3:53 pm

NATA requires testing laboratories to use ISO17025 and the GUM, which you call “woke”.

Uncertainty is not error.

Resolution uncertainty is not total uncertainty.

Rant away all you want, you are still lost at sea.

Reply to  Bill Johnston
April 19, 2024 3:30 am

Instrument error, which is ½ the interval range, is known, and generally exceeds error contributed by observers.

Bullshit, this is only the resolution limit. “generally exceeds” is hand waving.

And you still have zero comprehension that uncertainty is not error.

I do not have any calibration certificates but calibration error is points of decimals less than the observable instrument error, which is ½ the interval range.

Not at all surprised you have zero experience with real instrumentation and calibration. Making a blanket statement like this is more hand-waving.

Real calibration certificates report uncertainty according to the GUM, not “error”.

Hope you choke on this.

You can insist that such errors are important, and yell about it, but you are wrong. 

Why are you yelling then? And you just admitted you know nothing about them. Another fail.

Reply to  Bill Johnston
April 19, 2024 2:06 pm

Regardless of the GUM, and irrespective of what the data are intended to be used for, you are fundamentally incorrect in claiming error propagates from one, independent measurement, to the next. You can say it does, insist that it does, yell about it, but you are wrong.

More crap.

Any time you create a random variable made up of independent measurements, each measurement has uncertainty. Those uncertainties propagate into the mean.

Read this site,
https://www.isobudgets.com/7-steps-to-calculate-measurement-uncertainty/

specifically this section and the table in the section

Measurement Functions without Equations

Note particularly that the items

  • Resolution
  • Repeatability
  • Reproducibility

are all separately derived and are all additive as standard uncertainties.

The simple example of measuring a table top’s length and width is a perfect counterexample to your statement. They are independent measurements, each with its own uncertainty. Yet you would claim their uncertainties don’t propagate into either the perimeter or the area calculation because they are independent measurements?
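
A worked version of that table-top example, using the standard quadrature rules for sums and products of independent measurements (the dimensions and uncertainties below are invented for illustration):

# Propagate length and width uncertainties into the perimeter and the area.
L <- 1.520; uL <- 0.002    # metres, with assumed +/- uncertainties
W <- 0.760; uW <- 0.002

P  <- 2 * (L + W)
uP <- 2 * sqrt(uL^2 + uW^2)              # perimeter: absolute uncertainties add in quadrature

A  <- L * W
uA <- A * sqrt((uL / L)^2 + (uW / W)^2)  # area: relative uncertainties add in quadrature

c(perimeter = P, u_perimeter = uP, area = A, u_area = uA)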

Reply to  Jim Gorman
April 19, 2024 3:57 pm

He won’t read it.

He is just like the climatologists, can’t understand and refuses to understand that error is not uncertainty.

Check the reference he posted below, it is straight out of the GUM and ISO17025!

https://wattsupwiththat.com/2024/04/12/hidden-behind-climate-policies-data-from-nonexistent-temperature-stations/#comment-3898854

“The whole thing is nicely explained here:”

https://www.esscolab.com/uploads/files/measurement-guide.pdf

He doesn’t even read the links he puts up!

Reply to  Bill Johnston
April 17, 2024 6:59 am

Why does NATA, the national laboratory accreditation authority in Australia, require labs to use and report results according to the GUM?

Reply to  karlomonte
April 17, 2024 4:17 pm

Fingers not working Karlo?

Ask them yourself.

b

Reply to  Bill Johnston
April 17, 2024 6:41 pm

YOU are the one telling me that “NATA gear” is the solution to any and all measurement problems.

And it turns out that YOUR cherished NATA uses and requires the “woke-GUM”.

Clown, no surprise you ran away, again.

Reply to  karlomonte
April 17, 2024 8:17 pm

I did not say anything of the kind. However, if you specifically want an answer to your question, ask them yourself.

b

Reply to  Bill Johnston
April 17, 2024 8:56 pm

Liar. Do you want me to find the receipts?

I had no idea what NATA was until YOU started ranting about it.

Either you are a biden-potatohead or you are a coward.

Reply to  karlomonte
April 17, 2024 10:31 pm

Ha ha ha,

Fingers don’t work.

What happened to your homework?

Too hard?

The dog ate it?

Could not do it?

More gas than engine!!

b

Reply to  Bill Johnston
April 18, 2024 4:38 am

I vote coward. Just like Biden, when boxed in you lie, and then cannot keep the new lie in sync with old lies.

Reply to  Bill Johnston
April 16, 2024 6:14 am

Cook’s distance works on one set of data to analyze a data point’s influence on the single trend. It does not analyze the effect of uncertain measurement data, which results in different trend lines.

From: https://online.stat.psu.edu/stat462/node/173/

The basic idea behind each of these measures is the same, namely to delete the observations one at a time, each time refitting the regression model on the remaining n–1 observations. Then, we compare the results using all n observations to the results with the ith observation deleted to see how much influence the observation has on the analysis. Analyzed as such, we are able to assess the potential impact each data point has on the regression analysis.

This is not the same as examining each trend as data values are changed in varying combinations to reflect the uncertainty in all of the data points.
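
A short R sketch of the delete-one-at-a-time procedure quoted above, compared with the built-in cooks.distance() (simulated data, not station data):

# Leave-one-out refits versus Cook's distance, on simulated data.
set.seed(11)
x <- 1:30
y <- 0.05 * x + rnorm(30, sd = 0.6)

fit   <- lm(y ~ x)
slope <- coef(fit)[2]

# Refit with the i-th observation removed and record the change in slope.
dslope <- sapply(seq_along(x), function(i) coef(lm(y[-i] ~ x[-i]))[2] - slope)

cbind(delta_slope = round(dslope, 4),
      cooks_d     = round(cooks.distance(fit), 4))[1:5, ]   # first few rows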

Reply to  Jim Gorman
April 16, 2024 6:55 am

Bill likes to rant about “NATA gear” as if this is the solution to any and all measurement problems. I figured I should look behind the curtain a bit.

Wikipedia says NATA is: “… the recognised national accreditation authority for analytical laboratories and testing service providers in Australia.”.

https://nata.com.au/

On their website, a menu item under the “Accreditation” tab is:

“Testing & Calibration”
“ISO 17025”

Oh my, look at this. Apparently Bill’s ISO17025 training has lapsed. From the NATA page:

“ISO/IEC 17025”
“Testing & Calibration”

“Accreditation to ISO/IEC 17025 plays an important role in supporting the validity and reliability of results from testing and calibration laboratories across many industry sectors.”

“ISO/IEC 17025 facilitates international cooperation by establishing wider acceptance of laboratory test results between countries and organisations. This not only provides global confidence in the work of an organisation but also promotes national and international trade through the acceptance of test and calibration data between countries, without the need for additional testing.”

“What is ISO/IEC 17025?”
“In many countries, ISO/IEC 17025 is the standard for which most laboratories must hold accreditation to be deemed technically competent. In many cases, suppliers and regulatory authorities will not accept test or calibration results from a laboratory that is not accredited.”

Under the FAQs:

Key technical elements covered by the Standard include

Validity and appropriateness of methods
Technical competency of staff
Traceability of measurements to national and international standards
Suitability, calibration and maintenance of equipment
Handling of test / calibration items
Quality control and assurance processes
Reporting of results

Of course, there is a link for the standard itself (but requires Swiss francs to read):

https://www.iso.org/ISO-IEC-17025-testing-and-calibration-laboratories.html

Here is another link to a training course for ISO17025 that deals specifically with the measurement uncertainty requirements:

https://www.isobudgets.com/measurement-uncertainty-training/

“DISCOVER THE BEST WAY TO ESTIMATE MEASUREMENT UNCERTAINTY AND MEET ISO/IEC 17025:2017 REQUIREMENTS”

In this link:

Next, you will learn how to use the GUM method for estimating uncertainty:

How to convert your uncertainty components to a standard uncertainty,
How to combine your uncertainty components using the GUM method,
How to pick the right expansion factor (k-factor),
How to calculate expanded uncertainty, and
How to evaluate and validate your results.

In other words, any lab trying to get accredited under NATA must meet the requirements of the ISO standard, which in turn requires use of the GUM for reporting results with measurement uncertainty.

“Uncertainty cranks” — Nick Stokes

Indeed, Stokes, you are out of your pond, just like Bill the Ranter:

“The World Meteorological Organisation publish standards relating to homogenisation, instruments, layout of met-enclosures …. all there, searchable on their web site. Don’t go near the GUM tho…”
— Bill J

“Aside from that, all the goings-on at the GUM are exactly the same calculations as you probably ignored in Stats 1.01, but re-branded by GUM into woke-speak.”
— Bill J

“Type B uncertainty is determined at a seance involving approved scientists, data-holders, policy-makers, WWF and indigenous elders.”
— Bill J

And used by the Australian national accreditation authority.

Oops.

Reply to  karlomonte
April 16, 2024 10:53 am

https://www.isobudgets.com/measurement-uncertainty-training/

I’ve had this in my bibliography for quite some time. I have not paid for the course, but the examples of uncertainty budgets are enlightening. The fact that repeatability, reproducibility, and resolution are shown should trigger some folks to look deeper into what uncertainty is.

I also like this site and its table of things to look for under each category. Climate science has a lot to learn about proper measurements. Moving on from twice-a-day measurements would be a good first step.

https://www.isobudgets.com/7-steps-to-calculate-measurement-uncertainty/

Reply to  Jim Gorman
April 16, 2024 12:01 pm

I just glanced at it, but yeah, it looks like there is a lot of good information. It is no easy task to obtain a lab accreditation to ISO17025 through an agency like NATA (or A2LA in America); there are a lot of steps, including implementation of a quality system (ISO9000). You have to understand every detail of whatever measurements are involved, and a formal uncertainty analysis is required that adheres to the GUM. I have written UAs and they can run to many pages. The agency and its auditor have to review and approve the analysis before the lab can issue any certified measurements. Every instrument in the lab must have ISO17025-traceable calibrations, and the quality system must define recalibration intervals.

It is painfully obvious that Bill has zero experience with any of this.

Reply to  karlomonte
April 16, 2024 1:09 pm

Some things drilled into me were using the same device so you could use the same corrections. Sometimes you had to come in during off hours to get the one you needed. It’s one of the repeatability conditions. Sometimes you had to rebuild a circuit on a breadboard and you couldn’t get the same measurement. Good training to show what production lines do to your design.

Reply to  old cocky
April 16, 2024 5:56 am

The point was not how to do a comprehensive sensitivity analysis. The point was to show that one must deal with the uncertainty in the data points.

One can assume that the stated values are “true values” and therefore 100% accurate or you can assume the data points are a “center” value of an uncertainty interval.

If one assumes that data is a “stated value ±uncertainty” then one must perform a sensitivity analysis to determine a range of trends that could exist.

old cocky
Reply to  Jim Gorman
April 16, 2024 2:17 pm

The point was to show that one must deal with the uncertainty in the data points.

That’s necessary, but not sufficient.

If one assumes that data is a “stated value ±uncertainty” then one must perform a sensitivity analysis to determine a range of trends that could exist.

All you really need to determine is the envelope of possible slopes (and intercepts).

That can add some spice to the conversation when the measurement uncertainties are of the same order of magnitude as the measurements.

Reply to  old cocky
April 16, 2024 5:08 pm

That can add some spice to the conversation when the measurement uncertainties are of the same order of magnitude as the measurements.

Evaluation first, using statistical truths that we’ve successfully applied for at least 2 different centuries. Conversation later, without fact-free bloviating about “Well, I see what you did, but it doesn’t work in the real world.”

Reply to  bigoilbob
April 17, 2024 8:03 am

Again you mention statistical truths before anything else. That is a mathematician’s viewpoint.

You might do well to forget statistics and deal with measurements first. Things like:

What resolution does my device have?
What degree of information do I have from this measurement (how many decimal places)?
Is it possible to extract more information than measured?
What conditions was my device calibrated at?
What correction tables do I have.
Does my measurement meet repeatable conditions?
Does my measurement meet reproducibility conditions?

These are part of defining a measurand and a measurement of that measurand.

What is next? How about an uncertainty budget?

  • Repeatability (Level 1) – uncertainty value
  • Reproducibility (Level 2) – uncertainty value
  • Resolution – uncertainty value
  • Calibration – uncertainty value
  • Environmental – uncertainty value

Now we can start some analysis.

  • Does an average give more information digits than what was measured?
  • Does resolution uncertainty limit the number of digits in the average?
  • What distribution does repeated measurements have?
  • What distribution does reproducible measurements have?
  • What is the appropriate dispersion of values that are attributable to the measurand?

Not until you get deep into the analysis does statistics even enter into the work.

When you start with statistics, the first thing that enters into the analysis is that the numbers are exact stated values, just like in school. It is not measurement uncertainty that is involved, but sampling uncertainty.

Totally backwards.
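
A minimal sketch of combining a budget like the one listed above in the root-sum-square (GUM) fashion, with a k = 2 coverage factor; the component values are placeholders, not real instrument data:

# Combine an uncertainty budget: each component as a standard uncertainty,
# root-sum-square for the combined value, coverage factor k = 2 for ~95 %.
budget <- c(repeatability   = 0.15,
            reproducibility = 0.20,
            resolution      = 0.5 / sqrt(12),   # 0.5 C reading interval, rectangular distribution
            calibration     = 0.10,
            environmental   = 0.05)

u_c  <- sqrt(sum(budget^2))   # combined standard uncertainty
U_95 <- 2 * u_c               # expanded uncertainty, k = 2

c(u_combined = u_c, U_expanded = U_95)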

Reply to  Jim Gorman
April 17, 2024 8:13 am

Does an average give more information digits than what was measured?

It can.

Does resolution uncertainty limit the number of digits in the average?

It can

What distribution does repeated measurements have?

Given enough different measuring techniques, as with GAT evaluations over physically/statistically significant time periods, a statistically large number of them.

What distribution does reproducible measurements have?

Reproducible? That’s definitional. But in general, as above.

What is the appropriate dispersion of values that are attributable to the measurand?

Small enough to supply the engineered answer to the evaluation.

Not until you get deep into the analysis does statistics even enter into the work.

Yes, statistical evaluations require the referenced estimates. Along with the sensitivity evaluations that you gloss over. But otherwise, nope. Do the statistical analysis properly, which of course means using best estimates for the parameters in the first part of your comment, and those questions will be answered. As is done regularly in climate science, per the beaucoup papers available.

Reply to  bigoilbob
April 17, 2024 8:21 am

Does an average give more information digits than what was measured?

“It can.”

Try again.

Rest is word salad.

Reply to  karlomonte
April 17, 2024 8:40 am

Try again.

  1. Pick 10 random numbers between 1 and 10.
  2. Average them
  3. Do it over and over 1000 times.
  4. Find the standard deviation of the averages
  5. Repeat with 1000 similarly sampled random numbers.
  6. Find the ratio of the 2 resulting standard deviations.

Well ba gully. I gut me anudder significant figure…

There are none so blind…
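
The numbered recipe above, written out in R (note these are exact pseudo-random numbers with no measurement uncertainty attached; the code only shows how the spread of the averages behaves as the sample size grows):

# Spread of the sample averages for n = 10 versus n = 1000 uniform draws.
set.seed(123)
sd10   <- sd(replicate(1000, mean(runif(10,   min = 1, max = 10))))
sd1000 <- sd(replicate(1000, mean(runif(1000, min = 1, max = 10))))

c(sd_n10 = sd10, sd_n1000 = sd1000, ratio = sd10 / sd1000)   # ratio ~ sqrt(100) = 10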

Reply to  bigoilbob
April 17, 2024 9:17 am

Random integers are not measurements … try again.

Reply to  karlomonte
April 17, 2024 9:34 am

Random integers are not measurements … try again.”

Agree. It is the opposite that can be true: for an individual measurement process, and increasingly as the number of those measurements increases. No, it’s not me, brothers and sisters. It’s the CLT wurkin’ thru me…

More and more I see that this might be the initial vapor lock that seeds all of the others of your ilk. This truth propagates throughout virtually all modern statistical evaluations. Since you don’t accept it, you and yours are doomed to lurk under the mossy rock here and elsewhere.

Reply to  bigoilbob
April 17, 2024 10:35 am

More and more I see that this might be the initial vapor lock that seeds all of the others of your ilk.

ILK ALERT!

Your random numbers are exact, they have no uncertainty.

Unlike actual real measurements.

But climate science likes to make up Fake Data, so it is no wonder at all you don’t understand the difference.

And you still can’t understand that for an air temperature measurement, N is always and exactly equal to one. There is no opportunity to measure it more than once. The CLT is quite irrelevant.

Since you don’t accept it, you and yours are doomed to lurk under the mossy rock here and elsewhere.

Great rant, blob.

Reply to  karlomonte
April 17, 2024 10:46 am

Your random numbers are exact, they have no uncertainty.

Sure they do. They are either + or -1 to the expected value. At random. I could have not rounded up or down to the nearest integer, and the result would have been the same. I could have used any combination of discrete and continuous functions, and as the number of them increased, the standard deviation of possible averages of them would decrease.

It’s all been done. Long, long, ago….

Reply to  bigoilbob
April 17, 2024 11:20 am

Sure they do. They are either + or -1 to the expected value.

Well this is a lie:

Pick 10 random numbers between 1 and 10.

And you of course ignored the point about how all this is irrelevant to real air temperature measurements.

And who are my “ilk”, blob? I want a LITS.

Reply to  bigoilbob
April 17, 2024 12:59 pm

Random integers are not measurements … try again.”

Computer generated pseudo-random numbers are done using distributions.

r
runif – uniform
rnorm – normal
rbinom – binomial
rexp – exponential

C
rand – uniform

Excel
rand – uniform
norm.inv

Funny how that works. Do you think the default uniform distribution generated a measurement distribution that can be assumed to be Gaussian?

Reply to  bigoilbob
April 17, 2024 4:22 pm

“For an individual measurement process”

The GAT is not formed from an individual measurement process but from multiple different measurements taken of different measurands by different devices.

You may as well measure all the scrap 2″x4″ boards at a new house construction. It’s highly unlikely you’ll get a Gaussian distribution. You’ll get long ones from the roof trusses and short ones from the stud walls. A multi-modal distribution. It’s no different for temperatures. That’s why it is so important to know the variance, skewness, and kurtosis of every temperature data set, from daily measurements to annual averages. It’s why climate science thinking an average value is all that is needed to describe a distribution is so idiotic.

Reply to  Tim Gorman
April 18, 2024 6:42 am

The GAT is not formed from an individual measurement process but from multiple different measurements taken of different measurands by different devices.”

And as such, many, varied, either positively correlated or not, sources of systemic error. We have known for centuries that these tend towards normality as the number of them increase. So, if they can not be individually identified and corrected, they can be functionally treated as another source of random error. And are.

Your heads I win tails you lose argument is that these Big Foot sources of systemic error are both undetectable and so large as to have a significant chance of not only lurking in the dark to taint individual GAT estimates, but to magically bend decades long trends of them qualitatively. AGAIN, back to the chimps typing the encyclopedia. You’re right – sooner or later, they’ll do it…

Reply to  bigoilbob
April 18, 2024 7:24 am

Another liar, and proponent of Fake Data fraud.

Go blob, go.

Reply to  bigoilbob
April 20, 2024 5:57 am

We have known for centuries that these tend towards normality as the number of them increase.”

You are confusing how precisely you can calculate the mean with the accuracy of the mean.

The standard deviation of the sample means gets smaller as you have larger sized samples IF AND ONLY IF THERE IS NO MEASUREMENT UNCERTAINTY IN THE SAMPLE DATA!

“they can be functionally treated as another source of random error. And are.”

NO, THEY CAN’T!

I’ve given the proof of this several times on here. Apparently you continue to miss it.

Sample1: s11 +/- u11, s12 +/- u12, …. s1n +/- u1n

s1_mean = Σs1(i) / n +/- sqrt[ u11^2 + u12^2 + … + u1n^2]

Sample2: s21 +/- u21, s22 +/- u22, …, s2n +/- u2n

s2_mean = Σs2(i) / n +/- sqrt[ u21^2 + u22^2 + … + u2n^2]

Carry that out for as many samples as you like.

What you wind up with is s1_mean +/- u1_mean, s2_mean +/- u2_mean, etc.

You can calculate the standard deviation of the sample means and as the size of the samples get larger, the standard deviation of the sample means will get smaller, meaning you have more precisely located the mean of the population. In the end, if your sample size *is* the population you get the population average with a standard deviation of zero.

BUT, you will still be left with the propagation of the uncertainties to contend with.

You, and climate science, just throw away the measurement uncertainty by assuming that “all measurement uncertainty is random, Gaussian, and cancels”. WITHOUT EVER PROVING IT!

u1_mean, u2_mean, …, un_mean are probably NOT A RANDOM DISTRIBUTION, especially if they contain systematic uncertainty because of microclimate differences. Even if they *are* random, they are quite likely *NOT* Gaussian. And they must be both in order to assume they cancel.

“Your heads I win tails you lose argument is that these Big Foot sources of systemic error are both undetectable”

They ARE undetectable. All the metrology experts state this specifically.

” and so large as to have a significant chance of not only lurking in the dark to taint individual GAT estimates, but to magically bend decades long trends of them qualitatively.”

If they *are* undetectable then how do you *know* they are not significant? Significance is measured by the bias relative to the differences you are trying to detect. When trying to find differences in the hundredths digit of the temperatures, a systematic bias in the thousandths digit becomes significant.

I’m pretty confident that I can unequivocally state that there are no field temperature measuring devices today whose accuracy is in the thousandths of a degree. Prove me wrong if you can!

Reply to  Tim Gorman
April 21, 2024 4:21 am


Reply to  bigoilbob
April 17, 2024 4:13 pm

You really do not understand what you are doing when it comes to temperature measurements, do you?

Picking random numbers from the universe of numbers is no different from picking samples from a large data set. The Central Limit Theorem tells us that the means of such a set of samples will always tend to Gaussian.

Measurements have no such CLT controlling their distribution. They can be skewed and the standard deviation is meaningless. When measurements from different stations are crammed into the same data set, e.g. NH temps crammed together with SH temps, you can even have a multi-modal distribution. This can happen using either absolute temps or anomalies due to different variances of the temperatures.

Reply to  Tim Gorman
April 18, 2024 6:51 am

When measurements from different stations are crammed into the same data set, e.g. NH temps crammed together with SH temps, you can even have a multi-modal distribution

The CLT still applies. Read and heed.

https://bootcamp.umass.edu/blog/quality-management/central-limit-theorem#:~:text=For%20example%2C%20the%20distributions%20can,population%20must%20include%20finite%20variance.

Reply to  bigoilbob
April 20, 2024 6:01 am

The CLT still applies. Read and heed.”

The CLT only applies to how precisely you can locate the population mean when using the stated values of a set of measurements. It tells you absolutely zero about the accuracy of the mean you find.

If your data is wildly inaccurate, then the mean you find will also be wildly inaccurate no matter how precisely you locate the mean.

The CLT simply doesn’t apply to the accuracy of the mean UNLESS you make the common climate science assumption that “all measurement uncertainty is random, Gaussian, and cancels”. Which climate science never actually proves – and neither do you!

Reply to  old cocky
April 17, 2024 6:54 am

That is what a sensitivity analysis does ultimately.

The real issue comes about when extending the trends into the future as climate science does. The trends will continue diverging making wider and wider uncertainty intervals. That is what I learned in business.

Climate science, by proselytizing that there will be a given temperature in 2100, never explains the uncertainty in that number. Maybe we wouldn’t be rushing headlong into destroying economies if it did.

old cocky
Reply to  Jim Gorman
April 17, 2024 1:46 pm

The real issue comes about when extending the trends into the future

Thou shalt not extrapolate.
Thou shalt not extrapolate.
Thou shalt not extrapolate.
Thou shalt not extrapolate.
Thou shalt not extrapolate.

Reply to  old cocky
April 17, 2024 6:25 pm

Exactly.

Reply to  Bill Johnston
April 15, 2024 4:16 pm

There is also no X-data, so whatever trend you calculated is nonsense.

You are driving a dying car.

Of course there is x-data. It is a sequence of numbers just like dates on a calendar. The fact that I started at zero and counted up to 7 is no different than going from 2010 to 2017.

We are not dealing with a system that has a functional relationship at all. The x-axis values DO NOT determine the y-values. The readings are sequential in TIME, i.e. a time series, not a linear relationship between two variables.

You forget that in order to calculate trend, you need X data. You should also check your residuals.

Are you attempting to show that a time series has some relationship to the time periods? Good luck with that.

Time Series Analysis Introduction – Statistics By Jim

A time series is a set of measurements that occur at regular time intervals.

Regular time intervals, understand. The time intervals can be indicated as a sequence number, a clock time, a calendar date, etc. They just need to be regular.

The regression info I showed you has a line equation. You’ll find that if you put the sequence number in for “x”, you will get an output of “y”.

You are being obtuse to not recognize that stated value uncertainty does affect the range of possible trend lines that you must examine.

Reply to  Jim Gorman
April 15, 2024 4:42 pm

I just replied to Old Cocky about misreading the numbers.

I did not say “the x-axis values DO NOT determine the y-values”. What I said is that the x-values determine trend.

In answer to your other commentary, you can also have missing data, outlier data, data juxtaposed from one instrument to another, or one site to another, filled-data ….

I have been looking at sea surface T-data for sites along the Great Barrier Reef (40 to 144 samples/day over 25+ years). Excel will only load 1/4 of some datasets. There are slabs of data missing – sometimes 6-months at a time.

How would that go with flat-earth time-series analysis that requires sequential data?

Have you ever done this stuff hands-on, or are you just offering comment based on no experience at all?

Cheers,

Bill

Reply to  Bill Johnston
April 15, 2024 5:36 pm

So no fishing then; why am I not surprised that you are back here ranting and raving, again.

Reply to  Bill Johnston
April 15, 2024 5:51 am

It’s not a “pet-theory”. It’s actual metrology. It’s what we used to do in Long Range Planning at the major telephone company I worked for when we would analyze a capital expenditure project in order to rank its attractiveness compared to other plans.

We would gather all the factors we could think of, from labor costs to depreciation schedules to taxes to demographics to maintenance costs to interest on bonds/loans to etc.

We would then vary each individually and then in combinations. All because each factor had a “measurement uncertainty”. Labor costs were always asymmetric (always up, never down), while interest costs could be guaranteed to go both up and down in the future. Much like temperature measurement stations with calibration drift and seasonal variation!

“Systematic bias is NOT measurement uncertainty.”

Of course it is. It is uncertain because it is UNKNOWN for field measuring devices. Part of the GREAT UNKNOWN.

Taylor: “Even when we can be sure we are measuring the same quantity each time, repeated measurements do not always reveal uncertainties. For example suppose the clock used for the timings in (1.3) was running consistently 5% fast. Then all timings made with it will be 5% too long, and no amount of repeating (with the same clock) will reveal this deficiency. Errors of this sort, which affect all measurements in the same way, are called systematic errors and can be hard to detect, as discussed in Chapter 4. In this example, the remedy is to check the clock against a more reliable one.”

Let me disabuse the readers of the misunderstanding of the term “error” when used by Taylor and other experts.

Taylor: “In science, the word error does not carry the usual connotations of the terms mistake or blunder”. Error in a scientific measurement means the inevitable uncertainty that attends all measurements. As such, errors are not mistakes, you cannot eliminate them by being very careful. The best you can hope for is to ensure that errors are as small as reasonably possible and to have a reliable estimate of how large they are. Most textbooks introduce additional definitions of error, and these are discussed later. For now, error is used exclusively in the sense of uncertainty, and the two words are used interchangeably.”

Taylor’s textbook was first published clear back in 1982. Even then science was moving away from the concept of “true value +/- error” to “stated value +/- uncertainty”. Clear back in the late 60’s/early 70’s we learned about uncertainty in lab settings, from chemistry to EE labs. Did each pipette in chem lab have the same size drop when doing titrations? What was the uncertainty of the voltmeter (typically a percentage of full scale)? You could never say *exactly* what the pH of a solution was and you could never say *exactly* what a voltage value was no matter how carefully you measured everything.

Nor could you take the average of all the readings from eight different lab stations and say that was the “true value”. We got that “beaten” out of us in my first EE lab.

YOU should have learned the same thing in your university labs. If you didn’t then either you didn’t listen or the teachers short-changed your education.



Reply to  Tim Gorman
April 15, 2024 6:55 am

My introduction to the subject was via the ASTM precision and bias system, which is quite different (and quite old). Years ago ASTM (the acronym used to stand for Amer. Society for Testing and Materials) decreed that a test method, a standard that produces a numeric result, must include a statement on the precision and bias of the test.

ASTM had recognized that bias is a problem when a lab performs a test on a material, but had no way of quantifying the magnitude.

So how is this supposed to be done? With an intercomparison of the method among a group of test labs, where they all run the test on the same materials, using a minimum of 6 different samples and 6 different labs. The results are processed according to a standard written by the Committee on Statistics, which declares that “repeatability” and “reproducibility” of the test method are A and B, in percent. This is supposed to give whoever reads the standard an idea of how good it is.

If a standard can’t meet the minimum for the intercomparison, this is a problem.

All of this is completely different from uncertainty, of course, and one of the problems is that the test samples are almost treated as true values. As time went by and more committees were formed that have nothing to do with testing steel, copper, aluminum, plastics, paint, etc, the precision and bias requirement became just bureaucracy.

Reply to  karlomonte
April 16, 2024 12:44 am

if it is any of your business Karlo, we were set to go fishing but instead my mate and I welded up a bit of gear for my shed, then had lunch. Meanwhile my computer backed itself up and you squawked away like a 3 year-old up the back of the class vying for attention, like you usually do.

Quality standards in Australia are set by NATA and they conduct similar accreditation and certification between labs as you describe. But that has nothing to do with Stats and I frankly don’t think you or Jim have much of a handle on data or their analysis.

Here is some sea surface temperature data for the automatic weather station at Davies Reef. How do you approach analysis from a GUM perspective? I know you won’t/can’t – you are all gas and no engine.

DeciYear      Max      N            Pop.sd
1991.797   25.84   43          0.167121
1991.799   25.48   48          0.156076
1991.802   25.5      48          0.172705
1991.805   25.55   48          0.122627
1991.808   25.83   48          0.158245
1991.81      25.77   48          0.119663
1991.813   25.84   48          0.14704
1991.816   25.8      48          0.097436
1991.819   25.83   46          0.136585
1991.821   25.8      48          0.074019
1991.824   26.09   48          0.110535
1991.827   26.09   48          0.101693
1991.83      26.19   48          0.145243
1991.832   26.25   48          0.129023
1991.835   26.42   48          0.177724
1991.838   26.45   48          0.140649
1991.841   26.35   48          0.162239
1991.843   26.68   48          0.171428
1991.846   26.7      48          0.193369
1991.849   26.72   48          0.200389
1991.851   26.5      48          0.171864
1991.854   26.55   48          0.152752
1991.857   26.4      48          0.091167
1991.86      26.53   48          0.149307
1991.862   26.44   48          0.057173
1991.865   26.55   48          0.11031
1991.868   26.79   48          0.174742
1991.871   26.91   48          0.216339
1991.873   26.89   48          0.0785

All the best,

Bill

Reply to  Bill Johnston
April 16, 2024 5:37 am

/rant skipped, unread/

bill johnson the religious zealot continues his usual practice of stepping hard on others in order to elevate himself and his huge ego.

I still like this one the best:

“Troll.” — Jennifer Marohasy, in reply to bill the chump

Reply to  Bill Johnston
April 18, 2024 8:48 am

The GUM tells you how to handle measurement uncertainty.

As you have pointed out, a single measurement has no standard deviation. Yet you are posting a single measurement, Tmax, with a standard deviation.

Assuming Tmax is from multiple measurements on a day/days, WHERE IS YOUR UNCERTAINTY INTERVAL FOR EACH PIECE OF DATA? WHERE ARE THE KURTOSIS AND SKEWNESS VALUES?

Are you suggesting the standard deviation is the uncertainty of Tmax for each data point?

Since you give no measurement uncertainty for this data I would first assume a typical measurement uncertainty of +/- 0.5C and round all the measurements to the tenths digit.

This gives us:
25.8
25.5
25.5
25.6
25.8
25.8
25.8
25.8
25.8
26.1
26.1
26.2
26.3
26.4
26.5
26.4
26.7
26.7
26.7
26.5
26.6
26.4
26.5
26.4
26.6
26.8
26.9
26.9

Then assume this is experimental data and find the variance.

This gives us a mean of 26.3 and a variance of 0.18.

SD = 0.4

Then expand this by some factor, start with 2, in order to make up for not knowing the actual measurement uncertainty and you get

26.3 +/- 0.8 or a relative uncertainty of 3%

All of this is based on the GUM.

It shows that you can’t differentiate between values in the hundredths digit. With a measurement uncertainty of 0.8 it’s questionable even trying to identify differences in the tenths digit. Your mean has an uncertainty interval of 25.5 to 27.1.

This distribution is left skewed and the median and mean are not equal. That means the distribution is not symmetrical and you can’t assume that measurement uncertainty is random, Gaussian, and cancels.

This is just a quick, basic analysis of the data. I would go back and find the actual measurement uncertainty of each data point and propagate that forward onto the average in order to get a better understanding of the measurement uncertainty.
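
For readers who want to check the arithmetic, here is a minimal Python sketch of the steps described in this comment, using the rounded list exactly as given above. It assumes the population (Excel VARP/STDEVP) form of the variance, which reproduces the quoted 0.18, and follows the comment in rounding the SD to 0.4 before applying the coverage factor of 2.

```python
import statistics

# Rounded Tmax values exactly as listed in the comment above.
rounded_tmax = [
    25.8, 25.5, 25.5, 25.6, 25.8, 25.8, 25.8, 25.8, 25.8, 26.1,
    26.1, 26.2, 26.3, 26.4, 26.5, 26.4, 26.7, 26.7, 26.7, 26.5,
    26.6, 26.4, 26.5, 26.4, 26.6, 26.8, 26.9, 26.9,
]

mean = statistics.mean(rounded_tmax)
var = statistics.pvariance(rounded_tmax)   # population variance (Excel VARP)
sd = statistics.pstdev(rounded_tmax)       # population standard deviation
print("mean = %.1f, variance = %.2f, sd = %.2f" % (mean, var, sd))

# The comment rounds the SD to 0.4 and applies a coverage factor of 2:
expanded = 2 * round(sd, 1)
print("%.1f +/- %.1f (relative uncertainty about %.0f%%)"
      % (mean, expanded, 100 * expanded / mean))
```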

old cocky
Reply to  Tim Gorman
April 18, 2024 1:54 pm

26.3 +/- 0.8 or a relative uncertainty of 3%

You’ve done it again. Not that relative uncertainty matters in this case.

Reply to  old cocky
April 20, 2024 6:18 am

Just using the data that is given. Of course the relative size will change if you rescale it. But when the measurements are based on a specific unit value then calculating the relative uncertainty in *that* unit value is acceptable.

old cocky
Reply to  Tim Gorman
April 20, 2024 2:01 pm

Tell that to M. Carnot.

Reply to  Tim Gorman
April 18, 2024 11:03 pm

Dear Tim,

Depending on the experiment, you can derive a second-decimal-place mean (or more) from a large population of imprecise samples. I agree, however, that creating a ‘trend’ that is due to the way it is calculated, and not the data, is misleading.

Before getting hot and sweaty about this and yelling and screaming, you must understand the fundamentals of least-squares regression, which, as I’ve said repeatedly, cannot choose where it goes or simply float around. The slope of a least squares line minimises the squared differences between the fitted line and the Y-values. It also passes through the X, Y grand means. It does not and cannot ‘choose’ where it goes based on the CIs of individual data points.
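
The grand-means property stated here is easy to verify for yourself; a minimal sketch with arbitrary made-up data:

```python
# Minimal sketch: an ordinary least-squares line passes through (x-bar, y-bar).
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(20.0)
y = 25.0 + 0.05 * x + rng.normal(0.0, 0.3, x.size)  # arbitrary example data

slope, intercept = np.polyfit(x, y, 1)
print("fitted value at x-bar: %.6f" % (slope * x.mean() + intercept))
print("y-bar:                 %.6f" % y.mean())  # identical, to rounding
```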
 
So to the homework task …

I had not actually analysed that small sample of data before. However, I did mention that it was from an AWS and, recalling previous discussions with Lance and others and Karlo, T-sensors are probably accurate to at least three decimal places. The data show there is one Tmax value/day, out of a total of 40 samples, which together have the population SD listed.

If you divide the SD by the Tmax value, you get a very rough estimate of the CV, which, if you graph it, seems to have a sine-wave or some other hidden signal.

Dates are in DeciYears, so 0.797 * 12 months is 9.5, so you know the data start in mid-September, which is spring in Oz. If you had graphed the data first, you would have found an increasing least-squares trend of 17.26 DegC over the interval, which is 29 days = 0.595 DegC/day, so SST is warming rapidly.

Going to my stats application (or using Excel), I find that while the data are probably normally distributed, they are not independent. Residuals show a ‘hidden’ signal. The x-y plot shows the increase in SST may not be linear, but the rate may change depending on the temperature. This behavior is likely to be accounted for by a second-order polynomial.

The polynomial fit does three things. One is that it provides a better fit to the data (R^2 increases and AIC declines); secondly, it shows deceleration in the rate of warming as SST increases (the coefficient is negative); and thirdly it removes autocorrelation, which is a source of dependence from one time to the next.
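
A minimal sketch of the linear-versus-quadratic comparison described here, using the Davies Reef Tmax series quoted earlier in the thread. The slope is reported per year of the DeciYear axis; the AIC and residual-autocorrelation checks mentioned above are not reproduced.

```python
import numpy as np

# Davies Reef DeciYear and Tmax values as quoted in the thread.
deciyear = np.array([
    1991.797, 1991.799, 1991.802, 1991.805, 1991.808, 1991.810, 1991.813,
    1991.816, 1991.819, 1991.821, 1991.824, 1991.827, 1991.830, 1991.832,
    1991.835, 1991.838, 1991.841, 1991.843, 1991.846, 1991.849, 1991.851,
    1991.854, 1991.857, 1991.860, 1991.862, 1991.865, 1991.868, 1991.871,
    1991.873])
tmax = np.array([
    25.84, 25.48, 25.50, 25.55, 25.83, 25.77, 25.84, 25.80, 25.83, 25.80,
    26.09, 26.09, 26.19, 26.25, 26.42, 26.45, 26.35, 26.68, 26.70, 26.72,
    26.50, 26.55, 26.40, 26.53, 26.44, 26.55, 26.79, 26.91, 26.89])

t = deciyear - deciyear[0]  # years since the first observation

def r_squared(y, fitted):
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

lin = np.polyfit(t, tmax, 1)    # straight line
quad = np.polyfit(t, tmax, 2)   # second-order polynomial

print("linear slope: %.1f degC/year (%.3f degC/day)" % (lin[0], lin[0] / 365.25))
print("R^2 linear:    %.3f" % r_squared(tmax, np.polyval(lin, t)))
print("R^2 quadratic: %.3f" % r_squared(tmax, np.polyval(quad, t)))
```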

The physical explanation is that as solar radiation increases under clear skies in September, SST also increases as expected, but evaporation increases also. So the increase in SST ‘decelerates’ due to a temperature-dependent feedback that removes heat from the sea surface via evaporation. Other research I’ve done shows that by the end of November to mid-December, SST has achieved a stable state of around 28.5 degC where it remains until the end of summer (early March).

So the data are not skewed, they don’t show kurtosis, they don’t need rounding-down or tampering-with, they simply reflect a process.

As they are measured by an electronic thermometer, calibrated by a NATA-certified lab, data are also ‘accurate’.

All this is based on sound analytical practice – graph the data first, work out what it represents, analyse it then explain what is happening using sound physical reasoning.

And oh, as it is of no help at all, toss the GUM in the bin.

All the best,

Bill Johnston

Reply to  Bill Johnston
April 19, 2024 3:43 am

As they are measured by an electronic thermometer, calibrated by a NATA-certified lab, data are also ‘accurate’.

Hypocrite and imbecile — a NATA-certified laboratory (the real word is accredited) is required to report uncertainty of their results according to the GUM, not “error”. If they didn’t use the GUM, they would lose their accreditation.

You lied about this yesterday when I pointed your little “error’ out.

You have ZERO knowledge of real instrumentation and calibration, especially and including ISO 17025.

And oh, as it is of no help at all, toss the GUM in the bin.

Poor bill the ranter simply can’t abide that the world has left him in the dust.

So he lies about it.

And the simple fundamental fact remains that uncertainty is not error.

Reply to  karlomonte
April 19, 2024 3:56 am

No dumbo,

Instruments carry a calibration certificate issued by a NATA-certified lab.

Such a certification does not guarantee their performance in the field.

Instrument error as you know is 1/2 the interval range.

Could not do the homework task? You light-bulb you!

All wind and no instrument!

Cheers,

Bill

Reply to  Bill Johnston
April 19, 2024 4:17 am

No dumbo,

Instruments carry a calibration certificate issued by a NATA-certified lab.

I’ve been the technical manager for an accredited calibration laboratory according to ISO17025, which requires use of the GUM.

You’re a liar, bill, who doesn’t know WTF you rant about.

Reply to  karlomonte
April 21, 2024 1:12 am

Just a Karlo dream,

Instead of being here, you could still be there giving all those workers the $hits. So for their sakes it is wonderful that you moved on to abuse those beyond your former wokeplace.

I bet you got a plastic Swatch and a grand farewell where everybody left early!

b

Reply to  Bill Johnston
April 21, 2024 6:11 am

Despite your obsession with me and all your lame insults, the fact remains that the GUM is the worldwide standard for reporting uncertainty of measurement results.

“You’re a liar, bill, who doesn’t know WTF you rant about.”
— me.

Reply to  Bill Johnston
April 19, 2024 4:18 am

And you still can’t comprehend that uncertainty is not error.

Who is the “dumbo” here?

Reply to  karlomonte
April 19, 2024 3:03 pm

The whole thing is nicely explained here:

https://www.esscolab.com/uploads/files/measurement-guide.pdf

Have a nice day,

Saturday calls and I’m moving on from this boring conversation.

b.

Reply to  Bill Johnston
April 19, 2024 4:05 pm

HAHAHAHAHAHAHAHAHAHAH

That is all straight out of the GUM and ISO17025!

—————————
16 Further reading

[1] BIPM, IEC, IFCC, ISO, IUPAC, IUPAP, OIML. Guide to the Expression of Uncertainty in Measurement. International Organization for Standardization, Geneva. ISBN 92-67- 10188-9, First Edition 1993, corrected and reprinted 1995. (BSI Equivalent: BSI PD 6461: 1995, Vocabulary of Metrology, Part 3. Guide to the Expression of Uncertainty in Measurement. British Standards Institution, London.) 

[10] International Standard ISO/IEC 17025 General Requirements for the competence of testing and calibration laboratories, First Edition 1999, International Organization for Standardization, Geneva. 

Annex A – Understanding the terminology

Type A evaluation of uncertainty
evaluation of uncertainty by statistical methods

Type B evaluation of uncertainty
evaluation of uncertainty by non-statistical methods

You are a really, really bad troll.

And you still don’t understand that uncertainty is not error.

Reply to  karlomonte
April 19, 2024 4:48 pm

Of course it is but the equations and their meanings are straight out of Stats 1.01.

Variance is still the variance, the SD is still the SD, the SE is still the SE and the spread factor is just a woked-up version of Student’s t!

Instrument uncertainty is still 1/2 the interval range, a least-squares line still minimizes the sum of the squared deviations of each data point from the line, and the line still goes through grand-mean X & Y. Slope (trend) is still determined by x.

Nothing new under the sun.

What happened to your homework?

Dog ate it, couldn’t do it, too used to bossing others around or calling them liars.

All gas and no engine our Karlo.

b

Reply to  Bill Johnston
April 19, 2024 6:58 pm

GFY, troll.

You don’t even bother to read the links you push.

old cocky
Reply to  Bill Johnston
April 19, 2024 4:38 pm

I might be missing something, but it looks very much like what your sparring partners try to say, at least before everybody gets stroppy.

old cocky
Reply to  old cocky
April 19, 2024 5:53 pm

The uncertainty budget table was nice.

Reply to  old cocky
April 19, 2024 6:46 pm

Thanks. There is a ton of information online about uncertainty and measurements. Metrology has become a recognized field of study requiring an engineering degree. I did just a quick search and found 2 jobs with comprehensive requirements, both requiring a PE certification.

Reply to  Jim Gorman
April 19, 2024 7:02 pm

No surprise at all, the GUM has become a de facto international standard by virtue of ISO 17025 using it.

Reply to  old cocky
April 19, 2024 7:00 pm

No you are not missing anything, oc.

He claims, again and again, that Type B uncertainties are “woke”, then goes and tells people to read docs that present material straight out of the GUM he derides so much.

Reply to  old cocky
April 19, 2024 8:55 pm

Dear Old Cocky,
 
I think what they are on about carries with it a high level of uncertainty.

I doubt that either of them would go to a worksite with a computer and a tape measure in order to provide an uncertainty estimate for every lump of wood or steel or brick they measured. So, except for undertaking busy-work and getting in the way, what is the practical use of their fixation on the GUM?
 
My fishing mate and I cut up some RHS the other day using a tape, a rule and a cutoff saw but not a GUM-budget in sight. When we welded it together all the bits fitted perfectly, square in all dimensions. Ready now for a coat of paint.
 
They built a house up the road. The blokes came with a bobcat, levelled the site, plumbers and sparkies put down their pipes using laser gear, ditto for another crew that did the formwork and reo; others came with a concrete pump to do the pour, then all the walls and trusses, pre-cut to specs arrived with another crew a week later. Up it went in a few days, then the brickies, and roofers, plasterers … hardly any waste at all, half-a small skip, very few timber off-cuts and no GUM in sight. So, what is the use of it?
  
The length of string example in the “Beginner’s Guide” I referenced (Section 9), calculated for a length of string averaging 5.07 m (from N=10 observations), a combined uncertainty of 12.8 mm (0.0128 m) or 0.25% of 5.07 m (see their Table 1).  However, the tape can still be observed to ½ the interval range, which in the example is +/- 0.5 mm. So how do they get 12.8 mm uncertainty from 0.5 mm?
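
For readers unfamiliar with how such a figure is built, here is a minimal sketch of the root-sum-square combination that a GUM-style budget uses, with hypothetical component values rather than the guide’s actual Table 1. The point is only that several modest contributions, combined in quadrature, can easily exceed the 0.5 mm reading resolution taken on its own.

```python
# Minimal sketch of a quadrature (root-sum-square) uncertainty budget.
# The component values below are HYPOTHETICAL, chosen only to illustrate the
# arithmetic; they are not the Beginner's Guide's actual numbers.
import math

components_mm = {
    "repeatability of the 10 readings (s/sqrt(10))": 2.0,  # hypothetical
    "tape calibration and stretching":               3.0,  # hypothetical
    "string not perfectly straight (rectangular)":   5.8,  # e.g. 10 mm / sqrt(3)
    "reading resolution (0.5 mm / sqrt(3))":         0.3,
}

combined = math.sqrt(sum(u * u for u in components_mm.values()))
expanded = 2.0 * combined  # coverage factor k = 2
for name, u in components_mm.items():
    print("%-47s %.1f mm" % (name, u))
print("combined standard uncertainty: %.1f mm" % combined)
print("expanded (k = 2):              %.1f mm" % expanded)
```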
 
So, a big hairy, burly carpenter or brickie called Shirley with years of experience in the game, goes down there in his enormous black RAM loaded with tools and surf boards, measures 5 m of a bit of string using a ‘certified’ tape to +/- 0.5 mm, and with the GUM under their arms, Tim and Karlo try to convince him he is out by 12 mm or so!! Really!
 
And they think that by shuffling around some numbers they can demolish global warming without having any idea that least squares regression minimizes the sum of the squared deviations of each data point from the line, and that the line goes through grand-mean X & Y!! Double really??
 
I put up an example, and neither of them could come to grips with the dataset. Karlo could not do it, or he ate it or something. Jim did not even do a sequence graph, so got lost by thinking it was just a pile of y’s with no time structure.
 
So, aside from wasting time and providing them with a target they can throw a tantrum at, what exactly in plain English is the use of all this GUM? I personally have no idea and I’m sure they don’t either.
 
All the best,
 
Bill

Reply to  Bill Johnston
April 19, 2024 9:04 pm

I think what they are on about carries with it a high level of uncertainty

You don’t know what the word means, troll.

 what exactly in plain English is the use of all this GUM? I personally have no idea and I’m sure they don’t either.

Then why do you promote it, troll?

Reply to  karlomonte
April 20, 2024 4:56 am

I don’t ‘promote’ it as such Karlo.

The GUM is really important in the calibration and certification of instruments, including thermometers, survey and medical equipment and all the rest. I get that, and I have calibrated suspect thermometers using an ice bath and long-stem laboratory thermometers observed simultaneously over a range of temperatures. Same same with finely-tuned laboratory balances. Not hard, and something that used to be taught in physics and chemistry 1.01.

However, the GUM has no place in mensuration – the measurement of things using such calibrated and certified instruments.

Differences in measuring the same bit of string 10 times simply reflected measurement error, not instrument error, which is fixed at 1/2 the interval range: 1/2 of 1mm in that case.

So we need the Karlos and Tims of this world tucked-away in their labs, but what is needed on worksites, including at meteorological offices, is good practice – protocols that minimise avoidable error in measurement and which thereby promote accuracy.

I think this conversation needs to fizzle out.

Cheers,

Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
April 20, 2024 8:20 am

I don’t ‘promote’ it as such Karlo.

Liar. You post links that do (which you should read, you might learn something. Naaa, forget that, you are incapable of learning).

However, the GUM has no place in mensuration – the measurement of things using such calibrated and certified instruments.

HAHAHAHAHAHAHAAH

Go read your own links, troll.

And despite all your ridiculous mewlings about the GUM, it remains the international standard for uncertainty. It is used everywhere.

I think this conversation needs to fizzle out.

Yet here you are, again. Your religious attitude compels you to step on me to boost yourself.

Get help for your obsession.

Reply to  karlomonte
April 21, 2024 12:52 am

Bullshit Karlo,

It is the instrument that is certified, not the measurements.

Back the front as usual, give yourself a kiss.

b

Reply to  Bill Johnston
April 21, 2024 5:54 am

However, the GUM has no place in mensuration – the measurement of things using such calibrated and certified instruments.

This is a joke right? Even calibrated and certified instruments have a specified calibration interval and are calibrated under closely specified conditions. Things like pressure, temperature, humidity, etc. Unless measurements are taken under these exact same conditions, there is uncertainty involved.

The GUM was derived because of the need for globally accepted expression of what a measurement consisted of.

From the GUM:

0.1 When reporting the result of a measurement of a physical quantity, it is obligatory that some quantitative indication of the quality of the result be given so that those who use it can assess its reliability. Without such an indication, measurement results cannot be compared, either among themselves or with reference values given in a specification or standard. It is therefore necessary that there be a readily implemented, easily understood, and generally accepted procedure for characterizing the quality of a result of a measurement, that is, for evaluating and expressing its uncertainty.

Calibrated instruments moved into the field immediately begin what is commonly called drift. Nothing stays the same. Environmental conditions vary meaning measurements have uncertainty beyond calibration. It is one reason it is expected that conditions of repeatability be used to determine the variance in measurements over a short period of time when measuring THE SAME THING. Temperatures vary in time so multiple measurements are impossible.

Like it or not, instrument resolution is not the only uncertainty item to be considered. That is why national bodies recommend the assessment of repeatability uncertainty, reproducibility uncertainty, and long-term uncertainty.

Repeatability is unmeasurable since measurements of the same thing are impossible with atmospheric temperatures. Your national bodies should have a standard deviation to be used for different measurement systems. In the U.S., NOAA specifies ±1.8°F for ASOS stations and 0.3°C for CRN stations, both with a resolution of 0.1°.

For a declared measurand of T_monthly_average, one must also include a calculation for reproducibility.

Here are Tmax temps from March 2022 at Topeka Forbes in the U.S.

77,87,61,73,80,41,33,47,42,26,32,38,68,58,64,76,73,46,65,76,63,57,41,44,57,58,52,60,82,59,49

Here is the output of the Uncertainty Machine used by NIST for evaluating the uncertainty in a series of measurements.

[image: NIST Uncertainty Machine output]

This uncertainty must be propagated throughout any further calculations. We all know that is not being done.
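
A minimal sketch of the plain sample statistics of the Topeka Forbes March 2022 Tmax series quoted above. This is not a reproduction of the NIST Uncertainty Machine run, whose u(y) depends on how the inputs were specified; it only shows the size of the day-to-day spread that any monthly-average uncertainty treatment has to contend with.

```python
import math
import statistics

# March 2022 Tmax values (deg F) at Topeka Forbes, as quoted above.
tmax_f = [77, 87, 61, 73, 80, 41, 33, 47, 42, 26, 32, 38, 68, 58, 64, 76,
          73, 46, 65, 76, 63, 57, 41, 44, 57, 58, 52, 60, 82, 59, 49]

mean = statistics.mean(tmax_f)
sd = statistics.stdev(tmax_f)            # day-to-day spread of Tmax
sem = sd / math.sqrt(len(tmax_f))        # spread of the monthly mean itself
print("n = %d, mean = %.1f F, sd = %.1f F, sd/sqrt(n) = %.1f F"
      % (len(tmax_f), mean, sd, sem))
```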

Reply to  Jim Gorman
April 21, 2024 6:23 am

This is a joke right? Even calibrated and certified instruments have a specified calibration interval and are calibrated under closely specified conditions.

Sadly, no it isn’t. This guy doesn’t understand anything about instrumentation, metrology, uncertainty, lab accreditation, etc, and goes ape if anyone dares to expose his rank ignorance.

Reply to  Bill Johnston
April 21, 2024 6:20 am

Hi, asshole:

You just make shit up as you go, hoping to buffalo people.

Oh look, the National Institute of Standards and Technology, top of the food chain for metrology traceability in the USA, uses the GUM:

https://uncertainty.nist.gov/

The NIST Uncertainty Machine implements the approximate method of uncertainty evaluation described in the “Guide to the expression of uncertainty in measurement” (GUM), and the Monte Carlo method of the GUM Supplements 1 and 2.

Sux2BU

Reply to  Bill Johnston
April 20, 2024 5:26 am

“… which, as I’ve said repeatedly, cannot choose where it goes and simply float around. The slope of a least squares line minimises the squared differences between the fitted line …”

The issue is your “fitted line”. How do you determine the “fitted line” in the face of measurement uncertainty? If you can’t determine the slope of any segment, i.e. data_point_1 to data_point_2, because of the measurement uncertainty, then you have no way to determine the overall slope of a single “fitted line”. This is especially true when the starting data point and the ending data point of the entire data set fit within the measurement uncertainty interval.

If you assume the measurement uncertainty of the temperature data is at least +/- 0.5C, then when the beginning and end of your temperature data are within that 1C interval you simply don’t know what the “fitted line” actually is.

Climate science’s answer to this? Ignore the measurement uncertainty of the data, including the variance of the temperature data sets used to calculate monthly and annual averages.

You’ve been asked before to provide the variances of your data sets. I have yet to see an answer. Not a single study on climate that I have ever read does anything about weighting data based on its variance when combining different data sets such as from the northern hemisphere and southern hemisphere. Just jam the data together and calculate an average while ignoring variance – a direct metric for the accuracy of the average.

There *is* a reason why, even in basic stat classes, it is taught that the average by itself is an insufficient statistical descriptor of the data. Unless you are in climate science, that is.

Reply to  Tim Gorman
April 20, 2024 8:22 am

If you assume the measurement uncertainty of the temperature data is at least +/- 0.5C

He doesn’t understand what uncertainty is, and rejects the notion that it propagates.

Reply to  karlomonte
April 20, 2024 4:11 pm

I just did some work with the NIST Uncertainty Machine. I loaded January 2022 Tmax and March 2022 Tmax into the machine. Lo and behold, what did it give me for u(y)? 16 and 14.1! Hilarious!

You can see the outputs at:

https://ibb.co/Ht0cwL1

Hopefully it will load ok. I’m trying to find another way to post images.

This may work also.

[image: NIST Uncertainty Machine output]

Reply to  Jim Gorman
April 20, 2024 4:58 pm

Yes, both worked. From data like these we are supposed to believe it is possible to resolve changes of 10 mK!

Reply to  karlomonte
April 21, 2024 8:06 am

You’ll notice bdgwx hasn’t said a word. He was the one touting the UCM so I’ve been playing with it. No wonder we haven’t heard anything.

Reply to  Jim Gorman
April 21, 2024 10:16 am

Maybe he’s not reading, it will be worth showing it again when the next UAH rolls around.

Reply to  Jim Gorman
April 22, 2024 7:09 am

From appearances, he seems to have a job. They cut into your WUWT time…

Reply to  Tim Gorman
April 20, 2024 8:26 pm

Dear Tim,
 
Your answer, and most of the preceding discussion, shows you have never undertaken a course in basic statistics in your life. The measurement uncertainty of a value observed using a meteorological thermometer is no more and no less than 0.25 degC – rounds to 0.3 degC. It is not 0.5, it is not 1.34, it is not 27 or 135, it is +/- 0.3 degC.
 
You are telling me that if Shirley the carpenter/brickie measured and cut off a piece of string to the nearest 0.5 mm, your GUM would claim he was out by 11 mm or so. I have a rough idea where Shirley would tell you to stick your GUM, and if you were too slow to clear out he might even follow through and do it himself.
 
Variance is the sum of squared deviations from the mean divided by N for a population, and N-1 for a sample from a population. For temperature the units of variance are degC^2, hence it is referred to generically as sigma^2. Given that equation, and/or using Excel VAR or VARP, you yourself can determine the variance of any set of numbers you choose, including temperature data. YOU ARE A BIG BOY NOW AND I DON’T NEED TO SPOON-FEED YOU WITH VARIANCES – do it your frigging self, lazy pr$k. And if you want to analyse trend in climate data weighted by their variance, that can be done too.
 
Your latest stanza is just bizarre.
 
I don’t determine the fitted line. Again: The slope of a least squares line minimises the squared differences between the fitted line and the Y-values. It also passes through the X, Y grand means. It does not and cannot ‘choose’ where it goes based on the CIs of individual data points.
 
 
As I have previously stated, if you (meaning YOU) want to determine “the slope of any segment, i.e. data_point_1 to data_point_2” you could, but why would YOU do that? First differences divided by the time interval would give YOU that slope, but so what?
 
You are clearly off your rocker. Take up string-measuring or go talk to Karlo.
 
b.

Reply to  Bill Johnston
April 20, 2024 8:45 pm

 The measurement uncertainty 

You don’t know what the word means, troll.

I have a rough idea where Shirley would tell you to stick your GUM

So sad for poor little bill the troll, the GUM is in use worldwide, and all his ranting can’t change it.

You are clearly off your rocker.

Projection time for the little man troll with the huge ego.

Take up string-measuring or go talk to Karlo.

Get help asap for your obsession, troll.

I am very glad I don’t have to deal with your narcissism in real life.

Reply to  Bill Johnston
April 21, 2024 8:17 am

If the data has uncertainty then you DO NOT KNOW which combination of uncertain data defines the REAL regression line. It is that simple.

YOU MUST DEAL WITH THE UNCERTAINTY.

One regression line simply does not encompass the different combinations of unknown temperatures. You must evaluate all the possible combinations to obtain an idea where the real line could lie. This has nothing to do with the residuals around any given regression line.
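
A minimal sketch of the procedure being proposed here, with hypothetical stated values and an assumed uniform +/- 0.5 degC measurement uncertainty: perturb the readings within that interval many times and look at the spread of least-squares slopes that results.

```python
# Minimal sketch, hypothetical data; the uniform +/- 0.5 degC perturbation is an assumption.
import numpy as np

rng = np.random.default_rng(3)
x = np.arange(10.0)  # e.g. years 0..9
y = np.array([14.2, 14.4, 14.1, 14.5, 14.3, 14.6, 14.4, 14.7, 14.5, 14.8])  # hypothetical stated values, degC

slopes = []
for _ in range(5000):
    y_perturbed = y + rng.uniform(-0.5, 0.5, y.size)
    slopes.append(np.polyfit(x, y_perturbed, 1)[0])
slopes = np.array(slopes)

print("slope from stated values: %.3f degC/step" % np.polyfit(x, y, 1)[0])
print("slope spread under the assumed uncertainty: %.3f to %.3f (2.5th to 97.5th percentile)"
      % (np.percentile(slopes, 2.5), np.percentile(slopes, 97.5)))
```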

Reply to  Tim Gorman
April 15, 2024 5:11 pm

Oh dude, did you factor-in mobile phones?

And what if you had?

b

Reply to  Bill Johnston
April 15, 2024 5:37 pm

Off your meds now?

Reply to  Bill Johnston
April 15, 2024 6:14 am

“No one claims a single value is 100% accurate.”

You do! When you use the trend line developed using ONLY the stated values, you are assuming they are 100% accurate. You can’t even admit that you can’t tell the actual slope of the trend line between two different consecutive data points unless the difference is greater than the uncertainty interval of the data point measurements.

“It is also not a LIE that The slope of a least squares line minimises the squared differences between the fitted line and the Y-values. It also passes through the X, Y grand means.” (bolding mine, tpg)

So what? Are the “grand means” supposed to be 100% accurate? That would require the application of the meme that “all measurement uncertainty is random, Gaussian, and cancels”. Including all the systematic uncertainty!

“Do you still not understand what the standard deviation is? Standard error?”

Standard deviation and standard error are *NOT* the same. Standard deviation is a statistical descriptor of *all* of the data points. Standard error is a statistical descriptor of the sampling error associated with calculating an average of the data points from samples. Again, they are *NOT* the same. Standard deviation of the data points is one metric for the accuracy of the mean. Standard error is a metric for how precisely you have located the mean of the data from sampling, it is *NOT* a metric for the accuracy of the mean.

Do YOU understand the difference?
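
A minimal sketch of the distinction drawn above, with made-up data: the standard deviation describes the spread of the data themselves, while the standard error describes how tightly repeated sampling pins down the mean; neither, on its own, says anything about systematic inaccuracy.

```python
# Minimal sketch, made-up data only.
import numpy as np

rng = np.random.default_rng(4)
population = rng.normal(10.0, 3.0, 100000)  # hypothetical data with sd = 3

sample_means = [rng.choice(population, 50).mean() for _ in range(2000)]
print("SD of the data:                %.2f" % population.std(ddof=1))
print("SD of sample means (n = 50):   %.2f" % np.std(sample_means, ddof=1))
print("theoretical SEM = sd/sqrt(50): %.2f" % (population.std(ddof=1) / np.sqrt(50)))
```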

“Have you not heard of bootstrapping – several thousand iterations in a few seconds?”

I said nothing about how long it would take, only how many would be needed. Do you have a reading comprehension problem?

“Have you done any measurands today? Can you do stats and do GUM at the same time?”

Yep. I added this morning’s temperature, humidity, wind, and pressure readings to my daily logbook. I also added the 5 minute data points from local midnight to 6AM to my database.

I know what the measurement uncertainty of my measurement equipment is also. I don’t just assume that any measurement uncertainty will cancel over time (since 2012 when I started collecting data) leaving the meter indication as 100% accurate.

I also know what the variance of my data is. I don’t just quote the average monthly temperatures while ignoring the variance of the monthly data. I don’t average the monthly averages together to form an annual average while ignoring the different variances of temps for each month.

You should note carefully that the main GUM equation for propagating measurement uncertainty is *NOT* a statistical descriptor. It is an ADDITION of the measurement uncertainties of the data elements. A quadrature addition, but still an addition and not an “average”. The GUM does *NOT* assume that all measurement uncertainty is random, Gaussian, and cancels. The GUM does *NOT* assume that the measurement uncertainty is random with no systematic component.

“Can you provide a worked example?”

TN1900, Example 2.

Reply to  Tim Gorman
April 15, 2024 4:11 pm

You like twisting words and conflating meanings don’t you?

As I’ve said repeatedly, variance is calculated in Excel as either VAR or VARP, and SD as STDEV or STDEVP. SEM refers to an estimate related to a mean, which takes into account the number of samples. SE*t = CI for the mean. (I use grown-ups’ programs for stats, not Excel.)

Regression does not assume single values are “accurate”, nor have I ever claimed them to be.

I have also never said “you can’t tell the actual slope of the trend line between two different consecutive data points unless the difference is greater than the uncertainty interval of the data point measurements”.

What I have said is that for two values to be different, their means must lie outside respective CI’s.

You want to do slopes between individual values, use LOWESS or a smoothing spline (if that is what you want – can’t see the use of it in this case tho … maybe you can, who knows what goes-on inside your head.)

Your own worked example using real data, not somebody else’s.

The GUM is just re-woked statistics 1.01.

Bill

Reply to  Bill Johnston
April 15, 2024 5:38 pm

The GUM is just re-woked statistics 1.01.

You wish.

Reply to  Bill Johnston
April 16, 2024 6:01 am

“(I use grown-up’s programs for stats not EXCEL.)”

Certainly more capable, but do they give you different results?

Reply to  Bill Johnston
April 18, 2024 4:00 am

“SEM refers to an estimate related to a mean, which takes into account the number of samples. SE*t = CI for the mean. (I use grown-up’s programs for stats not EXCEL.)”

Which is the interval in which the population mean might lie. It has absolutely NOTHING to do with how accurate that calculated mean might be. Even in the case of multiple samples of large size from a severely skewed or multi-modal distribution, the sample means will still generate a Gaussian distribution according to the CLT. The problem there is that none of the programs take measurement uncertainty into account, which expands the standard deviation of the CLT-generated Gaussian distribution.

Example:
measurement data: x1 +/- u1, x2 +/- u2, …., xn +/- un
sample 1 data: x1 +/- u1, x13 +/- u13, x100 +/- u100, …., x663 +/- u663
sample 2 data: x7 +/- u7, x23 +/- u23, x402 +/- u402, …, x947 +/- u947
sample 3 data: …..
.
.
.
sample 100 data: x49 +/- u49, …, x803 +/- u803

Your program will only calculate the standard deviation of the means derived from the stated values in the sample data. In your program of SE * t = CI you must know two of the three values. “t” you should know, since it is based on the sample size (not the number of samples). Since SE is what you are trying to find, how do you determine CI? Do you estimate it from the standard deviation of the samples? Do you average the standard deviations of the samples? Be specific.

What do you do with the uncertainty values from the sample data? Your sample means should actually be given as m1 +/- us1, m2 +/- us2, …, mn +/- usn (where us1 is the propagated uncertainty of the sample 1 data). us1, us2, …, usn should be propagated onto the average of m1, m2, …, mn. 

What you will wind up with is an expanded standard error conditioned by the uncertainty in the spread of the mean values of the samples. Meaning that no matter how large you make your sample size, your calculated mean for the population will always have at least an uncertainty propagated from the uncertainties of the measurement data.

Think about it. If your sample size is the entire data set you can calculate the mean very precisely. It *will* be the actual mean of the data. But that mean will also be conditioned by the propagation of the measurement uncertainty in the data as laid out in GUM Equation 10, i.e. a quadrature addition of the measurement uncertainties.

In essence your SE becomes something like SE = (CI +/- measurement uncertainty)/t

Where do *YOU* include the measurement uncertainty in SE = CI/t?

Answer? *YOU* don’t. You may not even realize it but you *are* using the climate science meme of “all measurement uncertainty is random, Gaussian, and cancels”.
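
A minimal sketch of the quadrature (root-sum-square) propagation referred to above, for the simple case of an average of independent readings each carrying its own standard uncertainty; the u_i values below are hypothetical.

```python
# Minimal sketch: for y = (x1 + ... + xn)/n with independent readings, the GUM
# law of propagation with sensitivity coefficients 1/n reduces to
# u(y) = sqrt(sum(u_i^2)) / n.
import math

u = [0.5, 0.5, 0.6, 0.4, 0.5, 0.5, 0.6, 0.5]  # hypothetical u_i, degC
n = len(u)

u_mean = math.sqrt(sum(ui * ui for ui in u)) / n
print("propagated standard uncertainty of the average: %.2f degC" % u_mean)
# Note this carries the individual measurement uncertainties forward rather
# than assuming they cancel; it is not the SEM of the stated values.
```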

“The GUM is just re-woked statistics 1.01.”

You don’t even have a basic understanding of the GUM. Your opinion is not credible.

Reply to  Tim Gorman
April 18, 2024 4:50 am

“The GUM is just re-woked statistics 1.01.”

You don’t even have a basic understanding of the GUM. Your opinion is not credible.

The understatement of the week.

Reply to  Nick Stokes
April 12, 2024 7:40 pm

I corrected you hard on this a few years ago when you kept claiming USHCN was “obsolete.” After I showed you several times that it was a false claim, because they were still recording the data every day, you persisted in your “it is obsolete” lies until I posted the URL showing the daily update; then you vanished immediately.

You have a bad habit of posting MISLEADING, occasionally flat-out lying, claims, which many here see through quickly, which is why you are not considered reliable and get a lot of down votes.

It is still updated daily to this day and thus is not considered “obsolete” by NOAA.

Nick Stokes
Reply to  Sunsettommy
April 12, 2024 10:59 pm

The question is whether it is used to compute an average in the way described. It is not. NOAA has not computed a USHCN average since 2014.

Here is a 1/2017 Wayback capture of their notice:

[image: Wayback Machine capture of the NOAA notice]

April 12, 2024 6:39 am

Reminds me of Asterix and the Soothsayer –

[image: Asterix v19, “Prolix can control the Gauls”]
SteveZ56
Reply to  PariahDog
April 12, 2024 11:56 am

Interesting – I had seen Asterix cartoons in France back in the ’90s, but I didn’t know that English translations were available. There’s a theme park based on Asterix about 30 miles north of Paris, which is more popular than EuroDisney.

April 12, 2024 6:54 am

And then there’s UAH and RSS satellite data, which, within the statistical range of uncertainty, show pretty much the same thing as the global surface temperature producers.

In fact, over the past 15 years UAH has warmed at a slightly faster rate than GISS (+0.33 versus +0.32 C per decade), which is the fastest-warming of the surface data sets over that period.

So the surface record can’t be too badly out; unless we’re throwing UAH under the bus now, too?

[image: UAH v GISS trend comparison]
Mr.
Reply to  TheFinalNail
April 12, 2024 7:08 am

What if different decades have different numbers of leap years and extra days in them?

(Note for other readers here –
I don’t actually care if TFN has an answer for this, I’m just hoping it sends him scrambling for his spreadsheet for a bit, and keeps him busy deciding if his Temps constructs should have 4 decimal places or just the usual 3).

Reply to  Mr.
April 12, 2024 7:41 am

What if different decades have different numbers of leap years and extra days in them?

We already know that they do. What would that have to do with anything regarding monthly temperature updates?

Trying to Play Nice
Reply to  TheFinalNail
April 12, 2024 7:12 am

Why don’t you understand cyclic processes and cherry picking of start points? I think that final nail was driven into your head and whatever brain was there leaked out.

Reply to  Trying to Play Nice
April 12, 2024 7:42 am

Why don’t you understand cyclic processes and cherry picking of start points?

A la Monckton? Don’t think he ever got as far as 15-years though….

ducky2
Reply to  TheFinalNail
April 12, 2024 9:43 am

What does Monckton have to do with this?

Reply to  ducky2
April 12, 2024 11:45 am

He has a case of MDS.

Reply to  ducky2
April 12, 2024 3:47 pm

What does Monckton have to do with this?

Cherry-picking start-dates.

ducky2
Reply to  TheFinalNail
April 12, 2024 9:47 pm

Monckton never used the data to predict the future. He was only illustrating the point that the GAT index doesn’t have a strong relationship with increasing CO2 concentrations. It can’t be discerned from whatever a natural increase looked like.

MarkW
Reply to  TheFinalNail
April 14, 2024 9:13 am

Not this lie again. I would think after having been schooled a dozen times or more, you would have given up by now.

Lord Monckton uses today as his starting point. He then calculates backwards to determine how far back you have to go to find a statistically significant difference.

Nick Stokes
Reply to  MarkW
April 14, 2024 4:06 pm

Lord M uses today as the endpoint, and calculates back to see how far you can start and get a zero trend. Nothing about statistical significance.

Reply to  Nick Stokes
April 14, 2024 8:49 pm

Nitpick Nick Strikes Again!

Reply to  TheFinalNail
April 12, 2024 7:13 am

And you picked the last 15 years because why? Doesn’t UAH go all the way back to December 1978?

AlanJ
Reply to  Steve Case
April 12, 2024 7:26 am

They’re consistent across the length of the UAH record:

[image: UAH and surface temperature anomalies compared]

Reply to  AlanJ
April 12, 2024 7:46 am

Just what do you think this graph is telling you? It shows a trend of about 1.4 deg C/century, which implies that the Earth must have had a temperature of 0 K about 20,000 years ago, and in another 6,000 years, the oceans will be boiling. Right?

AlanJ
Reply to  stevekj
April 12, 2024 8:12 am

It implies that the world has warmed between 0.5 and 1 degrees in the past 45 years according to both satellite and surface-based estimates. You cannot reliably extrapolate beyond the bounds of the series. Grade schoolers are taught this.

Reply to  AlanJ
April 12, 2024 9:50 am

Wow- between .5 and 1 deg C. I’m horrified. This is now #1 on my worry list.

AlanJ
Reply to  Joseph Zorzin
April 12, 2024 9:58 am

In context, that is about 10% of the change that occurs across a deglaciation, which typically take on the order of 12kyr, in less than 50 years. But worry is an emotional state, not a rational position, so whatever you feel is valid.

Reply to  AlanJ
April 12, 2024 1:41 pm

I don’t think you have much understanding of what “a rational position” is!

MarkW
Reply to  AlanJ
April 14, 2024 9:17 am

You are, as usual, assuming that all of the warming is due to CO2 and completely discounting the many well known cyclic processes.
Beyond that, you are ignoring the fact that the warming at the end of the last glacial phase is measured using proxies, most of which have centennial, if not millennial, resolution. You have no idea whether there were any periods of quick warming or cooling, because the proxies were not capable of registering them.

AlanJ
Reply to  MarkW
April 14, 2024 9:56 am

I’ve not said a thing about cause in this discussion. We are discussing the observed pattern of change and whether it is substantial. Many proxies have annual resolution.

Reply to  AlanJ
April 12, 2024 10:01 am

Nonsense. The graph is a representation of temperature anomalies. Temperature anomalies can’t warm anything as they have no intrinsic property. Stop lying to everyone and yourself.

Reply to  AlanJ
April 12, 2024 10:01 am

I know that’s what it says. My point is, what conclusions can we draw from a warming of 0.5 and 1 degrees in the past 45 years? Even if that were actually the case (and there are lots of reasons to believe it isn’t)? Or in other words, why did you post this graph?

Reply to  stevekj
April 12, 2024 1:30 pm

We can conclude, assuming those numbers are correct, that the climate has improved!

Reply to  AlanJ
April 12, 2024 10:30 am

“You cannot reliably extrapolate beyond the bounds of the series.”
Nobody told climate scientists, apparently. E.g. the extrapolations being made from the short satellite-based sea level rise record that gives rise to those crazy sea level rise predictions (because the short record fits a quadratic curve quite well).

AlanJ
Reply to  Chris Nisbet
April 12, 2024 11:08 am

Sea level rise estimates are not made simply by linear extrapolation of observed trends, but by estimating quantities like the amount of runoff from melting glaciers and ice caps, or the thermal expansion of warming ocean water. The “there’s no science behind anything, it’s all just statistics” trope is one of the most pervasive myths continuing to fester away on this website, and it speaks to the lack of scientific literacy of the contrarian set.

Reply to  AlanJ
April 12, 2024 12:44 pm

But predictions (or projections) of glacial / icecap runoff, or oceanic expansion, are based on the temperature projections from the climate models. Not necessarily a good base.

Reply to  AlanJ
April 12, 2024 1:32 pm

And if the sea rises, so what? Should we spend hundreds of trillions to try to stop it?

sherro01
Reply to  AlanJ
April 12, 2024 3:05 pm

AlanJ,
Can you link to a scientific paper that gives the value of X in the equation –
global ocean level change = X × global temperature change

With what data can you claim that despite known tectonic change happening, the walls and bases of the ocean can contain a constant water volume, constant enough to be satisfied that changes in level are not affected by change in basin geometry?
Geoff S

AlanJ
Reply to  sherro01
April 12, 2024 4:50 pm

Geoff, in comparison to the history of human civilization, tectonic processes are quite slow. Their effect on the shape of the global ocean basin is not significant in the context of ongoing GMSLR.

sherro01
Reply to  AlanJ
April 12, 2024 5:37 pm

AlanJ,
Try again please. I asked for data, not baseless opinion.
Geoff S

AlanJ
Reply to  sherro01
April 13, 2024 3:25 pm

It is not an opinion that tectonic processes are slow, Geoff. I recommend you pick up a geology textbook sometime.

Reply to  AlanJ
April 13, 2024 3:32 pm

You haven’t given one fact to support your opinion. Are you an expert in tectonic topography along with metrology and heat radiation and thermodynamics so you don’t need to provide references?

AlanJ
Reply to  Jim Gorman
April 13, 2024 7:00 pm

The rifting of Pangea that created the Atlantic Ocean basin began some 200 million years ago; is this not a slow process by your estimation? If you need a reference for this, please pick up any earth science textbook. I recommend you start your education with something aimed at the grade-school level, since you exhibit a rather substantial knowledge gap.

Reply to  AlanJ
April 14, 2024 7:40 am

Again, opinion only. Not one reference that supports your assertion.

Here is a report on a new scientific finding about ocean ridges.

https://www.whoi.edu/press-room/news-release/scientists-report-new-type-of-mid-ocean-ridge-in-remote-parts-of-the-earth/

These can displace water so they can be a component of sea level increase. They can also be a component of heat into the deep ocean.

It’s better than CO2 heating the oceans.

AlanJ
Reply to  Jim Gorman
April 14, 2024 8:19 am

I don’t think linking to research about ultra-slow spreading ridges, interesting as the research is, helps your case that tectonic processes are proceeding rapidly enough to contribute significantly to decade-scale sea level rise. Whereas the thermal expansion of water is a very well understood physical property, and the warming of the oceans is directly observed:

https://climate.nasa.gov/vital-signs/ocean-warming/?intent=121

sherro01
Reply to  AlanJ
April 12, 2024 5:43 pm

AlanJ,
There were 2 questions.
What is the value of X?
Geoff S

Reply to  AlanJ
April 12, 2024 10:02 pm

Sea level rise is nominally about 2.5 mm/year. Tectonic plates have an average rate of horizontal movement of about 25 mm/yr. Humans are being blamed for sea level rise, yet you say that “tectonic processes are quite slow.” Do you see the contradiction?

AlanJ
Reply to  Clyde Spencer
April 13, 2024 3:28 pm

The oceans are about 11,000 meters at their deepest, the earth is about 40 million meters in circumference. I’ll let you puzzle out whether the 2.5 mm/yr vertical rise in sea level is proportionally larger than the 25 mm/yr horizontal motion of the tectonic plates or not.
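
A quick arithmetic sketch of the proportional comparison being made here, using the figures as quoted in the thread (2.5 mm/yr against roughly 11,000 m of ocean depth, 25 mm/yr against roughly 40,000 km of circumference):

```python
# Quick arithmetic with the figures quoted in the thread.
sea_level_rate = 2.5e-3 / 11_000     # fraction of the ocean depth per year
plate_rate = 25e-3 / 40_000_000      # fraction of Earth's circumference per year
print("sea level: %.1e of the depth per year" % sea_level_rate)
print("plates:    %.1e of the circumference per year" % plate_rate)
print("ratio:     about %.0f to 1" % (sea_level_rate / plate_rate))
```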

Reply to  Clyde Spencer
April 15, 2024 7:03 am

Tectonic plates have an average rate of horizontal movement of about 25 mm/yr

Link please? I’m searching and all of my returns are for ~1/10th of that. But maybe it’s a different kind of movement that I’m missing?

Also, horizontal plate movement alone cannot tell you what that piece of ground does vertically. But to throw a bone, we certainly need to account for isostatic adjustments in sea level budgeting….

Reply to  bigoilbob
April 15, 2024 7:52 am

“Link please? I’m searching and all of my returns are for ~1/10th of that.”

Yes, a silly units boner. Got me there….

But anything w.r.t. my second paragraph?

MarkW
Reply to  AlanJ
April 14, 2024 9:22 am

Rebound from past glaciations occurs rapidly enough to be quite noticeable in human lifespans.

AlanJ
Reply to  MarkW
April 14, 2024 9:48 am

Do please look up the difference between eustatic and relative sea level rise and then treat yourself to another attempt at playing.

MarkW
Reply to  AlanJ
April 12, 2024 3:12 pm

And since neither of those factors is known to within a factor of 10, your attempts at defense are just undercutting your case even more.

Reply to  AlanJ
April 12, 2024 4:24 pm

by estimating quantities like the amount of runoff from melting glaciers and ice caps

The “there’s no science behind anything, it’s all just statistics” trope

 it speaks to the lack of scientific literacy of the contrarian set.

Look at your first comment. See those bolded words? That is real science, isn’t it: don’t use real measurements, just estimate what you want.

From Richard Feynman

  • “The first principle is that you must not fool yourself and you are the easiest person to fool.”
  • “Most likely anything that you think of that is possible isn’t true. In fact that’s a general principle in physics theories: no matter what a guy thinks of, it’s almost always false.”

Read these and see what you think of “estimates”.

AlanJ
Reply to  Jim Gorman
April 13, 2024 3:31 pm

“Real measurements” are estimates, since they involve measurement uncertainty. Since you spend 99% of your time on these fora crowing about measurement uncertainty, I’m a little bit surprised to see you taking the asinine position that estimates are unscientific.

Reply to  AlanJ
April 13, 2024 4:47 pm

“Real measurements” are estimates

You know nothing, and I repeat, nothing, about “real measurements.” Your cavalier attitude toward measurements is an affront to every scientist and engineer who obtained a college degree in a physical science. No wonder you try to brush off answering questions concerning measurements. To you they are just numbers on a number line to be moved around however you want.

Tell us just what senior level physical science lab courses you have taken and passed. My PhD lab professors would have failed anyone who mentioned that they used estimates.

My lab books had to have written details of all kinds of things. Serial numbers of all devices used, calibration dates and charts for those devices, exacting measurement procedures, temperature, humidity, pressure, and other environmental conditions, error budgets with appropriate formulas, detailed calculations of measurement error analysis, and lastly, the measurements themselves.

These were taught, not for fun, but because they were expected to be done throughout your career. Exacting measurements required both a probable value and an interval surrounding it on which people could rely for where the actual measurement might be. You didn’t dare to record something you couldn’t support. “Estimates” were verboten for anything official.

Lest you think my training was all “book learning”, I will disabuse that also. My father owned an International Harvester dealership and I cut my teeth on using a micrometer and gauge blocks for calibration. Plastigage for bearing clearances was de rigueur. Piston ring end gaps, cylinder bore eccentricity, and setting ring gear clearances were done routinely. Accurate measurements were part of customer service. You screwed something up and you had to fix it at no cost, and the time cost you money on other jobs.

AlanJ
Reply to  Jim Gorman
April 13, 2024 7:29 pm

You didn’t dare to record something you couldn’t support. “Estimates” were verboten for anything official.

This long-winded diatribe is you getting your skivvies in a twist over the semantics of the word “estimate.” If you have ever taken an instrumental measurement in your life, you made an estimate of the quantity being measured, since your measurement device, and your own reading of the measurement device, are subject to uncertainty.

Tell us just what senior level physical science lab courses you have taken and passed.

Eesh, you’re going to strain the limits of my memory. Here is a wholly inexhaustive course list for upper and graduate level classes, all of these had a substantial lab component:

Undergraduate level:

  • Experimental Physics II
  • Quantitative Methods in Earth Science
  • Geochemistry
  • Organic Chemistry
  • Geophysics
  • Quantitative geomorphology

Graduate level:

  • Chemical Oceanography
  • Biogeochemistry
  • Computational methods in earth sciences
  • Introductory climate modeling
  • Paleoclimatology
  • Isotope geochronology
  • Ocean-Atmosphere System dynamics

I’ve left out seminars and field camp, summer research fellowships, undergraduate work as an isotope geochemistry lab assistant, and of course undergraduate and graduate level research activities, but that’s what I recall off the top of my head. I’m sure you can find something relevant in there. At the very least you can dispense with the patronizing tone.

Reply to  AlanJ
April 14, 2024 6:15 am

You should sue these schools to get your money back, they utterly failed.

Reply to  karlomonte
April 20, 2024 11:57 pm

Droll .. woops, troll

Reply to  AlanJ
April 14, 2024 8:02 am

Then you should have had many labs where you had to make measurements with all the requirements I listed in your lab book.

With your education you should be able to discuss issues from the NIST Engineering Statistics Handbook and the GUM with in-depth knowledge.

Let’s see, how about Experimental Physics II. Describe an experiment of consequence where you had to record the definition of the measurand, the process for making the measurement, device corrections, device uncertainty, etc.

I am most interested in your uncertainty budget and the calculations you made with it. You appear to be unfamiliar with NIST and its documents. Why?

AlanJ
Reply to  Jim Gorman
April 14, 2024 8:23 am

Here we see Jim unsteadily seeking firmer footing after accidentally claiming that he doesn’t believe measurement uncertainty exists (or at least, that there is no uncertainty in the measurements that he makes).

Pick a lane, my friend. I say that measurements are estimates of the quantity being measured because measurements carry uncertainty; if you agree that measurements carry uncertainty, we don’t have any disagreement on this point.

Reply to  AlanJ
April 14, 2024 9:03 am

Here we see Jim unsteadily seeking firmer footing after accidentally claiming that he doesn’t believe measurement uncertainty exists.

Here we see Alan the J gaslighting.

Reply to  AlanJ
April 14, 2024 9:50 am

Here we see Jim unsteadily seeking firmer footing after accidentally claiming that he doesn’t believe measurement uncertainty exists (or at least, that there is no uncertainty in the measurements that he makes).

Funny how you never ever quote what was said.

Here is what I quoted that you said:

by estimating quantities like the amount of runoff from melting glaciers and ice caps

Then you conflate my response into denying measurements have uncertainty. What a joke.

Maybe you have some evidence that runoff from glaciers and ice caps HAVE BEEN MEASURED. If you can’t, then you fail the thing.

I also see no response to what your lab books showed in school. Did you have no memorable experiments or work that required detailed record keeping, either for a lab grade or for a research project?

AlanJ
Reply to  Jim Gorman
April 14, 2024 10:50 am

Maybe you have some evidence that runoff from glaciers and ice caps HAVE BEEN MEASURED. If you can’t, then you fail the thing.

There is a multitude of information at your fingertips, easy to find via Google, and I think you know it, and are merely feigning ignorance as a defense tactic (the “if I can’t see the science then the science can’t see me” philosophy). The IPCC is a good place to start.

Of course, the whining about a lack of citations is a bit rich coming from the person who has made up whole-cloth a hypothesis of oceanic-ridge growth as the driver of modern GMSLR without a single iota of evidence to back it up. I don’t think evidence is something you’re genuinely concerned about, I think your only concern is being a contrarian.

Reply to  AlanJ
April 14, 2024 10:57 am

The IPCC is a good place to start.

HAHAHAHAHA

Reply to  AlanJ
April 14, 2024 11:25 am

Of course, the whining about a lack of citations is a bit rich coming from the person who has made up whole-cloth a hypothesis of oceanic-ridge growth as the driver of modern GMSLR without a single iota of evidence to back it up

Here is what I said.

Here is a report on a new scientific finding about ocean ridges.

https://www.whoi.edu/press-room/news-release/scientists-report-new-type-of-mid-ocean-ridge-in-remote-parts-of-the-earth/

These can displace water so they can be a component of sea level increase. They can also be a component of heat into the deep ocean.

Did I ever say “driver”? Do you consider “component” to be the driver?

You are a troll through and through. You don’t even have the courage to quote what someone has said. You just gaslight.

Tell you what, on this thread on WUWT, what percent of your posts have either a quote and/or a reference? Then what percent of mine do? Don’t fib, ’cause I’ll be checking.

I’ll bet you won’t respond.

BTW, I told you where I first learned measurements and the consequences of wrong results. Where did you learn to make AND use measurements, and what were the consequences?

I also asked about your lab book experience. With a postgrad degree you surely needed to do measurements. How did you handle uncertainty budgets and calculations?

AlanJ
Reply to  Jim Gorman
April 14, 2024 12:48 pm

The claim, originally made by Geoff, and now taken up by you, is that tectonic processes reshaping the topography of the ocean basins are sufficient to explain observed 20th century GMSLR. In case you had forgotten the position you are attempting to defend.

If you now wish to backpedal and claim you merely think slow tectonic processes offer some negligible contribution to 20th century GMSLR, then I brook no argument with the point, and I’m glad we found at least some trivial point on which there is little contention.

Reply to  AlanJ
April 14, 2024 1:16 pm

Show a quote of where I said tectonic processes are reshaping the topology. I said ridge formation can be a component of sea level rise.

From Merriam-Webster
Component
: a constituent part : INGREDIENT

You just can’t resist gaslighting can you?

🤡🤡 – 2 clowns

AlanJ
Reply to  Jim Gorman
April 14, 2024 6:58 pm

Show a quote of where I said tectonic processes are reshaping the topology.

So you don’t think that seafloor spreading modifies the topography of the ocean basins? Or do you actually not have a consistent set of beliefs, and are just saying whatever to contradict me?

Reply to  AlanJ
April 20, 2024 11:56 pm

Well said!

Bill

Reply to  Jim Gorman
April 20, 2024 11:55 pm

For all the reasons that you mention, an observation is still an estimate no matter how it is derived. A single number on its own also conveys no information about its uncertainty. Tim seems to run off the deep end over such issues.

All the best,

Dr Bill Johnston

MarkW
Reply to  AlanJ
April 14, 2024 9:20 am

The “runoff” figures for glaciers and ice caps are little better than estimates themselves. Beyond that, you are doing simple extrapolations of current estimates into the future in order to make a wild guess about what future sea level increases are going to be.

AlanJ
Reply to  MarkW
April 14, 2024 9:50 am

Well, that’s why I referred to them as estimates. Jim doesn’t even have estimates. He has conjecture about ultra slow spreading ridges. But to him that’s better than having to accept that maybe other scientists who spend their entire lives on this stuff know what they’re talking about better than he does.

Reply to  AlanJ
April 14, 2024 10:01 am

Appeal to Authority. You would fail a high school debate.

When you offer a factual cause without showing any references, you are implicitly stating that you are considered an EXPERT in that field. What a joke.

🤡 – 1 clown

AlanJ
Reply to  Jim Gorman
April 14, 2024 10:57 am

Oh, that isn’t what an appeal to authority is, Jim. I’m merely saying you’ve adopted an “anything but what the science says” philosophy, not because you have any basis for disputing the science, but because you’re a contrarian. I don’t think you actually care what is causing GMSLR, just as long as it isn’t what scientists say it is. Evidenced by your eagerness to accept any silly alternative hypothesis.

Reply to  AlanJ
April 14, 2024 11:44 am

Oh, that isn’t what an appeal to authority is, Jim

Appeal to Authority from:

https://www.grammarly.com/blog/appeal-to-authority-fallacy/

Another name for the appeal to false authority fallacy is an appeal to unqualified authority.

Appeal to anonymous authority

An appeal to anonymous authority is an appeal to authority that doesn’t attribute the claim to any specific person. Rather, the arguer attributes it to an unnamed individual or, more commonly, group of individuals. Here are a few examples:

  • Authors say you have to write every day if you want to become a good writer.
  • According to scientists, 5G is harmful.
  • They’re trying to ban plastic shopping bags.

As seen in the last example, an appeal to anonymous authority can be attributed to a group as vague as “they”.

AlanJ
Reply to  Jim Gorman
April 14, 2024 12:50 pm

I know what an appeal to authority is, Jim, and that is why I said that I was not making one. I did not say, “the science is right because some authority figure says it is,” I said, “Jim rejects what the science says because he doesn’t like it, not because he actually has any evidence that it’s wrong.” You’re eager to glom on to literally any foolish notion that runs contrary to what mainstream science says. If scientists said the sky was blue you’d suddenly insist it was actually a shade of rose gold.

Let me know if you aren’t able to figure out why this isn’t an appeal to authority; I can certainly try to simplify it even more for you if you’re continuing to struggle.

ducky2
Reply to  AlanJ
April 14, 2024 1:01 pm

I can certainly try to simplify it even more for you if you’re continuing to struggle.

Let me know if you need me to clarify this even further for you, I can break it down to terms a 5th grader could understand.

Then ask follow-up questions if you’re still struggling.

Let me know if you are confused about any other facets of the discussion, I’ll do my best to help you.

You make it so obvious. Do a better job at keeping your mask on.

AlanJ
Reply to  ducky2
April 14, 2024 7:02 pm

To be clear, I don’t genuinely believe the contrarian set here are actually as ignorant and confused as they let on, I think they are playing the part of fools, acting intentionally obtuse as a strange form of trolling. But my offer to answer their questions is genuine nonetheless.

Reply to  AlanJ
April 14, 2024 8:51 pm

More kookdance from the trendology clown show.

Reply to  Jim Gorman
April 14, 2024 10:59 am

Clown is correct, he fits the costume well.

AlanJ
Reply to  Chris Nisbet
April 12, 2024 11:57 am

This paper is describing observations of current sea level rise, and noting that they are in line with model projections. It is not presenting a linear extrapolation of the observed rise into the future. You completely undermine your position by citing this paper, I’m not sure if that was your intent or not.

Reply to  AlanJ
April 12, 2024 12:32 pm

Just wondering – you seem to have inferred that I’ve made some comment about _linear_ extrapolations. Why?

Reply to  AlanJ
April 12, 2024 1:50 pm

The paper does make “predictions” based on their “acceleration” claim from mal-adjusted satellite data.

You completely undermine your position by not reading the paper.

FAIL. !

AlanJ
Reply to  bnice2000
April 13, 2024 3:32 pm

Saying “if x happens” is not the same thing as saying “x is going to happen.” Let me know if you need me to clarify this even further for you, I can break it down to terms a 5th grader could understand.

Reply to  AlanJ
April 13, 2024 3:36 pm

Man you can dance with the best. What’s your favorite, Lindy Hop or Foxtrot?

You deflect like you are using a light saber.

AlanJ
Reply to  Jim Gorman
April 13, 2024 7:32 pm

There is no deflection on my side, I am just literate, a condition I often lament when reading these comment threads.

Reply to  bnice2000
April 15, 2024 7:14 am

More mythology of the “Well, since he uses units of degC/decade^2, he must be forecasting out 10 years.”

To paraphrase an old Steven Wright joke.

Cop: I pulled you over for going 60 miles an hour in a 30 miles per hour speed zone.
Steven: It’s ok. I won’t be out that long.

But above ground, trends are used to highlight unexpected data combinations, for further investigation. No one extrapolates them without the required scientific backup.

Reply to  AlanJ
April 12, 2024 12:41 pm

I doubt that grade schoolers are taught to avoid extrapolation. Maybe high school, maybe uni, but who knows if that sage advice is taught anymore? After all, we have climate models that go out to 2300.

Reply to  AlanJ
April 12, 2024 12:42 pm

‘It implies that the world has warmed between 0.5 and 1 degrees in the past 45 years according to both satellite and surface-based estimates.’

Yes, but what it doesn’t imply is why this has happened, whether it has ever done so in the past, and whether it will happen again in the future. In other words, the cause is unknown.

AlanJ
Reply to  Frank from NoVA
April 12, 2024 1:18 pm

I would say that is mostly accurate. You cannot infer the cause of the change merely by observing the change itself. That doesn’t mean the cause is unknown or unknowable.

Reply to  AlanJ
April 12, 2024 4:32 pm

“I would say that is mostly accurate.”

Your opinion is meaningless in any realm of science or rational thought.

Much of the “change” is because of data manipulation and fakery.

AlanJ
Reply to  bnice2000
April 12, 2024 4:52 pm

You seem to care an awful lot about my opinion for someone who deems it meaningless, bnice. You seem a bit obsessed with it, in fact.

Reply to  AlanJ
April 12, 2024 1:40 pm

And the UAH data shows that comes purely from El Nino events.

Do you have any evidence of human causation for El Ninos ??

Reply to  stevekj
April 12, 2024 8:38 am

Sorry, stevekj, we don’t need to wait another 6,000 years based on your extrapolation:

“The U.N. chief issued a stark warning on climate change this week: ‘The era of global warming has ended; the era of global boiling has arrived,’ António Guterres declared in a news briefing”
https://www.washingtonpost.com/climate-environment/2023/07/29/un-what-is-global-boiling/
(note date of July 29, 2023, for this pearl-of-wisdom, from “we’re on the brink” Guterres)

/sarc off

Reply to  AlanJ
April 12, 2024 7:56 am

Alan J,

P-l-e-a-s-e . . . the graph you presented would imply there is data measurement accuracy (since no error bars are given graphically or stated) to at least ± 0.01 °C based on the smallest step changes noted.

Really? Using satellite-based instrumentation to derive a “surface” temperature to that precision, let alone accuracy???

That is simply not believable.

Also, what’s up with the notation for the green line given in the upper left face of the graph: “uah6/offfset:0.6”? If that means the green line has been offset by 0.6 °C to get a close overlap with the red line (gistemp data), then your statement “They’re consistent across the length of the UAH record” is clearly false.

AlanJ
Reply to  ToldYouSo
April 12, 2024 8:09 am

P-l-e-a-s-e . . . the graph you presented would imply there is data measurement accuracy (since no error bars are given graphically or stated) to at least ± 0.01 °C based on the smallest step changes noted.

It doesn’t imply that, because this is not a graph of individual measurements, it is a global average.

Also, what’s up with the notation for the green line given in the upper left face of the graph: “uah6/offfset:0.6”? If that means the green line has been offset by 0.6 °C to get a close overlap with the red line (gistemp data), then your statement “They’re consistent across the length of the UAH record” is clearly false.

The series are on different baselines. UAH uses the 1991-2020 average as the baseline period, GISS uses the 1951-1980 period. You have to offset one relative to the other so that the zeros line up (translate along the y-axis). These are graphs of anomalies, so absolute values are not preserved, only the amount of change relative to each other (the trend).
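A minimal sketch of that re-baselining, with made-up numbers rather than actual UAH or GISTEMP data: subtract each series’ own mean over a common reference period, which removes the vertical offset without changing either trend.

import numpy as np

# Toy annual anomalies (illustrative values only):
# series_a is on a 1951-1980 baseline, series_b on 1991-2020.
years = np.arange(1979, 2024)
rng = np.random.default_rng(0)
trend = 0.018 * (years - years[0])                          # ~0.18 C/decade, invented
series_a = trend + 0.35 + rng.normal(0, 0.1, years.size)    # offset from warmer baseline
series_b = trend - 0.25 + rng.normal(0, 0.1, years.size)    # offset from cooler baseline

# Re-baseline both to a common reference period (here 1991-2020).
ref = (years >= 1991) & (years <= 2020)
series_a_rb = series_a - series_a[ref].mean()
series_b_rb = series_b - series_b[ref].mean()

# The vertical offset is gone, but the fitted trends are unchanged.
print(np.polyfit(years, series_a, 1)[0], np.polyfit(years, series_a_rb, 1)[0])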

Reply to  AlanJ
April 12, 2024 8:26 am

“It doesn’t imply that, because this is not a graph of individual measurements, it is a global average.”

So, are you now asserting that by doing a global average of many individual measurements, the precision (or accuracy) of the data is IMPROVED?

With a mathematically-derived “average”, sure, you could present the value to, say, six decimal places . . . go for it! It is bound to impress some people.

“These are graphs of anomalies, so absolute values are not preserved, only the amount of change relative to each other (the trend).”

Well, in this case, I duly note that even with the 0.6 °C offset, the trend of the green line is visibly different from that of the red line . . . the green line is generally above the red line prior to year 2000, but is generally below the red line after year 2000.

AlanJ
Reply to  ToldYouSo
April 12, 2024 8:31 am

So, are you now asserting that by doing a global average of many individual measurements, the precision (or accuracy) of the data is IMPROVED?

I am not, I’m saying that the uncertainty in the estimate of the mean is not equal to the uncertainty in the individual measurements. This is like… stats 101.

Well, in this case, I duly note that even with the 0.6 °C offset, the trend of the green line is visibly different from that of the red line . . . the green line is generally above the red line prior to year 2000, but is generally below the red line after year 2000.

That is correct – for the period 1979-2023, UAH shows a slightly higher trend than GISS. For the period 2009-present, GISS shows a slightly higher trend. Both series are quite consistent and show the same overall pattern of warming, so unless you want to claim that UAH is bad too, you can’t claim that the surface record is completely out of whack.
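For what it’s worth, a minimal sketch (synthetic numbers, not station data) of the stats-101 point invoked above: for independent, purely random errors the standard error of a mean of N readings shrinks like 1/√N, while a constant systematic bias is not reduced by averaging at all, which is the part the replies dispute.

import numpy as np

rng = np.random.default_rng(1)
true_value = 15.0          # made-up "true" temperature, deg C
sigma_single = 0.5         # assumed purely random error of one reading, deg C

for n in (1, 100, 10_000):
    # 1,000 repeated experiments, each averaging n independent readings
    means = rng.normal(true_value, sigma_single, size=(1_000, n)).mean(axis=1)
    print(n, round(means.std(), 4))           # empirically close to sigma_single / n**0.5

# A constant bias (systematic error) is NOT reduced by averaging:
biased_mean = rng.normal(true_value + 0.3, sigma_single, size=10_000).mean()
print(round(biased_mean - true_value, 3))     # still ~0.3 deg C away from true_value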

Reply to  AlanJ
April 12, 2024 11:49 am

I am not, I’m saying that the uncertainty in the estimate of the mean is not equal to the uncertainty in the individual measurements. This is like… stats 101.

Bullshit — Stats 101 does not treat metrology subjects such as measurement uncertainty and propagation.

Reply to  karlomonte
April 16, 2024 5:51 am

Apparently this flew over your comb over. “Stats 101” – “basic”

Reply to  bigoilbob
April 16, 2024 6:22 am

Gaslighting does not suit you, blob.

I have a classic statistics text from the 1960s; nowhere does it say anything about metrology, measurement uncertainty, or propagation.

Reply to  karlomonte
April 16, 2024 7:11 am

AGAIN, to paraphrase: I don’t think that “Stats 101” means what you think it means.

Better to find Nick’s comment about how the basic theory has been around and proved for over a century.

Reply to  bigoilbob
April 16, 2024 7:17 am

Nitpick Nick the Gaslighter and Bill the Ranter don’t understand basic metrology any better than you (don’t), blob.

You may continue to your next word salad.

Reply to  karlomonte
April 21, 2024 12:13 am

Says bullshit artist Karlo.

Can’t do stats, eats his homework, can’t explain how a bit of string 5 m long measured to the nearest 1/2 mm is GUMed to be out by 11 mm … on the joker rolls.

You a disgrace Karlo.

Reply to  Bill Johnston
April 21, 2024 6:26 am

WTF is this “homework” you keep ranting about?

You a disgrace Karlo.

Coming from an obsessed kook, clown, fool like yourself, this is a compliment.

You can’t tolerate anyone who exposes your ignorance and incompetence. I’m only the latest in the list of your victims.

Reply to  karlomonte
April 16, 2024 11:10 am

I’ve perused several at the university bookstore and some web learning courses.

They all assumed that the data was perfect because they had nothing about uncertain numbers. Sampling error to be sure, but that is why statisticians assume measurement error only occurs in a sampling environment.

Reply to  Jim Gorman
April 16, 2024 11:45 am

The one I have is Snedecor & Cochran, a classic that first came out in the 1940s, IIRC, way before Excel spreadsheets. I learned a lot from it, especially regression. But there is nothing even remotely close to uncertainty analysis inside.

Reply to  karlomonte
April 21, 2024 12:25 am

Strewth, we have something in common; I have a copy too (but I hope I don’t catch Karlo’s diseases) …

You clearly have not read it, especially Chapter 1 – Sampling of attributes, starts on Page 3.

I love you, I love you … but only if you have Steele and Torrie also?

Reply to  Bill Johnston
April 21, 2024 6:26 am

GFY, troll.

Reply to  Bill Johnston
April 21, 2024 7:49 am

The book mentions “variation” amongst the data. There is much discussion of sampling, but no mention whatsoever about each data point having uncertainty.

No mention whatsoever about measurements having uncertainty in each measurement nor how to treat specifying that uncertainty.

You need to approach the problem of uncertainty from the perspective of an engineer or scientist attempting to portray to others how certain the measurements were that gave a result.

Statistics can be used to judge the uncertainty in an average but statistics is not used to determine the measurement. Statistics can be used to judge the uncertainty of a mean when calculated from an average.

Statistics can’t determine the dispersion of measurements around the measurand. Only the measurements themselves determine that. Statistics can illuminate the size of the interval that contains a percentage of the dispersed measurements but can’t create the measurements themselves.

An example: I can’t measure the speed of light to the nearest meter or second, do it 10,000 times, divide the standard deviation by √10,000, and then portray that I measured the distance and time to the nearest mm or ms.

Reply to  Jim Gorman
April 21, 2024 2:52 pm

An obvious circular argument. The GUM uses samples to determine what used to be CI’s (Student-t times SE).

Now you are arguing that you can’t measure a sample??? Are you kidding, smoking something or just making this up.

A piece of string 5m long can be measured to the nearest +/- 2mm by an experienced carpenter/brickie, but you can’t. You are unable to provide an accuracy estimate for your observations, really! A bucket of water has an uncertainty of what, three buckets?

What about your 1,000 lengths of lumber? You couldn’t measure them to the nearest +/- 0.5 mm, but you can measure the speed of light?

Karlo the technical manager can’t tell using a handkerchief-test if an anemometer should be turning. Ditto for a wind-vane.

This whole thing is getting GUMed-up by contradictions, sillier and sillier.

Cheers,

Bill

Reply to  Bill Johnston
April 21, 2024 3:28 pm

/plonk/

Reply to  karlomonte
April 21, 2024 4:23 pm

Pfffit! He wanted homework and got some. Now it’s all a conspiracy to make measurements confusing.

I should have dug into the UCM earlier. NIST’s UCM is going to be hard to argue with. It is a perfect calculator for reproducible uncertainties. My guess is that for repeatable uncertainty NOAA’s ASOS and CRN specs would be factors in a combined uncertainty.
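For readers who haven’t met it, the NIST Uncertainty Machine propagates uncertainty through a measurement model either with the GUM’s linearised formula or by Monte Carlo simulation (GUM Supplement 1). A minimal sketch of the Monte Carlo idea, using a hypothetical daily-mean model and assumed uncertainties, not anything from NOAA or NIST:

import numpy as np

rng = np.random.default_rng(42)
N = 200_000   # number of Monte Carlo draws

# Hypothetical measurement model: a daily mean temperature
#   t_mean = (t_max + t_min) / 2
# with assumed (made-up) standard uncertainties on each reading.
t_max = rng.normal(25.0, 0.3, N)   # max reading 25.0 C, u = 0.3 C (assumed)
t_min = rng.normal(12.0, 0.3, N)   # min reading 12.0 C, u = 0.3 C (assumed)

t_mean = (t_max + t_min) / 2.0

print("u(t_mean) ~", round(t_mean.std(), 3))          # ~0.21 C for these inputs
print("95% interval:", np.percentile(t_mean, [2.5, 97.5]))

# For this linear model the GUM formula agrees:
# u = ((0.3/2)**2 + (0.3/2)**2) ** 0.5 ~ 0.212 C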

Reply to  Jim Gorman
April 21, 2024 5:29 pm

The WMO guides have similar numbers, the last time I looked.

Apparently Johnston has this goofy idea that if “gear” (i.e. instrumentation — RTD thermometers) is “NATA-certified” then the “error” drops to the “third or fourth decimal” leaving only resolution. He doesn’t understand calibration of real instruments at all.

Reply to  karlomonte
April 22, 2024 5:39 am

The last I knew, RTDs had their sensing elements calibrated to 3 or 4 decimal places, but not the entire assembly they are used in. And drift in the field is never taken into account: crud on the sensor itself, crud on a fan, spider webs in louvers, screen deterioration, screen type, terminal connection deterioration.

NOAA knows that both ASOS and CRN stations have electronic thermometers. Yet they recognize field measurements with uncertainties in the tenths digit and resolution to 1 decimal digit.

old cocky
Reply to  Jim Gorman
April 16, 2024 2:50 pm

They all assumed that the data was perfect because they had nothing about uncertain numbers.

Like it or not, most statistical analysis involves counts of “things”, rather than measurements. Metrology is a sub-field.

Sampling error to be sure

Sampling error and proper experimental design are more universal concerns.

Reply to  Jim Gorman
April 21, 2024 12:16 am

Bullshitter Jim,

No text ever claims “data are perfect” and the reasoning inside Jim’s head that “because they had nothing about uncertain numbers” is all about Jim and not the data.

Reply to  Bill Johnston
April 21, 2024 6:59 am

They only assume that sampling error occurs. They never mention that the numbers themselves are uncertain.

Show a page from a statistical textbook where they discuss how to deal with a measurement of 64.25 inches with an uncertainty of ±0.25 inches. The only mention you might find is getting “better” measurements as a solution.

Reply to  karlomonte
April 21, 2024 12:07 am

I have two, and they still use the same equations as the GUM.

b.

Reply to  Bill Johnston
April 21, 2024 6:28 am

Well pin a big shiny star on your cone head, troll.

Reply to  karlomonte
April 21, 2024 12:05 am

But all the equations used by the GUM come from Stats 1.01, dill, or can’t you read!

Reply to  Bill Johnston
April 21, 2024 6:29 am

Liar, no they do not.

Statistics does not teach the definition of uncertainty, nor the propagation thereof.

You may continue your kookdance.

sherro01
Reply to  AlanJ
April 12, 2024 3:23 pm

AlanJ,
My stats books do not allow me to combine an average sea surface water temperature with an average (above) land surface temperature, because the two temperatures arise in materials of different physical and chemical properties. The main similarity is the thermometer, which is not enough to claim statistical similarity. (Likewise, you cannot add into your averages the temperatures taken with thermometers under the human tongue. They are, again, not statistically similar to SST or LST).
…..
More complications. Land and sea have different thermal inertia. A sudden step change in whatever is changing their temperatures will show different rates of change. One set might lead the other in daily observations, so there will be an error in their simple daily average. So, don’t do it!
Geoff S

Reply to  sherro01
April 12, 2024 4:53 pm

“A sudden step change in whatever is changing their temperatures will show different rates of change.”

Which is why they absolutely HAVE to use the El Nino spikes in the atmospheric data to calculate spurious linear trends in data which is most certainly NOT linear.

AlanJ
Reply to  sherro01
April 12, 2024 4:56 pm

I measured the surface temperature of the chicken and broccoli on my dinner plate and derived an average of the two. A few minutes later I observed that the average was much lower, and, despite the two substances having very different chemical and physical properties, I surmised that my dinner had cooled.

I do wonder which stats 101 textbook you reference that details the determination of global mean surface temperatures, if you wouldn’t mind citing it for us all.

Reply to  AlanJ
April 12, 2024 10:11 pm

But, without information on the thermal conductivity and emissivity, you can’t be sure that they cooled equally. Have you ever had the unpleasant surprise of discovering that cheese on a pizza will burn the roof of your mouth even though the crust is quite palatable?

AlanJ
Reply to  Clyde Spencer
April 13, 2024 3:38 pm

If I needed to know if they cooled equally, I’d do a lot more investigating. But the first thing I want to know is whether the temperature of my dinner is changing, and it seems we both agree that the average temperature of the things on my plate is adequate for this purpose, so there is no more debate to be had on this subject I’m afraid.

Reply to  Clyde Spencer
April 21, 2024 12:28 am

The new term is called guessworkisivity. It is related to the GUM.

Reply to  Bill Johnston
April 21, 2024 6:30 am

The Liar-Troll speaks, the world needs to listen up!

Reply to  sherro01
April 16, 2024 5:54 am

If Alan J did this, then I agree with you. I have other **** to do today, so, did he?

Reply to  AlanJ
April 12, 2024 4:30 pm

In what level of stats courses that you have taken was the data presented as 1 ±0.5, 2 ±0.5, …, 10 ±0.5?

I have never come across one that begins to treat the population or sampled data with uncertainty. The only uncertainty comes from sampling. What a joke.

MarkW
Reply to  AlanJ
April 12, 2024 8:50 am

That rule only applies when you use the same instrument to measure the same thing, multiple times.
Using many instruments to measure many different patches of air, does not increase the accuracy of individual measurements.

AlanJ
Reply to  MarkW
April 12, 2024 9:03 am

Using many instruments to measure many different patches of air, does not increase the accuracy of individual measurements.

Did I say that?

MarkW
Reply to  AlanJ
April 12, 2024 3:14 pm

Yes

AlanJ
Reply to  MarkW
April 12, 2024 4:53 pm

You’ll be good enough to quote me then.

Reply to  AlanJ
April 12, 2024 4:43 pm

Did I say that?

You didn’t have to. If you don’t admit that it is true, then you implicitly approve of the way it is done. There is no waffling when it comes to physical measurements. You can’t pretend that advocating for the accuracy and uncertainty in the GAT is fine and then turn around and say “did I say that”.

Quit dancing and spit out what you think about accuracy and uncertainty and how it is calculated in the GAT.

Mr.
Reply to  MarkW
April 12, 2024 9:06 am

And that’s before the torturing of the “data” even begins.

Reply to  MarkW
April 12, 2024 12:47 pm

Doesn’t it also require that the distribution of errors be Gaussian? Are they?

Reply to  MarkW
April 12, 2024 10:13 pm

Nor does it increase the justifiable precision of the average of multiple patches.

Reply to  AlanJ
April 12, 2024 10:06 pm

Therefore, you are comparing apples to oranges.

Richard Greene
Reply to  AlanJ
April 12, 2024 9:00 am

The UAH trend since 1979 is
+0.15 degrees C. per decade warming

No surface statistic is lower than or equal to that.

 For one example: The Berkeley Earth team reports that since 1980, the global average temperature is increasing at the rate of +0.19 degrees Celsius per decade.

You are lying again.

Reply to  Richard Greene
April 12, 2024 3:57 pm

The UAH trend since 1979 is

+0.15 degrees C. per decade warming

No surface statistic is lower than or equal to that.

 For one example: The Berkeley Earth team reports that since 1980, the global average temperature is increasing at the rate of +0.19 degrees Celsius per decade.

À la Monckton, you are completely ignoring uncertainty estimates.

Taking these into account, and allowing for autocorrelation that monthly temperature data in particular suffer from, there is no statistical difference between the warming rate observed in UAH or the surface data sets since 1979.

And you are ignoring that the ‘best estimate’ trend of UAH has, over the past 15-years, been the fastest-warming of any global temperature data set, surface or satellite.

You are lying to yourself.
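For anyone wondering what ‘allowing for autocorrelation’ means in practice, one common rule of thumb is to fit an ordinary least-squares trend and then widen its standard error using the lag-1 autocorrelation of the residuals. A minimal sketch on synthetic data; the trend and noise values are invented, not taken from UAH or any surface set.

import numpy as np

rng = np.random.default_rng(7)

# Synthetic "monthly anomaly" series: a linear trend plus AR(1) noise (all invented).
n = 15 * 12                                # 15 years of months
t = np.arange(n) / 120.0                   # time in decades
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.1)
y = 0.2 * t + noise                        # 0.2 C/decade underlying trend

# Ordinary least-squares trend and its naive standard error
X = np.column_stack([np.ones(n), t])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (n - 2)
cov = s2 * np.linalg.inv(X.T @ X)
se_naive = cov[1, 1] ** 0.5

# Lag-1 autocorrelation of the residuals and the usual effective-sample-size inflation
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
se_adjusted = se_naive * ((1 + r1) / (1 - r1)) ** 0.5

print(f"trend {beta[1]:.2f} C/decade, naive 2-sigma {2*se_naive:.2f}, adjusted 2-sigma {2*se_adjusted:.2f}")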

Reply to  TheFinalNail
April 12, 2024 4:55 pm

Berkeley uses all the most JUNK data they can find, with absolutely zero idea of its possible accuracy or how tainted it is by local surrounds…

… then tortures and twist it to create whatever they want to create.

It is mostly just meaningless garbage… which is perfect for a low-level mind like yours.

UAH shows warming only at El Nino events

Absolutely no evidence of human causation.

Reply to  TheFinalNail
April 12, 2024 10:04 pm

Again the use of two very strong El Ninos that affect the atmosphere more than the surface.

You know that.

It is you who has been caught DELIBERATELY cherry-picking the period.

You are LYING to yourself, and everybody except you, knows it. !

Reply to  AlanJ
April 12, 2024 1:39 pm

ROFLMA, you can clearly see GISS is level at the start where you offset it…

… and well above at the end… except for the EL Nino event

Lots of green above at the left hand side.. and below on the right hand side.

… FAIL. !

Reply to  Steve Case
April 12, 2024 7:45 am

Because UAH has been warming faster than the fastest-warming surface temperature data producer over that nice, round period, yet UAH is held up here as ‘the gold standard’.

If UAH is the best and it’s warming faster than the warmest surface temperature data producer for the past 15-years, then how corrupted do you think the surface sets are?

Reply to  TheFinalNail
April 12, 2024 10:13 am

Stop with the “warming” conflation nonsense. The anomalies derived from UAH are increasing faster. All this excitement over a statistical construct that warms nothing.

Reply to  doonman
April 12, 2024 4:06 pm

Stop with the “warming” conflation nonsense. The anomalies derived from UAH are increasing faster. 

I don’t know whether you know this or not doonman; but if the trend in anomalies derived from one temperature data set is increasing faster than the trends derived from all the others, then that means that data set is warming faster than all the others over the past 15-years.

It’s as simple as that.

UAH is the fastest-warming global temperature data set we have over the past 15-years, and its long-term trend is in good agreement with all the other global temperature data sets, including surface, since it began in 1978.

Reply to  TheFinalNail
April 12, 2024 5:02 pm

You are still using El Nino events and the fact that the atmosphere responds much more to ocean energy release than the surface does.

(do you even comprehend the concept of mass and thermal inertia ??)

You are relying TOTALLY on two strong El Nino events (2015 and 2023) and 1 mild one (2020).

That is all you have.

And you know you are totally incapable of showing any human cause for those El Nino events..

… so I assume you are well aware that the warming is all totally natural.

Reply to  TheFinalNail
April 12, 2024 5:33 pm

Both GISS and UAH suffer from UHI. UAH doesn’t have a good way to remove it directly. GISS has no excuse. The USCRN should be the one you look at for surface temps.

Reply to  Jim Gorman
April 12, 2024 7:01 pm

But its ok to “analyze” the data by inserting fake data.

This is what these people are proclaiming.

Reply to  TheFinalNail
April 12, 2024 1:55 pm

Choose a period where there have been 2 major El Ninos, when it is known that El Ninos affect the atmosphere far more than the land surface.

Then pretend there is a linear trend through them.

It is not remotely science…. or anything but blatant propaganda.

You are fooling no-one but yourself.

Reply to  bnice2000
April 12, 2024 4:09 pm

The long-term trend in UAH is statistically the same as that of GISS or HadCRUT or any of the others.

All show statistically significant warming.

That’s your problem.

Reply to  TheFinalNail
April 12, 2024 4:42 pm

UAH shows warming only at El Nino events.

In both ocean and land data.

Are you saying that GISS land shows essentially no warming from 1980-1997

… and essentially no warming from 2001 – 2015

… then essentially no warming from 2017 – mid 2023 ?

Show us those graphs… or prove yourself either very ignorant or a LIAR.

Do you have any evidence of human causation for the El Nino step changes??

Reply to  bnice2000
April 12, 2024 5:19 pm

No, the trend in UAH, since Dec 1978, shows statistically significant global warming; same as GISS; same as HadCRUT; same as RSS; same as JMA; same as NOAA….

All show statistically significant warming and all within the same uncertainty intervals, all over the same period.

You seem to be ideologically opposed to reality, which must be interesting.

Reply to  TheFinalNail
April 12, 2024 10:06 pm

WRONG .. UAH shows warming only at El Nino events

It has essentially zero trend from 1980-1997

… and essentially no warming from 2001 – 2015

… then essentially no warming from 2017 – mid 2023 ?

It does not remotely match GISS

Your grasp on reality is basically non-existent. !

Reply to  TheFinalNail
April 12, 2024 7:31 am

TFN,

From your comment, it appears you are not aware that the USHCN data (the main subject of the above WUWT article, once one dismisses the idiotic statements issued by António Guterres) is limited to only land-based stations covering only the contiguous United States.

Therefore, a USHCN dataset is not at all comparable to either a UAH or RSS dataset, the latter two of which represent average global coverage that includes mostly ocean water surface temperatures (SST).

strativarius
Reply to  ToldYouSo
April 12, 2024 8:27 am

Bosh!

Richard Greene
Reply to  ToldYouSo
April 12, 2024 8:51 am

UAH reflects global land warming over 50% faster than oceans

USCRN shows the US land warming much faster than the average land surface per UAH

+0.34 C per decade USCRN
+0.20 C per decade UAH global land only

Reply to  Richard Greene
April 12, 2024 9:18 am

There is dry land and a relatively minor amount of ice-covered land in the continental US that ranges from 25.8 N to 49.4 N in latitude . . . and then there is dry land and significant ice-covered land around the globe that ranges from 90.0 S to 83.6 N latitude.

Therefore, significant differences in the average land temperatures and the rates of warming between these two disparate regions are to be expected.

Reply to  Richard Greene
April 12, 2024 2:07 pm

You can’t compare an averaged global figure with a smaller area figure with a much greater range..

That is mathematical nonsense.

There is no significant difference.

[chart: USCRN-UAH-compare]
Reply to  bnice2000
April 12, 2024 3:23 pm

Rushed a bit this morning, calculated UAH trend over full period from 1979…

Here is the corrected graph with all trends calculated over 2005 -> period.

[chart: trends-uscrn-etc]
Reply to  ToldYouSo
April 12, 2024 4:26 pm

….it appears you are not aware that the USHCN data (the main subject of the above WUWT article, once one dismisses the idiotic statements issued by António Guterres) is limited to only land-based stations covering only the contiguous United States.

Then let’s look at the land only version of UAH contiguous USA (lower 48-states).

Statistically significant warming. No trend line required.

[chart: UAH-USA48]
Reply to  TheFinalNail
April 12, 2024 4:46 pm

WRONG, Warming only at El Nino events

The 2015/16 El Nino bulge and related step cover all the warming..

Explain why there was slight COOLING from 2005-2015

Then present evidence of human causation for the El Nino events that the trend relies on completely.

[chart: 2005-2015-USA]
Reply to  bnice2000
April 12, 2024 5:26 pm

Your chart stops more than 8 years ago.

Sad.

Reply to  TheFinalNail
April 12, 2024 10:17 pm

Thanks for admitting that you have to use the El Nino to get a warming trend.

You still avoided telling us how it was COOLING until the 2015 El Nino.

You still avoided showing any evidence of human causation.

You are still an empty-minded puppet !

Well done. !!

—-

btw, since the 2015/16 El Nino, it COOLED before the 2023 El Nino started

El Ninos provide the ONLY warming.

Do you have any evidence for human causation of these El Ninos ??

[chart: USCRN-etc-2017-2023]
Reply to  TheFinalNail
April 12, 2024 7:43 pm

“Statistically significant warming. No trend line required.”

A truly objective scientist would say that the graph you presented is meaningless without an indication of the accuracy of the individual data points (measurements) connected by the continuous line and without an indication of what instrumentation “drift” may have occurred over the given time period (x-axis span).

IOW, discernment of what one presents IS required.

Reply to  ToldYouSo
April 12, 2024 10:19 pm

It is also meaningless because the trend is in no way linear except between El Nino events.

Any trend relies totally on the El Nino events.

Richard Greene
Reply to  TheFinalNail
April 12, 2024 8:44 am

You have cherry picked 15 years of UAH data when the database contains 44 years of data.

The full 44 years since 1979 show a much smaller warming rate than surface averages since 1982.

UAH
+0.15 degrees C. per decade
since 1979

UAH Global Temperature Update for March, 2024: +0.95 deg. C « Roy Spencer, PhD (drroyspencer.com)

Surface (one US govt. source)
+0.36 degrees C. per decade
since 1982

Climate Change: Global Temperature | NOAA Climate.gov

You have data mined to confuse the issue

I consider that to be lying

Shame on you

Reply to  Richard Greene
April 12, 2024 4:41 pm

You have cherry picked 15 years of UAH data when the database contains 44 years of data.

I have already pointed out that the warming trends in UAH and all the surface data sets over their whole period, are indistinguishable, statistically. I literally said that in the first post. Did you miss that?

Do you agree with that?

I just thought I’d emphasise this point by highlighting the fact that globally, UAH is the warmest-running of all the data sets over the past 15-years – despite it being the darling of WUWT.

That is just a fact.

So to see AW complain about the trends derived from the surface data sets, whilst at the same time promoting a satellite data set that shows an even faster warming trend than the ones he’s trying to diss, just adds an element of absurdity that dulls the mind.

I just think that’s funny; it also highlights the idiocy and pointlessness that underlies this site.

Reply to  TheFinalNail
April 12, 2024 5:04 pm

Stop whining.

Reply to  karlomonte
April 12, 2024 5:29 pm

Stop whining.

Not whining mate; just having a laugh.

You guys are unconsciously funny; which is the best type of funny.

Reply to  TheFinalNail
April 13, 2024 7:44 am

I’m not yer “mate”, dood.

old cocky
Reply to  karlomonte
April 13, 2024 10:23 pm

There a lot of subtleties in the use of the word “mate” here in Oz.
I don’t know if that applies in the UK as well.

Reply to  old cocky
April 14, 2024 6:16 am

Nail certainly wasn’t being friendly.

old cocky
Reply to  karlomonte
April 14, 2024 1:43 pm

Yeah, but you could have said ‘I’m not your “mate”, maaaate’

Reply to  old cocky
April 14, 2024 3:31 pm

Ah, subtle.

Reply to  TheFinalNail
April 12, 2024 5:06 pm

Again, since learning and basic scientific comprehension is not part of “you”

You are relying totally on 2 major El Ninos, and one minor one.

The atmosphere, because it has a much smaller thermal inertia, responds far more to ocean releases of energy.

Your comment shows just how little you comprehend about anything to do with heat and energy.

Now, do you have any evidence for human causation of those El Nino events..

.. or are you saying that they are TOTALLY NATURAL !

Reply to  bnice2000
April 12, 2024 5:33 pm

Since 1979…

What has changed to cause global warming if only ‘cyclical’ influences are in play?

Reply to  TheFinalNail
April 12, 2024 10:23 pm

OMG… still the total ignorance or deliberate DENIAL of the strong El Nino releases we have had…

…. despite the fact that they stick out like dog’s ***** on the UAH data.

Who pays you to promulgate the rabid nonsense you come out with ??

Why support the AGW-scam with mathematical idiocy…

…. what do you think is in it for you.??

Now, do you have any evidence for human causation of those El Nino events..

Nick Stokes
Reply to  Richard Greene
April 14, 2024 4:43 pm

Surface (one US govt. source)
+0.36 degrees C. per decade
since 1982″

Elementary blunder. It says +0.36 degrees F per decade. 0.2 degrees C.
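For anyone checking the conversion: a trend is a temperature difference, so only the 5/9 scale factor applies and the 32-degree offset drops out: 0.36 °F per decade × 5/9 = 0.20 °C per decade.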

Reply to  Nick Stokes
April 14, 2024 5:00 pm

Nick,

Assuming that is based on monthly anomalies. Think about what you are trying to pawn off.

+0.36 / (10 years)(12 months) = 0.003 degrees/month. That means you have calculated monthly temperatures to something like 20.123 degrees. ROFL- Calculator jockey!

Nick Stokes
Reply to  Jim Gorman
April 14, 2024 6:17 pm

I simply tried to point out a simple error.

You have no idea how to calculate uncertainty of trend. But OK, go take it up with UAH, which was the figure quoted.

Reply to  Nick Stokes
April 14, 2024 6:43 pm

I don’t know enough about UAH to make an informed judgement with exact data. I do know that it is not unreasonable that satellites have a limited resolution of say 3W/m².

300K -> 459.27 W/m²
462.27 W/m² -> 300.48K

This wouldn’t allow for temperature at the resolution quoted.
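A minimal sketch of the Stefan-Boltzmann arithmetic behind those two numbers; the 3 W/m² resolution figure is an assumption in the comment above, not a documented instrument specification.

SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_flux(temp_k):
    # Blackbody flux for a temperature in kelvin
    return SIGMA * temp_k ** 4

def temperature(flux):
    # Invert Stefan-Boltzmann: temperature for a flux in W/m^2
    return (flux / SIGMA) ** 0.25

print(emitted_flux(300.0))                     # ~459.3 W/m^2
print(temperature(emitted_flux(300.0) + 3.0))  # ~300.5 K, i.e. roughly 0.5 K per 3 W/m^2 near 300 K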

Reply to  Nick Stokes
April 14, 2024 8:53 pm

No, you simply tried to run another one of your patented nitpicks while ignoring the real issues.

Milo
Reply to  TheFinalNail
April 12, 2024 9:52 am

Because satellites are watching, ground station “data” can only be fudged so much after 1979. That’s why cooling the past before then is so important to the scam.

SteveZ56
Reply to  Milo
April 12, 2024 12:17 pm

What do the satellites actually measure? Infrared radiation to space, up to the altitude of the satellite?

Using the Beer-Lambert Law and the IR absorption spectra of water vapor and CO2 shows that CO2 does not absorb much incoming sunlight, but (depending on ground temperature and humidity) the differences in absorption of outgoing IR due to increased CO2 concentration occur within 12 meters of the ground, and in some cases within 4 meters of the ground.

At higher altitudes, increased CO2 concentrations actually result in slight decreases in absorption of upward IR radiation from the ground.

Whatever is measured by the satellite has little relation to ground temperatures, or the air temperature near the ground.

This means that the satellite data are not indicative of air temperature changes near the ground, so that fudging missing data from decommissioned weather stations could generate false trends, which would not be picked up by satellites.

Reply to  SteveZ56
April 13, 2024 7:43 am

The UAH (NOAA) satellite data for the lower troposphere are a convolution of the microwave detector response, which peaks at ~5km, with the decreasing air temperature over the 0-10km region.

Reply to  Milo
April 12, 2024 4:46 pm

Because satellites are watching, ground station “data” can only be fudged so much after 1979. That’s why cooling the past before then is so important to the scam.

So why are satellite data sets, including the beloved UAH, reporting faster global warming than surface data sets over the past 15-years if the surface data sets are all involved in a big ‘scam’ to promote warming?

Surely the surface sets would be warming faster.

Did you think this through? You didn’t, did you….?

Reply to  TheFinalNail
April 12, 2024 5:08 pm

Thermal inertia, atmosphere vs land….

Go back to junior high and learn something about it.

You are slimily cherry-picking a period that specifically relies totally on two major El Nino events, and one small one, that you know are totally natural.

Only making a fool of yourself pretending otherwise.

Reply to  bnice2000
April 12, 2024 5:43 pm

Since 1979 UAH shows statistically significant global warming. Same as all the rest of them.

No natural cycle that we know of explains that.

Reply to  TheFinalNail
April 12, 2024 10:24 pm

El Nino events only.

Even someone as deliberately thick as you are, should know that by now.

Now, do you have any evidence for human causation of those El Nino events..

Reply to  bnice2000
April 13, 2024 4:48 pm

I thought they replaced science with lack-of-gender studies at junior high!

Reply to  TheFinalNail
April 12, 2024 5:29 pm

Ironic, isn’t it.

The fact that you cherry-picked the period with two very strong El Ninos…

… shows that you are well aware that those El Ninos provide the only warming.

A massive FAIL on your behalf. !!

DD More
Reply to  Milo
April 12, 2024 9:18 pm

When will they explain why they have to keep Adjusting the Adjustments, year after year. Can’t these CGW ID-10T guys get it right the 1st time?

Page 8 of 48 http://www.climate4you.com/Text/Climate4you_May_2017.pdf – With Chart of the constant changes. 
Diagram showing the adjustment made since May 2008 by the Goddard Institute for Space Studies (GISS), USA, in anomaly values for the months January 1910 and January 2000.
Note: The administrative upsurge of the temperature increase from January 1915 to January 2000 has grown from 0.45°C (reported May 2008) to 0.69°C (reported June 2017). This represents an about 53% administrative temperature increase over this period, meaning that more than half of the reported (by GISS) global temperature increase from January 1910 to January 2000 is due to administrative changes of the original data since May 2008.



Reply to  TheFinalNail
April 12, 2024 12:00 pm

Translation:

Nail gets the same line with Fake Data as satellites, therefore using Fake Data is good and justified.

It is fraud.

Reply to  karlomonte
April 12, 2024 4:49 pm

Not sure your translation machine is working, Karl.

Can anyone decipher this, please?

Reply to  TheFinalNail
April 12, 2024 5:05 pm

You advocate for data fraud —— ¿comprende?

Reply to  karlomonte
April 12, 2024 5:39 pm

No, I don’t.

I advocate for full data disclosure and that people, especially those who claim to be ‘sceptics’, learn to download the data from the primary producers and learn how to analyse it in a basic way.

I must apologise for my above comment if your first language isn’t English.

Reply to  TheFinalNail
April 12, 2024 10:27 pm

Your methods of analysis are primitive and mathematically juvenile.

You have deliberately chosen a period you know has two major El Ninos in it.

You have FAILED UTTERLY to con anyone but yourself.

At least you have now discounted any use of GISS, BEST etc

Reply to  TheFinalNail
April 13, 2024 6:39 am

I advocate for full data disclosure

No you don’t. That would include all fabrications of data and the increased uncertainty introduced. Why have you been unable to find a paper and show it with a comprehensive description of uncertainty propagation from calculating a daily average to the final GAT figure?

Neither Nick Stokes nor Mosher has ever published anything here addressing this either. Just a lot of dancing and gaslighting, just like you.

Why is that? Could it be that the analysis of MEASUREMENTS does not include any standard practices for determining the uncertainty at each step?

Is it why no university or government agency has funded an ISO analysis of the processes being used? NIST has reams of documents and recommendations for handling measurements. Every mechanic, machinist, and lab has very detailed documentation and procedures but not NOAA, BOM, MET, or any other organization!

Why is there no prepared documentation detailing the measurement uncertainty calculations on the web for immediate access? Taxpayers deserve better!

The whole process is designed to be opaque. Trust us, we know what we are doing. Does that sound like the old snake oil salesman?

Reply to  TheFinalNail
April 12, 2024 1:44 pm

You mean the period when most urban surface sites are already totally infilled and densified.

You know it is all the massive mal-adjusted cooling of the past, especially around the 1920-1940 period, that the climate scammers use to pretend the globe is warming.

Reply to  bnice2000
April 12, 2024 4:51 pm

So urban heat is warming the lower troposphere in the mid-Pacific… right?

Reply to  TheFinalNail
April 12, 2024 10:29 pm

NO !…

El Ninos are doing that, a fact your cherry-picked period shows you are clearly well-aware of.

Please try to keep up.. it is tedious trying to teach you basic comprehension. !

Reply to  TheFinalNail
April 12, 2024 9:54 pm

Just by eyeballing, I’d say that one difference is that UAH has greater variance than GISS and the UAH lows are typically lower than GISS’s.

April 12, 2024 7:04 am

That there are “Ghost Stations” is only half the story. At least that’s true for GISTEMP as they take their Land Ocean Temperature Index, LOTI, all the way back to 1880 and change hundreds of entries every month. So far in 2024 they made 544 changes in January and 321 in February and they averaged 316 changes per month in 2023.

AlanJ
April 12, 2024 7:18 am

It sounds like NOAA pretty clearly flags estimated values in USHCN with an “E” for estimate, so I don’t see how this could possibly be an issue for anyone. But if it does bother you, you’ll be happy to know that USHCN has been superseded as the official historical climate record for the US by nClimdiv, which uses a much larger network of stations from the GHCN (more than 10,000 compared to 1200 in USHCN).

And as always in these articles whining that adjustments to the network are improper or unjustified, no one seems to be able to say specifically what impacts the adjustments are having or why those are bad. They just say, adjustments = bad. Of course, it turns out that the adjustments bring the full nClimdiv network into alignment with the pristine US Climate Reference Network series, so the evidence shows that the adjustments are proper and doing exactly what they’re supposed to do:

[chart]

Reply to  AlanJ
April 12, 2024 7:49 am

So, no visible temperature trend for the last 20 years, and are we supposed to panic about this? Where is the supposed “tipping point”, exactly?

AlanJ
Reply to  stevekj
April 12, 2024 8:03 am

On the contrary, both series show a warming trend of about 0.3 degrees per decade, which is faster than the whole globe:

[chart]

Sometimes you have to do more than eyeball stuff while squinting.

Reply to  AlanJ
April 12, 2024 9:37 am

Ok, so the temperature is rising.
What’s my best action?
Cover my roof with solar panels or steel?

Reply to  David Pentland
April 12, 2024 4:55 pm

Do whatever you think necessary. Just don’t pretend, like this site does, that it’s not happening; or that it’s not consequential.

It is happening and it is going to have consequences. For you.

Reply to  TheFinalNail
April 12, 2024 5:05 pm

/snort/

Reply to  TheFinalNail
April 12, 2024 6:00 pm

Strong El Nino events won’t continue to be prevalent for ever.

But it is all you have, except a rancid brain-washed anti-science belief system.

You have no evidence of any warming by atmospheric CO2…

… and no evidence of any human causation for El Ninos.

There is no evidence the slight natural warming has been anything but beneficial.

There is no evidence that there will be “consequences”.

(what a ridiculously stupid comment from you)

You are scientifically EMPTY. !

Stop pretending you are not.

Reply to  TheFinalNail
April 12, 2024 11:21 pm

I think it’s necessary to harden our infrastructure (steel roofs for instance) and assure a plentiful, affordable supply of reliable energy to best cope with whatever happens. The consequences I see today are ineffective, damaging attempts to “fight” climate change, as if we could.

You see a little uptick on a chart and cry “the sky is falling”. Look outside, it’s just weather.

Reply to  TheFinalNail
April 15, 2024 7:28 am

DP is asking a good question. The focus should be on the right combo of adaptation and source minimization.

ducky2
Reply to  AlanJ
April 12, 2024 11:14 am

Given that CONUS is a continental country in the mid-latitudes, it’s going to show more variation from month to month, which means that the anomalies are more dispersed along the y-axis. This gives a larger slope, but doesn’t equate to faster warming.

AlanJ
Reply to  ducky2
April 12, 2024 11:27 am

That isn’t how slopes work, but I appreciate your attempted salvage operation.

ducky2
Reply to  AlanJ
April 12, 2024 12:10 pm

In OLS, the goal is to find a line that best fits the observed data points. Increased variability means more extreme data point dispersion along the y-axis, and the regression line will have to accommodate that, which will give a more pronounced slope compared to the global anomaly, which is claimed to be accurate to the hundredths place.

AlanJ
Reply to  ducky2
April 12, 2024 12:29 pm

If all you’re saying is that there is greater uncertainty in the estimated trend for smaller regions than for the globe, that is true and unobjectionable. That doesn’t mean the calculated trend for CONUS is not correct, it just means we are less certain of the correctness than we are for the global trend.

Reply to  AlanJ
April 12, 2024 2:47 pm

It does mean that there is no significant difference between the trend…

Failing stats/maths 101, yet again.. poor AJ.

USCRN-v-UAH-global-trends
Reply to  bnice2000
April 12, 2024 3:23 pm

Oops.. Rushed a bit this morning, calculated UAH trend over full period from 1979…

Here is the corrected graph with all trends calculated over 2005 -> period.

trends-uscrn-etc
Reply to  AlanJ
April 12, 2024 12:54 pm

Try getting the vertical axis labelled correctly.

Reply to  AlanJ
April 12, 2024 2:15 pm

This is the trouble when letting a monkey loose with data.

Global data is much averaged and USCRN has a FAR greater range than global data.

There is absolutely no significant trend difference between USCRN and UAH global.

Now, still waiting for evidence of human causation.

USCRN-v-UAH-global-trends
AlanJ
Reply to  bnice2000
April 12, 2024 2:38 pm

If you’re trying to say that the USCRN trend is likely to be less than 0.3 deg per decade and more similar to the global trend, I am not going to argue. The bounds of uncertainty in the observed trend certainly don’t dispute that. We both agree there has been significant warming in the US, which is the point.

Reply to  AlanJ
April 12, 2024 3:26 pm

Showing you are ignorant of trend significance.. Not a good look

FAIL !!

Corrected graph is here for period 2005 ->

Now, still waiting for evidence of human causation.

In the absence of which, we will have to assume it is a slight NATURAL warming from El Nino events. !

trends-uscrn-etc
sherro01
Reply to  AlanJ
April 12, 2024 3:33 pm

AlanJ,
Warming of What?
Precisely what is that thermometer in a screen measuring?
Geoff S

Reply to  AlanJ
April 12, 2024 8:09 am

“It sounds like NOAA pretty clearly flags estimated values in USHCN with an “E” for estimate, so I don’t see how this could possibly be an issue for anyone.”

It is a problem for me if the stations given an “E” (for estimated) are lumped into any averaging with stations having their own particular data readout (i.e., those not given an “E”). Simple mathematics says that will result in unfair weighting being given to those stations used to derive “E” values, beyond just providing their own data input.

A more rational, more mathematically pure approach would be to simply drop those stations no longer producing quality data from any statistical analysis of any given objective dataset.
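To put a number on the weighting concern, here is a toy Python calculation with five hypothetical stations and made-up readings; it only shows the arithmetic, and does not represent any real USHCN stations.

# Toy example: infilling a dead station from its neighbours re-uses their readings,
# so those neighbours carry extra weight in the network average.
# All station values below are invented for illustration.
live = {"A": 10.0, "B": 12.0, "C": 14.0, "D": 20.0}   # stations that still report
ghost_neighbors = ["A", "B"]                          # dead station "E" is infilled from A and B

estimate_E = sum(live[s] for s in ghost_neighbors) / len(ghost_neighbors)   # 11.0
mean_with_ghost = (sum(live.values()) + estimate_E) / 5                     # 13.4
mean_live_only = sum(live.values()) / 4                                     # 14.0

print(mean_with_ghost, mean_live_only)
# Including the estimate pulls the average toward A and B: each now effectively
# carries a weight of 1/5 + 1/10 = 0.3 in the mean, while C and D drop to 0.2.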

AlanJ
Reply to  ToldYouSo
April 12, 2024 8:21 am

You don’t have to include the estimated values, that’s why they’re flagged. You only need to do that if you are attempting to construct an absolute temperature estimate, because combining partial series in such a case produces spurious trends. If you use anomalies you can simply drop the estimated values. NOAA used to try to construct an absolute temperature index for CONUS, I’m not sure why they bothered, but that was the motivation for needing to estimate station values as I understand it.

Reply to  AlanJ
April 12, 2024 8:57 am

So, left unanswered by you: do the “average” temperature values distributed regularly to the public by USHCN officials include, or not include, stations reported as having an “E” suffix?

I see no reason that using anomalies as compared to absolute temperature values allows one to “simply drop the estimated values” as you assert.

The whole gist of the above WUWT article is that by including the “E”-designated stations in their reporting, USHCN scientists are incorrectly/improperly reporting averaged temperature data. Actually, Lt. Col. John Shewchuk, certified consulting meteorologist, says it more strongly:
“NOAA fabricates temperature data for more than 30 percent of the 1,218 USHCN reporting stations that no longer exist.” (my bold emphasis added)

AlanJ
Reply to  ToldYouSo
April 12, 2024 9:17 am

So, left unanswered by you: do the “average” temperature values distributed regularly to the public by USHCN officials include, or not include, stations reported as having an “E” suffix?

The USHCN US average did indeed use the estimated values, and indeed could do nothing else, because they attempted to provide an estimate for the US average as an absolute temperature value. Most everyone else does not, because they just use anomalies.

The reason you have to have a continuous record if you aren’t using anomalies takes a minute to wrap your head around, but is not that complicated. Say I have two generic instruments reporting values of… something, over time:

comment image

And I want to know their average. They are both flat lines, so intuitively, I reckon that the average value should be a flat line somewhere in the exact middle of both series. But if I naively average the series together and plot that:

comment image

Uh oh, I’ve got a problem, why is there a trend in the average? Neither instrument was reporting a change over time in the metric being measured. And the answer is visually obvious: one of the instruments stopped reporting in 1964, so the naive average simply “jumps up” and becomes the value of the one remaining instrument. I have inadvertently induced a trend in the series where there shouldn’t be one.

My options for avoiding this are to either:

a. extend series A so that it never ends (maybe by infilling with values from other nearby instruments).

b. normalize both series to be on a common zero (take the anomaly) so there is no offset between them to begin with, since I don’t care about the offset anyway.

You can do either one, and both will give good results, but option A is more work, and will produce the dreaded “zombie data” that everyone here is up in arms over.
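AlanJ’s jump-in-the-average example is easy to reproduce. A minimal Python sketch, using invented flat series and an invented 1964 cutoff, follows; it shows the spurious step in the naive average and how a common-baseline anomaly removes it.

# Two flat instruments: A reads 10.0 until it stops reporting after "1964",
# B reads 20.0 throughout. Naively averaging whatever reports each year
# introduces a step (a spurious trend); anomalies on a common baseline do not.
# The values and dates are invented for illustration.
import numpy as np

years = np.arange(1950, 1981)
a = np.where(years <= 1964, 10.0, np.nan)   # instrument A dies after 1964
b = np.full(years.shape, 20.0)              # instrument B keeps reporting

naive = np.nanmean(np.vstack([a, b]), axis=0)            # jumps from 15.0 to 20.0
base = slice(0, 15)                                      # common baseline: 1950-1964
anom = np.nanmean(np.vstack([a - np.nanmean(a[base]),
                             b - np.nanmean(b[base])]), axis=0)

print(naive[:3], naive[-3:])   # [15. 15. 15.] ... [20. 20. 20.]: a spurious "trend"
print(anom[:3], anom[-3:])     # all zeros: no trend, as expected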

Reply to  AlanJ
April 12, 2024 9:42 am

Your simplistic argument fails because there is no such thing as an average of two measurements from different instruments once hypothetical instrument A fails at hypothetical time 1964.

You have advanced an apples-to-oranges argument: an average of readings from two different instruments (pre-1964) cannot be evaluated or used to develop a trend line by conflating it with data from a single instrument (post-1964).

As you stated, but overlooked: it’s not that complicated.

AlanJ
Reply to  ToldYouSo
April 12, 2024 10:05 am

My example is a simplification. Trying to debate the details of the simplification is pointless. You should be able to understand conceptually what I am getting at. The point is that you have to account for structural changes in the network composition over time, or you will introduce spurious trends, and there are two valid approaches for doing this. USHCN chose the valid option that requires estimating missing values, but you don’t have to follow that lead, and most other orgs don’t.

Reply to  AlanJ
April 12, 2024 11:18 am

“Trying to debate the details of the simplification is pointless.”

I will argue instead that facts matter.

To wit: you left out option c, which is to just eliminate altogether any “estimation” of no-longer-active station data from any reported averages.
That would be short, simple and scientifically valid, unlike the other two options you fronted.

AlanJ
Reply to  ToldYouSo
April 12, 2024 11:26 am

If you include incomplete station records, then taking an average will introduce spurious trends into the absolute temperature estimate, that’s why I didn’t include this approach in the list of viable options. If you fail to comprehend why, try re-reading the example I presented above, but this time with an open and inquisitive mindset, instead of merely looking for an angle of attack. Then ask follow-up questions if you’re still struggling.

Reply to  AlanJ
April 12, 2024 5:11 pm

This isn’t an unknown phenomena. Here is a web page to read.

Simpson’s Paradox (Stanford Encyclopedia of Philosophy)

However, it is never a reason for modifying data to make contrived long records. One of these days the bill will come due for having done this. No other scientific endeavor or business is allowed to do this. In fact, criminal penalties can result from “fixing” data.

Reply to  ToldYouSo
April 12, 2024 5:07 pm

Exactly!

Reply to  AlanJ
April 12, 2024 11:55 am

AJ-Man ran away from the “NOAA fabricates temperature data for more than 30 percent of the 1,218 USHCN reporting stations that no longer exist.” quote.

Surprise.

Uh oh, I’ve got a problem, why is there a trend in the average?

Fake Data is fraud, not a “problem”.

AlanJ
Reply to  karlomonte
April 12, 2024 12:32 pm

I’ve not run away from this, I’ve explained why they are doing it. The choice of the word “fabricate” is intended to elicit an emotional response from the WUWT readership, implying a falsification, but the “fabrication” is noting more than estimating the missing values from the values of nearby stations, and it is done transparently and with thorough and clear documentation, and for a specific purpose that is completely optional. You can freely drop the estimated values provided you’re working with the anomalies, and most people do.

Let me know if you are confused about any other facets of the discussion, I’ll do my best to help you.

Reply to  AlanJ
April 12, 2024 3:28 pm

You constantly admit to using FAKED and MAL-MANIPULATED data.

Not science of any sort.

No confusion about that fact.

Reply to  bnice2000
April 12, 2024 3:58 pm

Bingo.

How these climate types have put themselves into a box where making up data is encouraged is completely beyond anything I could ever imagine would happen.

And then they revel in it.

sherro01
Reply to  AlanJ
April 12, 2024 3:48 pm

AlanJ,
So I had a lingering illness and my GP was taking my temperature 3 times a day in case I turned febrile. A couple of times I was in the toilet when he wandered by on his rounds, so we ended up with missing values.
Doc was not well trained in statistics. “Never mind,” he said, “we can take an average from the other patients in the ward, then fill in your missing values. Everyone used the same thermometer, sterilized and reset, of course.”
Don’t regard this story as fanciful and unrelated to the US temperature sets. Have a think about how close in settings and properties two temperatures need to be before you can average them. Some commenters here are making the point that these daily temperatures, even at one site, are too different to be treated statistically with the same equations used for mathematical numbers not derived from life situations. Geoff S

AlanJ
Reply to  sherro01
April 12, 2024 5:02 pm

Your example is quite extreme, but, obviously, yes, estimating temperatures for a station from other nearby stations is less certain than having the temperature for the station to begin with. But if you need a continuous station record, you have to do infilling. The infilling is fine if you’re very careful about it, but it’s a lot of effort for little gain. Just use the anomalies.

Reply to  AlanJ
April 12, 2024 5:38 pm

Now that “infilling” argument is really, really funny when examined at a hypothetical limit condition:

Let’s presume in 10 years only two of the once-operational 1,218 USHCN temperature monitoring stations are still active as a result of continued cutbacks in station maintenance and/or data gathering. NOAA and AlanJ maintain that’s still OK because “infilling” by estimating temperature readings for other 1,216 locations is OK based on real data being obtained from the two remaining active stations.

Reductio ad absurdum . . . ROTFL!

Reply to  AlanJ
April 12, 2024 7:49 pm

It isn’t extreme, it occurs constantly.

Infilling is a euphemism for creating data. You do realize that can be a crime; university academics have been fired for doing that.

Here is a section from:

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0005738

There can be little doubt about the fraudulent nature of fabrication, but falsification is a more problematic category. Scientific results can be distorted in several ways, which can often be very subtle and/or elude researchers’ conscious control. Data, for example, can be “cooked” (a process which mathematician Charles Babbage in 1830 defined as “an art of various forms, the object of which is to give to ordinary observations the appearance and character of those of the highest degree of accuracy”[12]); it can be “mined” to find a statistically significant relationship that is then presented as the original target of the study; it can be selectively published only when it supports one’s expectations; it can conceal conflicts of interest, etc… [10], [11], [13], [14], [15]. Depending on factors specific to each case, these misbehaviours lie somewhere on a continuum between scientific fraud, bias, and simple carelessness, so their direct inclusion in the “falsification” category is debatable, although their negative impact on research can be dramatic [11], [14], [16]. Henceforth, these misbehaviours will be indicated as “questionable research practices” (QRP, but for a technical definition of the term see [11]).

The worst part is the lack of transparency in letting the public know that created data is used in the calculations. It matters not “how careful” a scientist is; hiding the fact of creating data is unethical. How many people in Congress, or the President, know this? I bet fewer than 1 in 10,000 government bureaucrats know this, if that.

Reply to  AlanJ
April 12, 2024 3:57 pm

Here, let me help, you missed the key point:

Fake Data is fraud, not a “problem”.

AlanJ
Reply to  karlomonte
April 13, 2024 3:42 pm

Oh this isn’t fake data, it’s estimated values using nearby stations, and clearly labeled as such. And USHCN is retired, it isn’t used to derive the average US temperature index any more, so the whole thing is a moot point.

Nick Stokes
Reply to  karlomonte
April 12, 2024 4:25 pm

“AJ-Man ran away from the ‘NOAA fabricates temperature data for more than 30 percent of the 1,218 USHCN reporting stations that no longer exist.’”

Everyone here is running away from the simple fact that USHCN was replaced by nClimDiv in March 2014. It is not used.

Reply to  Nick Stokes
April 12, 2024 5:07 pm

And another data fraudster makes excuses.

Reply to  Nick Stokes
April 12, 2024 6:02 pm

USHCN 2.5 still exists.. Or are you ignorant of that as well !!

Nick Stokes
Reply to  bnice2000
April 12, 2024 6:41 pm

OK, show me a USHCN average calculated by NOAA since 2014.

Reply to  Nick Stokes
April 12, 2024 6:03 pm

ClimDiv and/or USCRN are being adjusted so they match.

I suspect it is ClimDiv.. which means it also is TOTALLY FAKE.

How else could it be anything but fake given the deplorable state of the surface sites it is derived from. !

Reply to  Nick Stokes
April 12, 2024 7:54 pm

Stop LYING. NOAA is still updating it every month and does compare it with ClimDiv, as I showed you in the past, straight off the NOAA website, several times.

Reply to  Sunsettommy
April 12, 2024 9:37 pm

Nick the Gaslighter.

Nick Stokes
Reply to  Sunsettommy
April 12, 2024 10:50 pm

You have never shown a US average calculated by NOAA with USHCN since 2014.

Reply to  AlanJ
April 12, 2024 5:06 pm

That is not the reason it is done. It is done in order to maintain a long record so climate science can claim that they are using long-term stations. Something about short-term stations causing spurious trends, which is what you are demonstrating.

It is a farce. It is akin to p-hacking to make the data look better. If a station changes for whatever reason, that record should be terminated and a new one started.

Would you let a nuclear plant put in a new subsystem and begin to change data so they could show a continuous long record of released radiation? How about a sewage station replacing a flow meter and changing data to show a continuous record?

AlanJ
Reply to  Jim Gorman
April 13, 2024 3:44 pm

It was done because USHCN was presenting a long record of climate change as absolute temperature values. Lots of other orgs present long records of climate change as anomalies, and they don’t use estimated values (see e.g. nClimDiv US temperature index, which supersedes USHCN).

Reply to  AlanJ
April 12, 2024 2:49 pm

Using FAKED, MAL-ADJUSTED, and CONTAMINATED surface data.

That is the AGW-scam way !!

Reply to  AlanJ
April 12, 2024 9:05 am

“And as always in these articles whining that adjustments to the network are improper or unjustified, no one seems to be able to say specifically what impacts the adjustments are having or why those are bad.”
______________________________________________________________

The earliest GISTEMP Land Ocean Temperature Index (LOTI) record that can be found on the WayBack Machine is from 1997 (my link to it no longer works; I don’t know why). That 1997 record covers 1950-1997. Comparing its 1950-1997 trend with the trend over the same years in GISTEMP’s 2018 LOTI, in a “then and now” graph, shows that by 2018 the revisions had increased the 1950-1997 trend by 0.25°C per century.

GISTEMP-CHANGES-1997-2018
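For anyone who wants to repeat that “then and now” comparison, the calculation is just two trend fits over the same window. The arrays below are placeholders (zeros), not the real GISTEMP values; substitute the archived 1997-era LOTI series and the current one to reproduce the comparison.

# Sketch of the "then and now" comparison: fit a trend to the same 1950-1997
# window in two versions of an index and compare the slopes.
# The arrays are placeholders; fill them with the archived and current LOTI values.
import numpy as np

years = np.arange(1950, 1998)
loti_1997_version = np.zeros(years.size)   # placeholder for the 1997-era values
loti_2018_version = np.zeros(years.size)   # placeholder for the 2018-era values

def trend_per_century(series):
    return np.polyfit(years, series, 1)[0] * 100.0

print(trend_per_century(loti_2018_version) - trend_per_century(loti_1997_version))
# A positive difference would mean later revisions steepened the 1950-1997 trend.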
AlanJ
Reply to  Steve Case
April 12, 2024 9:28 am

This is not really an answer to the first question – what is the impact of the adjustments – because the two versions of the dataset combine numerous sources of change, most importantly a huge increase in the number of stations and coverage of the globe:

comment image

Even if it did manage to touch on the first question, it fails to address the second: why are the adjustments supposed to be bad? As noted, they seem to bring the full network in line with the pristine reference network, which is strong evidence that the adjustments are good and are doing what they are supposed to.

Reply to  AlanJ
April 12, 2024 11:08 am

You don’t get it. GISS rewrote the original data. That graph I put up only compared trend lines for the 1950-1997 time line. In the 21 years from 1997 to 2018 someone at GISS changed the original 1950-1997 data and the result was a 0.25°C per century increase. Look at the graph, the blue trendline is for 1950-1997 and the red trendline is also for 1950-1997. The gray line shows GISTEMP from 1880-2018 but there’s no trendline shown for that time series.

I’m guessing this is a case where I can tell you what the graph says, but I can’t understand it for you.

GISTEMP is rewriting original data.

…most importantly a huge increase in the number of stations and coverage of the globe:

You’re claiming that GISTEMP discovers new stations that have data going back to the 1950s which have a warming trend greater than what the existing stations show.

AlanJ
Reply to  Steve Case
April 12, 2024 12:12 pm

They didn’t “change the data,” indeed you can still obtain all of the raw station data NASA was using in 1997. What they did do is to add a lot more data to the analysis, as more station records were added to the GHCN archive. They also improved their methodologies, in ways clearly detailed in the scientific literature (and you can read a brief synopsis of those here).

If improving data coverage changes the slope of the trend line, it means the old version of the dataset had an inaccurate trendline (unless you seriously want us to believe that more complete coverage of the globe results in a worse estimate of global temperature…).

You’re claiming that GISTEMP discovers new stations that have data going back to the 1950s which have a warming trend greater than what the existing stations show.

They’re not “discovering” stations, the GISTEMP analysis uses the data in the GHCN archive compiled by the NOAA, and they can only use the data that’s available. The NOAA obtains station records from meteorological organizations all over the world, and they increase the number of records they have access to year over year, either through research and data archiving agreements, or simply personal interactions with other met data stewards who want to get their records archived. Those international met orgs increase their data holdings through digitization of historic paper records or just better access to local weather reporting orgs within the country (e.g. airports).

Reply to  AlanJ
April 12, 2024 12:55 pm

And it always seems to benefit the narrative.

AlanJ
Reply to  Steve Case
April 12, 2024 1:15 pm

I disagree, the net effect of the adjustments, to both land and ocean datasets, is to reduce the warming trend:

comment image

If scientists are actually committing fraud to exaggerate the warming, they’ve accidentally gone the wrong direction.

But even if that were not true, the challenge remains: you have to show that the adjustments are improper or unjustified, and no one here can do that. Looking at the graph above, the adjustments don’t even have much impact at the global scale. We could just throw them all out and the “narrative” would be perfectly intact.

Reply to  AlanJ
April 12, 2024 2:54 pm

Again, the blue line is NOT raw data.

It is a Zeke’s FAKE raw data, an earlier mal-adjusted from of GHCN !

You have been conned yet again, by the AGW con-masters, because you are incapable of discerning otherwise.

You are brain-washed into a rancid “belief” of the whole scam, hence fall for every little CON they try.

sherro01
Reply to  bnice2000
April 12, 2024 4:06 pm

Does your graph comparison include the errors that Pat Frank recently identified from the change of shape of liquid-in-glass thermometers? Such errors, included or ignored, affect the significance of your comparison. It is all part of the inability of many in the trade to determine the real uncertainties in this type of temperature data.
Geoff S

Reply to  sherro01
April 12, 2024 5:32 pm

The easy answer is that ANY uncertainty, even from each daily average, is simply tossed in the waste can. And at the end of the line, an SEM is calculated for the anomalies from values that are out to at least 3 decimal places. Is it any wonder why the uncertainty comes out so small when it isn’t properly propagated?

AlanJ
Reply to  bnice2000
April 12, 2024 5:02 pm

Again, the blue line is NOT raw data.

prove it

Reply to  AlanJ
April 12, 2024 6:07 pm

It is equivalent to 2005 GISS data… which was NOT RAW.

It was based on manic GHCN adjusted data.

It is FAKE. !!

Reply to  bnice2000
April 12, 2024 6:44 pm

And please, show us where the raw “global” data from 1880-1940 came from, because it doesn’t remotely coincide with any actual real data. !

This will be hilarious !!

Reply to  bnice2000
April 12, 2024 6:56 pm

Also show us where “global” sea temperatures were measured.

AlanJ
Reply to  bnice2000
April 13, 2024 3:45 pm

So you can’t prove it, got it.

sherro01
Reply to  AlanJ
April 12, 2024 4:00 pm

AlanJ,
But there are no criteria to allow you to work out whether adding or deleting a station makes the overall data better or worse.
If you were tasked to design a new, ideal network of stations for the US, what map projection would you select to get even geographic cover? How would you balance the number of high altitude stations with low altitude? How would you best avoid UHI today and in the unpredictable future? Would you continue to monitor the air in a screen a few feet above ground or would you opt for thermometers in the soil? On and on it goes.
The broad outcome is that the present ad hoc station distribution is probably non-ideal because nobody can define ideal. The effects of removing or adding stations from the present network are unknown, but heavily exploited by those who (wrongly) are influenced by beliefs about what these temperatures should show. It is rather smelly.
Geoff S

Reply to  sherro01
April 12, 2024 5:24 pm

My answer is it should make little difference. That is being argued on another thread. As long as stations remain the same throughout, it is not necessary to worry about uncertainty to obtain a trend. Of course, if you need to know the real temperature, that is a different issue. If I average long-term rural New York, rural Nebraska, and rural California, the trend should be able to be identified, if there is one. Or you could use one station per state for a total of 48. The issue is using stations that have been verified to have proper instrument maintenance so they remain constant.

AlanJ
Reply to  sherro01
April 13, 2024 3:46 pm

A new ideal network for the US has already been produced: USCRN. And guess what? It’s exactly consistent with the full nClimDiv network.

sherro01
Reply to  AlanJ
April 13, 2024 6:27 pm

AlanJ,
Please define why it is “ideal”. What criteria are used to determine “ideal”?
Geoff S

AlanJ
Reply to  sherro01
April 13, 2024 7:34 pm

Dear Geoff, I refer you to the abundant and thorough documentation on the USCRN website:

https://www.ncei.noaa.gov/products/land-based-station/us-climate-reference-network

Reply to  Charles Rotter
April 12, 2024 12:52 pm

Isn’t destruction of Federal data a criminal offense? The Feds like to use ‘conspiracy’ statutes to snare the unwary as well.

Reply to  AlanJ
April 12, 2024 11:52 am

Another proponent of Fake Data fraud.

Reply to  karlomonte
April 12, 2024 2:55 pm

Yep.. The AGW cabal KNOW that they have to use as much FAKE data as possible to maintain the scam.

Reply to  AlanJ
April 12, 2024 2:09 pm

Yes, we are well aware that ClimDiv is intentionally adjusted to match USCRN.

Their matching algorithm has been improving over time.

FAIL again !!

AlanJ
Reply to  bnice2000
April 12, 2024 2:34 pm

Yes, we are well aware that ClimDiv is intentionally adjusted to match USCRN.

It is not.

Reply to  AlanJ
April 12, 2024 3:30 pm

Yes it is.

Even a blind monkey like you should be able to see that. !

There is absolutely zero possibility that faked and urban data could match a more pristine system without it being totally intentionally matched.

AlanJ
Reply to  bnice2000
April 12, 2024 5:06 pm

By calling the data fake you assume the conclusion in your premise. A blind monkey might not comprehend this, but a testy bnice should.

Reply to  AlanJ
April 12, 2024 10:33 pm

The fact that you have zero comprehension of the mathematical impossibility of corrupted urban and airport data ever matching pristine site data without manic and intentional adjustments…

… really does show just how mathematically illiterate you really are. !!

Reply to  AlanJ
April 12, 2024 4:19 pm

And of course we know that the USCRN trend depends totally on the 2015/16 El Nino bulge

Slight cooling in all 3 from 2005 to 2015

2005-2015-USA
Reply to  AlanJ
April 12, 2024 4:28 pm

Then the 2015/2016 El Nino bulge..

Then near zero trend from 2017 with a couple of spikes, probably related to the 2023 El Nino at the end.

Rud Istvan
April 12, 2024 7:34 am

I concluded a decade ago that surface temperature data was not fit for climate purpose. Gave many illustrations in essay ‘When Data Isn’t’ in ebook Blowing Smoke. Ghost stations is but one of several serious problems. And the problems are global in GHCN, not just USHCN.

Richard Greene
Reply to  Rud Istvan
April 12, 2024 9:05 am

The average temperature is whatever government bureaucrats want to tell us and there is no way to doublecheck their measurements and statistics.

You either trust them or you don’t trust them

Reply to  Richard Greene
April 12, 2024 10:31 am

It takes a lot of faith, which is why the fear of global warming is the new, approved government religion. It replaces the fear of communist nuclear annihilation, which my parents’ generation ran around freaking out and waving their hands over.

As long as I’ve been alive there has always been some boogeyman that is out to get you and that you needed to be afraid of. But never fear, if it gets too hot, I’m sure there are still thousands of empty backyard nuclear fallout shelters that are cooler.

Reply to  doonman
April 12, 2024 5:09 pm

And the priests of the new modern religion do not abide having their fear tactics revealed.

sherro01
Reply to  Rud Istvan
April 12, 2024 6:00 pm

Rud,
So did I for Australian data. Encouraged by Warwick Hughes (recipient of the Jones “why should I give you my data” email) and economist Alan Moran, I started to study official Australian data in about 1992. I have never seen any justification for its use to alarm the populace with yet another hobgoblin. The system was designed, from the 1850s onwards, for purposes unrelated to its present alarmist use, like assisting pilots at airstrips. Geoff S

Richard Greene
April 12, 2024 8:16 am

These were the first two items on my blog’s recommended reading list this morning at 8am

The Epoch Times 30% story was discussed in the UK:

Data From Nonexistent Temperature Stations Undergird Trillion-Dollar Climate Policies – Climate Change Dispatch

Tony Heller claims the correct percentage is about 50%

Startling New Revelation | Real Climate Science

I think the right answer is no one outside of NOAA knows.

NOAA does not use USHCN now

Did they delete the ghost stations?

Who knows?

They claim to use more weather stations now.

Would NOAA government bureaucrats do the right thing if it does not support the official CAGW narrative?

Don’t make me laugh

NOAA now uses nClimDiv, described in detail below. Could be worse or better than USHCN.

Based on about 15 years of experience, NOAA will never correct science errors distorting their statistics.

They added the 114-station rural USCRN network, which they claim consists of properly sited weather stations. Strangely, USCRN produces almost the same numbers as nClimDiv with its poor siting. That makes no sense.

Who knows if USCRN is really accurate?

Do you trust NOAA?

If not, why trust their USCRN?

The bottom line is NOAA cannot be trusted, and that means EVERY number they present to the public is likely to be biased. And when Climate Realists discover the bias and bad science, NOAA ignores them.

The US has a lot of weather stations. Deleting 30% would not be a problem.

^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Note to Charles:

The description under the USCRN chart on the WUWT home page says:

“The US Climate Reference Network record from 2005 shows no obvious warming during this period. The graph above is created monthly by NOAA.” WUWT home page

That is not even close to being true.

USCRN has a US warming rate of +0.34 degrees C. per decade since 2005, compared with the UAH global average temperature warming rate of +0.14 degrees C. per decade since 1979

WUWT should not be hiding a significant warming trend in USCRN with a fake description. The fact that the US warming trend is hard to see with two eyeballs does not mean there is no warming trend.

^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^

“NCEI now uses a new dataset, nClimDiv, to determine the contiguous United States (CONUS) temperature.

This new dataset is derived from a gridded instance of the Global Historical Climatology Network (GHCN-Daily), known as nClimGrid.

Previously, NCEI used a 2.5° longitude by 3.5° latitude gridded analysis of monthly temperatures from the 1,218 stations in the US Historical Climatology Network (USHCN v2.5) for its CONUS temperature.

The new dataset interpolates to a much finer-mesh grid (about 5km by 5km) and incorporates values from several thousand more stations available in GHCN-Daily.

In addition, monthly temperatures from stations in adjacent parts of Canada and Mexico aid in the interpolation of U.S. anomalies near the borders.

The switch to nClimDiv has little effect on the average national temperature trend or on relative rankings for individual years, because the new dataset uses the same set of algorithms and corrections applied in the production of the USHCN v2.5 dataset.

However, although both the USHCN v2.5 and nClimDiv yield comparable trends, the finer resolution dataset more explicitly accounts for variations in topography (e.g., mountainous areas).”

https://www.ncei.noaa.gov/access/monitoring/national-temperature-index/background
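The gridding idea in that NOAA description is easy to illustrate. The Python sketch below is not NOAA’s nClimGrid algorithm (their interpolation is far more elaborate); it only shows, with invented stations, why averaging into grid cells keeps a dense cluster of stations from dominating the national mean.

# Toy gridding: average stations into lat/lon cells, then average the cells,
# so a dense cluster of stations does not dominate the result.
# Station locations and anomalies are invented; this is not NOAA's method.
import numpy as np

stations = np.array([
    (40.1, -75.2, 0.6), (40.3, -75.4, 0.7), (40.2, -75.1, 0.8),  # a dense cluster
    (44.5, -103.5, 0.1),                                          # a lone rural site
])

cell_size = 2.5  # degrees, for the toy example (nClimGrid uses ~5 km cells)
cells = {}
for lat, lon, anom in stations:
    key = (np.floor(lat / cell_size), np.floor(lon / cell_size))
    cells.setdefault(key, []).append(anom)

grid_mean = np.mean([np.mean(v) for v in cells.values()])
plain_mean = stations[:, 2].mean()
print(grid_mean, plain_mean)   # roughly 0.4 vs 0.55: gridding damps the cluster's pull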

Reply to  Richard Greene
April 12, 2024 3:00 pm

“Strangely, USCRN has almost the same numbers as nClimDiv with poor siting. That makes no sense.”

It is patently obvious to all but the deliberately blind that one or both are being mal-adjusted to match.

How stupid would they look if their fabricated ClimDiv numbers diverged warmer than their “presumed” reference network?

Do a graph of ClimDiv minus USCRN over time and you will see they are gradually refining their “matching” algorithm.
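That ClimDiv-minus-USCRN plot is straightforward to build once the two monthly series are in hand. The sketch below uses placeholder arrays rather than the published values, so it only shows the mechanics of the difference-and-smooth step, not what the real gap looks like.

# Sketch of the suggested difference plot: subtract aligned monthly anomalies and
# smooth the gap with a 12-month running mean to see whether it drifts over time.
# The arrays are placeholders; fill them with the published nClimDiv and USCRN values.
import numpy as np

nclimdiv = np.zeros(228)   # placeholder: monthly CONUS anomalies, 2005 onward
uscrn = np.zeros(228)      # placeholder: monthly USCRN anomalies, same months

gap = nclimdiv - uscrn
window = 12
running_gap = np.convolve(gap, np.ones(window) / window, mode="valid")
print(running_gap[:3])     # a drifting running mean would indicate the gap changing over time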

Reply to  bnice2000
April 12, 2024 10:37 pm

I’ll help you out RS..

ClimDiv started slightly higher, and they have gradually adjusted their matching algorithm over time, leaving its average difference at the moment at about 0.1°C above.

ClimDiv-minus-USCRN
strativarius
April 12, 2024 8:24 am

Fat Gut is just a hype merchant.

MarkW
April 12, 2024 8:46 am

Most of the stations that are dropping out are rural ones. As a result, when they “average” from the nearby stations, they are using a higher and higher percentage of urban and suburban stations.
The end result is an overall average that is increasingly polluted by micro-site and UHI contamination.

April 12, 2024 8:47 am

For the 2022 climate summary reports by state, for certain states, NOAA included a “hot days” analysis of the entire CONUS (48 states) using a list of 655 GHCN stations with <10% missing data for the period 1900 through 2020.

I used that list of stations to update the analysis through 2023. See my post at WUWT here.
https://wattsupwiththat.com/2024/02/25/open-thread-83/#comment-3872640

I regard the daily files of individual stations, in that list of 655 and in the list of 1,218 stations for the USHCN, as useful for certain purposes.

Sparta Nova 4
April 12, 2024 9:37 am

Oh boy.

April 12, 2024 9:45 am

How did a lunatic get in charge of the UN?

MarkW
Reply to  Joseph Zorzin
April 12, 2024 3:18 pm

A very solid majority of countries are run by kleptocrats.

Reply to  MarkW
April 12, 2024 6:08 pm

“A very solid majority of countries are run by kleptocrats.”

And THEY want to rule and control the world. !

And to all the AGW-scammers and trolls…

… you are already being “controlled” by unsubtle brain-washing and/or payments… the really dumb ones will not even realise it…

You will always be the little people who bow and scrape to your masters.

observa
April 12, 2024 10:13 am

Gavin knows we control the future and it’s the lack of aerosols wot dunnit but you can’t use aerosols to fix it cos something could go wrong with that and then where would we be-
https://www.msn.com/en-au/video/webcontent/web-content/vi-84gtxNMQc3ly0A
So fickle energy it is skeptics.

April 12, 2024 10:52 am

Is there any other field of science where we invent observational data?

April 12, 2024 11:59 am

I’ve followed the climate change hysteria for almost 30 years now and I almost don’t believe this. Almost.

How is this fraud allowed to continue? Sadly, this is only the most outrageous climate impropriety I’ve heard in about a month. It’s not the worst but it has to be in the top 10.

SteveZ56
April 12, 2024 12:04 pm

How Climate “Science” advanced over the past quarter century.

1999: It’s getting cooler–hide the decline!

2024: Not enough thermometer readers–no problem! Invent the data!

Nick Stokes
Reply to  SteveZ56
April 12, 2024 4:20 pm

USHCN has not been used since 2014. The replacement, nClimDiv, has an order of magnitude more stations.

Reply to  Nick Stokes
April 12, 2024 10:38 pm

So, even more JUNK DATA, that can be fabricated to get whatever they want….

Thanks !

April 12, 2024 12:34 pm

NOAA still records data from these ghost stations by taking the temperature readings from surrounding stations, and recording their average for the ghost station, followed by an “E,” for estimate.

Nope, that’s not how E stations are estimated.

Please.

We worked hard to get the code open and free.

NOAA does NOT simply take the surrounding temperature average for the defunct stations.

Even if you did, you can PROVE this doesn’t change the answer. The surface of the earth is OVERSAMPLED, such that you need less than 300 stations to determine the average monthly temp.

How do we know?

Take the 1,000 or so NOAA stations. Calculate the average.

You’ll get ~15C.

Sample 800 of the 1,000 and you get 15C; sample 700, 600, 500, 400, 300, 200, 100.

Result? ~15C.

Why is this?

The average MONTHLY temp is very uniform, and it changes uniformly and predictably with latitude and elevation.
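That over-sampling claim describes a subsampling experiment that anyone can run. The sketch below uses randomly generated station normals centred on an assumed 15 C rather than NOAA’s files, so it demonstrates the procedure (and the standard error that the bare averages omit), not NOAA’s actual numbers.

# Subsampling experiment of the kind described above: compute the network mean
# from progressively fewer randomly chosen stations and watch how much it moves.
# Station values are synthetic, drawn around an assumed 15 C mean.
import numpy as np

rng = np.random.default_rng(0)
stations = rng.normal(15.0, 8.0, 1000)   # pretend monthly means for 1,000 stations

for n in (1000, 800, 600, 400, 200, 100):
    sub = rng.choice(stations, size=n, replace=False)
    se = sub.std(ddof=1) / np.sqrt(n)
    print(f"n={n:4d}  mean={sub.mean():6.2f} C  standard error={se:.2f}")
# The subsample means hover near 15 C, but the standard error grows as n shrinks,
# which is the part of the picture a bare list of averages leaves out.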

ducky2
Reply to  Steven Mosher
April 12, 2024 12:56 pm

“the average MONTHLY temp is very uniform. and it changes uniformly and predictably with latitude and elevation”

Given that temperature itself doesn’t change uniformly and predictably with latitude and elevation, maybe the issue lies in the utilization of monthly averages.

Reply to  ducky2
April 12, 2024 4:03 pm

Their holy averages toss all kinds of valuable information into the rubbish.

Reply to  Steven Mosher
April 12, 2024 3:02 pm

Mosh, having worked at BEST, knows all about FAKING and MAL-MANIPULATING junk data. !

sherro01
Reply to  bnice2000
April 12, 2024 4:40 pm

bnice2000,
No need for this schoolkids type of comment. You might be unaware of very real contributions that SM has made over the years. Look up some of his work in early Climate Audit articles.
Geoff S

Reply to  sherro01
April 12, 2024 6:15 pm

SM’s contributions ceased when he joined BEST.

Reply to  Steven Mosher
April 12, 2024 4:01 pm

And another Fake Data fraudster stops by to prop up the fraud.

take the 1000 or so NOAA stations. calculate the average

You forgot to report the variance, mosh, along with a host of other statistical indicators.

sherro01
Reply to  Steven Mosher
April 12, 2024 4:35 pm

Steven,
What you have just written does not apply to Australian data.
I have selected 45 Australian stations with long temperature records, as the closest I can find to “pristine”, for a study of UHI.
My expectation was that pristine stations would have some properties that set them apart from Urban stations. I failed to find any pristine signature.
I hypothesized that the temperature trend over the same time for pristine stations would be similar (and possibly lower than for Urban stations). But, there are no such systematics evident in the data I studied.
Pristine stations have quite different trends from each other, scattered much the same as urban trends. What is causing the pristine differences (assuming UHI is absent and we are seeing only natural variation)? So far, I have no answer. I have run comparisons on data adjusted for altitude and latitude; it is no clearer.
So, if a trend is changing, there is a problem with averaging. You get a different average if you sample the start of the data or the end of the data. You get different averages for different time cuts, but the comparison within different time cuts changes – stations with higher averages than the rest can become lowest averages in different time periods.
I know that this sounds like “all muck and mystery”, but my present thoughts are that SOMETHING so far unidentified is adding to the observed patterns of variability of temperature trends over time at these very isolated 45 stations.
I have not yet written this up because the more I look, the more complicated it gets. Must have started a dozen articles in the last 3 years, only to give up because of inability to conclude anything of positive interest apart from unexpected complexity. Cheers. Geoff S

Reply to  sherro01
April 12, 2024 6:21 pm

Your comments make an absolute nonsense of “homogenisation” processes. 😉

Things just aren’t homogenous, and any attempt to make them so… is FAKERY.

And that FAKERY is the basis of all surface temperature fabrications around the world.

Reply to  sherro01
April 13, 2024 4:36 am

That’s interesting. My first thought is weather. Dry periods, wet periods, warm spells, cold spells, everything in between. I guess I’m describing random behavior that is not amenable to comparison. Think of 45 drunk guys walking down a road. The random movement of each mixes the paths up as they all walk in the same general direction.

Reply to  Steven Mosher
April 12, 2024 5:32 pm

Make that 15 ±2.0C at least.

Reply to  Jim Gorman
April 12, 2024 10:39 pm

one-sigma or two?

Reply to  Steven Mosher
April 12, 2024 6:53 pm

“surface of the earth is OVERSAMPLED such that you need less than 300 stations to determine the Average Monthly temp . . .the average MONTHLY temp is very uniform. and it changes uniformly and predictably with latitude and elevation”

I think both assertions are demonstrably false.

Just one case in point:
“Alaska’s temperature climate is highly variable. It was moderately warm from the 1920s into the 1940s, much cooler from the late 1940s into the 1970s, and warmer thereafter. Since 1925 (the beginning of reliable records), temperatures in Alaska have increased by about 3°F (Figure 1), compared to about 1.8°F since 1900 for the contiguous United States. There is considerable regional variability in the warming, with the greatest warming occurring in the North Slope (about 4°F) and the least warming (less than 2°F) occurring in the Panhandle and the Aleutians. Most of the warming has occurred in the winter and spring and the least amount in the summer and fall. Summer temperatures have been above average since the late 1980s (Figure 3b), and winter temperatures have been mostly above average since 2001 (Figure 3a). The increase in summer temperatures is primarily due to a large increase in summer minimum temperatures (Figure 3c). The large decadal variability is caused in part by changes in hemispheric climate patterns. For example, a substantial increase in annual average temperature occurred around 1976, followed by gradual additional warming through 2020. Specifically, annual average temperature increased by about 1.5°F from the 1970s to the 1980s and then by about 2°F from the 1980s to the 2010s, with much higher values locally. At Utqiaġvik, annual temperature has increased by more than 12°F since 1976. This warming coincided with a shift in a climate pattern known as the Pacific Decadal Oscillation (PDO). In the past, during the warm phase of the PDO, increased atmospheric flow from the south brought warm air into Alaska during the winter. Accelerated warming has occurred since mid-2013: 2016 and 2019 were the second-warmest and warmest years on record, respectively. The shift to warmer temperatures in the 1970s can be seen in the number of extremely cold nights (Figure 4), which has generally been below the long-term (1930–2020) average since 1980, with the lowest multiyear averages occurring in the 2000–2004 and 2015–2020 periods. The number of warm days was high during the early 1990s, early 2000s, and late 2010s; 2019 experienced the second-highest number of warm days, after 2004 (Figure 5). Over the past 100 years, the length of the growing season in Fairbanks has increased by 45%, and the number of snow-free days has increased by 10%.”
https://statesummaries.ncics.org/chapter/ak/

Nothing in the above-quoted excerpt supports claims of uniform or predictable “monthly temperatures”. My understanding is that the PDO is not predictable.

Attached graphs from same reference give some idea of the magnitude of year-to-year variability of seasonal temperatures in Alaska.

Alaska_T_Variability
ntesdorf
April 12, 2024 4:15 pm

The temperatures that NOAA fabricates for no-longer-existent stations are never cooler; they are always hotter, in order to support the floundering Global Warming initiative. Since the stations no longer exist, the data can never be checked and the fraud exposed. However, funding continues to the perpetrators’ benefit.

Nick Stokes
April 12, 2024 4:37 pm

“The addition of the ghost station data means NOAA’s “monthly and yearly reports are not representative of reality,” said Anthony Watts, a meteorologist and senior fellow for environment and climate at the Heartland Institute.”

As I have been telling WUWT for ten years, USHCN was replaced by nClimDiv in March 2014. The USHCN data has no role in NOAA reports. NOAA has not calculated a USHCN-based average since 2014.

Reply to  Nick Stokes
April 12, 2024 5:10 pm

So making up data is ok, then?

Nick Stokes
Reply to  karlomonte
April 12, 2024 5:14 pm

USHCN was replaced by nClimDiv in March 2014.



Reply to  Nick Stokes
April 12, 2024 6:18 pm

USHCN 2.5 still exists… ClimDiv prior to 2014 is the same FAKED never-was-data that was USHCN.

ClimDiv and/or USCRN are being faked so they match each other.

Yes, Nick LOVES made-up and FAKED data.

It is an AGW thing !

Reply to  bnice2000
April 12, 2024 6:54 pm

Notice how they all run away from the real issue, including Stokes.

Reply to  Nick Stokes
April 12, 2024 6:53 pm

So making up data is ok, then?

Reply to  Nick Stokes
April 12, 2024 7:25 pm

“The USHCN data has no role in NOAA reports.”

Funny what you say, because according to NOAA’s current websites:

“USHCN is a designated subset of the NOAA Cooperative Observer Program (COOP) Network”
(ref: https://www.ncei.noaa.gov/products/land-based-station/us-historical-climatology-network )

The NOAA COOP Network home page (https://www.ncei.noaa.gov/products/land-based-station/cooperative-observer-network ) provides a clickable link to “dataset folders”, the latest of which under “GHCN-Daily” is title-dated “2024-04-11”
(ref: https://www.ncei.noaa.gov/data/daily-summaries/ , accessed today)

Apparently, your reports of the, uh, demise of USHCN have been greatly exaggerated. 

Nick Stokes
Reply to  ToldYouSo
April 12, 2024 10:48 pm

Your second and third links do not say anything about USHCN. The first says it is a network of stations which is a subset of the CO-OP network. But the key thing is that for 10 years NOAA has not calculated a USHCN average in the manner described here, replacing missing station data with an average of nearby data. I have a standing challenge for anyone to produce such an average, actually calculated by NOAA since 2014. No one can. All I get is averages calculated by Tony Heller.

ducky2
Reply to  Nick Stokes
April 12, 2024 11:34 pm

So, the stations that were infilled 10 years ago are being averaged to form the baseline for the anomalies?

Nick Stokes
Reply to  ducky2
April 13, 2024 2:24 am

Makes no sense.

Reply to  Nick Stokes
April 13, 2024 3:04 am

Precisely.. Using knowingly contaminated and faked data makes no sense.

Except as mindless propaganda.

It is what you do.. It is all you have.

Reply to  Nick Stokes
April 13, 2024 7:49 am

“Your second and third links do not sy anything about USHCN.”

That is a laughable reply: the first link clearly states that USHCN is a designated subset of the NOAA COOP Network. The second link specifically discusses the COOP network (as well as “cooperative-observer-network” being included as part of its URL), and the third link is to a “clickable” dataset specifically referenced in the second link, the COOP home page.

I guess this trail was just too difficult for you to follow. Sorry.

But the key thing is that, up to this day, NOAA is still using data from USHCN in its compilations of surface temperatures across the US and around the world . . . despite what you otherwise imagine to be true.

As for the rest of your post: pffthpftptttt.

ducky2
Reply to  Nick Stokes
April 12, 2024 9:34 pm

Nick, PHA is applied to the entire CO-OP. Nice attempt to mislead, LMAO.

April 12, 2024 9:38 pm

If the ‘Ghost Stations’ are being kept on life support by using an unspecified number of nearby stations, it is effectively the same as weighting the remaining nearby stations more heavily than other independent stations that are actually measured. It also means that the old argument about dividing by SQRT(n) verses (n) is even weaker because (n) is actually smaller than claimed.

Basically, the sampling protocol is flawed and everyone says the king’s clothes look wonderful — with a straight face.
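That effective-n point can be made with one line of arithmetic. In the toy calculation below the spread value is an invented placeholder, and the 30 percent figure is the article’s; the point is only that dividing by the square root of the claimed station count, rather than the count of stations that actually report, understates the uncertainty.

# Toy effective-sample-size calculation: if ~30% of the 1,218 stations are
# estimates built from their neighbours, they add no independent information,
# so the honest n is smaller and the standard error larger than claimed.
import math

sigma = 2.0                        # assumed spread of station values, deg C (placeholder)
n_claimed = 1218
n_independent = round(1218 * 0.7)  # only the stations that actually report

se_claimed = sigma / math.sqrt(n_claimed)
se_honest = sigma / math.sqrt(n_independent)
print(f"claimed SE: {se_claimed:.3f}  honest SE: {se_honest:.3f}")
# Using sqrt(1218) instead of sqrt(853) understates the standard error by roughly 16%.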

Reply to  Clyde Spencer
April 12, 2024 10:41 pm

“everyone says the king’s clothes look wonderful — with a straight face.”

All those who are dependent on payments from the king…

… or are so brain-washed they cannot allow themselves to see the naked truth. !

April 13, 2024 6:38 am

Studying the efficacy of an antibiotic, researchers use both microbiological data and clinical data. If the data from either one of these sources is not available or is corrupted, that source is tossed out during the evaluation phase when determining efficacy. The missing data is not “guessed” by the researchers based upon other similar data entries but is excluded from analysis. This is standard protocol during a clinical trial.
Why should climate temperature analysis be any different?

Reply to  clougho
April 13, 2024 7:36 am

Because climate types are on a holy mission.

Reply to  clougho
April 13, 2024 7:49 am

It shouldn’t be any different. The number of people that are possibly going to perish as we go wily niky with stopping oil production is no different than what occurs with a disease.

Old.George
April 13, 2024 7:37 am

I know there’s a word for faking data in a scientific paper. Just can’t quite recall it.

Bob
April 13, 2024 9:39 pm

Drastically cut funding and dismantle whole departments. All managers who allowed this kind of dishonesty must be fired and blackballed from ever working in any government department in any capacity.

Jim Masterson
Reply to  Bob
April 15, 2024 7:00 pm

Or any university.