Climate Fact Check: December 2023

From JunkScience.com

Scorpion2003
January 4, 2024 10:14 pm

Soaring heat across the globe in 2023 suggests a possible acceleration in the effects of human-induced climate change. – Washington Post

They preached about CO2-induced climate change being a long-term process and dismissed 6- to 18-year pauses as short-term variations. Yet conveniently labeling the substantial spike in 2023 as an acceleration of human-induced climate change reeks of hypocrisy, enforcing a narrative while playing fast and loose with the facts. The same goes for those who claim the Monckton pause is definitely over because of it. While supposedly guided by logic, they conveniently ignore the logical expectation that it should, in fact, drop back down.

Reply to  Scorpion2003
January 5, 2024 12:05 am

It took a lot of energy released by a totally natural El Nino event to break the COOLING trend since 2016.

We have some very silly AGW cultists here, but not even they are stupid enough to blame the El Nino on any human causation.

But you can also bet that not one of them will come forward and say the WaPo is either deliberately lying or totally misinformed.

bobpjones
Reply to  Scorpion2003
January 5, 2024 6:27 am

Heads they win, tails we lose. Even if the toss is done fairly.

January 4, 2024 11:21 pm

The media propaganda machine is lying to us 24/7; Covid really showed this up big time to the sheeple.

It’s gotten so ridiculous that I think it’s having the opposite effect: they’re actually creating more climate realists, and frankly nobody believes a word of what their governments say anymore.

JamesB_684
Reply to  Alpha
January 5, 2024 12:43 pm

A small fraction of the population believes every word. These credulous people are a.k.a. Leftists.

Reply to  JamesB_684
January 5, 2024 4:38 pm

Two-thirds of Republicans under 30 support the so-called “Climate Change” agenda.

strativarius
January 5, 2024 12:56 am

“Britain is set to freeze this weekend, with a cold weather alert in force across England”
https://www.express.co.uk/news/weather/1852217/cold-weather-forecast-snow

Always fun to see a warming world getting colder. Do alarmists have any self-awareness?

Anthony Banton
Reply to  strativarius
January 5, 2024 1:50 am

“Always fun to see a warming world getting colder.”

I know the “World” has a lot to thank Britain for …. but we are hardly “The” world.

[image: Climate Reanalyzer temperature anomaly map]

strativarius
Reply to  Anthony Banton
January 5, 2024 1:59 am

Did you really think only the U.K. is colder?

“Extreme cold grips Nordic countries, Russia as floods hit Western Europe”
https://www.washingtonpost.com/world/2024/01/03/weather-sweden-nordics-cold-snow/4130f290-aa20-11ee-bc8c-7319480da4f9_story.html

Maybe you’d like to thank them

Scissor
Reply to  strativarius
January 5, 2024 3:20 am

Other than Europe, Asia, parts of North America, most of South America, the Arctic, much of the Antarctic as well as the Southern Hemisphere, Anthony has a point.

Reply to  strativarius
January 5, 2024 3:26 am

I heard somewhere that Spain is having low temperatures not seen for 30 or so years.

And from Electroverse…. Antarctica, 2nd coldest December evah!
Lapland sets an all-time record low.

sturmudgeon
Reply to  bnice2000
January 5, 2024 5:52 pm

“evah!” and “all time” and similar are lies. There IS one Truth… Education of ‘the masses’ in the last 100+ years… Sucks!

Reply to  strativarius
January 5, 2024 5:35 am

“floods hit”

Makes it sound like some evil force rather than a perfectly natural event happening for billions of years.

bobpjones
Reply to  strativarius
January 5, 2024 6:37 am

It’s more due to ‘feels like’.

Some 30 years ago, I was in Leningrad in late November. The temp was -10C and sinking, but it didn’t feel cold, because it was dry. In the UK, we have many ‘cold’ days that feel colder due to the damp.

Reply to  Anthony Banton
January 5, 2024 3:17 am

That “anomaly” reference period is the coldest period since the 1930s/40s.

It starts at the time of the new ice age scare.

A time of extremely high Arctic sea ice.

A time when all Arctic region raw temperature data showed a deep trough.

It is also based on a lot of unfit-for-purpose surface data.

Take it with a pinch of salt!

bobpjones
Reply to  Anthony Banton
January 5, 2024 6:46 am

Can I ask a question, folks?

Looking at the above map, it states ‘Temperature Anomaly’ using the 1979–2000 baseline.

That’s not the actual temperature, but a deviation, as I understand it. So, if I’m interpreting it correctly, someone, who doesn’t understand what an anomaly represents, may well think that there are ‘hotspots’ in the Arctic & Antarctic, actually reaching nearly 10C.

Is that a justified assumption?
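
For readers unsure of the distinction being drawn here, a minimal sketch (the baseline figure below is invented for illustration, not taken from the map):

```python
# An anomaly is a deviation from a baseline climatology, not an absolute
# temperature. Hypothetical numbers for a single Arctic grid cell in winter:
baseline_1979_2000 = -30.0   # assumed baseline mean for the cell, deg C
anomaly_on_map = +10.0       # the "hotspot" colour shown on the map

actual_temperature = baseline_1979_2000 + anomaly_on_map
print(f"actual air temperature: {actual_temperature:.1f} C")  # -20.0 C, still far below freezing
```

So a near-10C anomaly “hotspot” over the Arctic can still correspond to air that is well below zero.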

Reply to  bobpjones
January 5, 2024 9:20 am

It’s misleading, and it’s only a single day’s coverage.

Reply to  Sunsettommy
January 5, 2024 9:49 am

And when the media got excited about the July ‘heat wave’ in West Antarctica, the sleuthing I did indicated that the ‘Reanalyzer’ was depicting a weather model forecast, not actual measured temperatures. It is hard to keep up with all the prevarications presented by alarmists. When one is driven by an agenda, and ethically challenged, there is little in the way of social pressure to give priority to The Truth.

bobpjones
Reply to  Sunsettommy
January 5, 2024 10:41 am

I hadn’t spotted the 1-day bit. 👍 Even more misleading.

Reply to  Anthony Banton
January 5, 2024 9:19 am

Always ANOMALY, never real-time temperatures, and thus misleading: a common tactic of the climate cultists, of which you are a long-running member.

Reply to  Sunsettommy
January 5, 2024 11:00 am

CR does do modelled real-time temperatures if you click the right thing.

Shows all those NH red areas as being rather colder than I would choose to live in.

Reply to  Anthony Banton
January 5, 2024 10:56 am

The moment anyone cites climate reanalyzer, you know the comment is BS.

The clue’s in the website title, for goodness’ sake: climatereanalyzer.

Reply to  strativarius
January 5, 2024 1:56 am

This is a laugh.

The latest maps from WX Charts show Britain will be in the grip of another Arctic freeze from January 17 

I was keeping an eye on the long-range forecasts for Christmas 2023, and it was reported with some confidence by various organisations, including the Met Office, that we would be experiencing temperatures of -2ºC on Christmas day.

What we got was mild, wet weather for most of the period with temperatures up to around 10ºC in Kent.

So only 12ºC out then.

Reply to  strativarius
January 5, 2024 2:21 am

We had a couple of barely sub-zero nights in early December where I live in England. We are now predicted a couple more next week.

A night-time low of -1C at this time of year would not be newsworthy except it’s been mild-unto-tepid so far this winter. I have still not turned on my central heating, and I live in a draughty old house with no double-glazing.

I really look forward to a good freeze but I ain’t seeing cold weather alerts for my region just now. I can’t be bothered to give the Express my clicks so I don’t know exactly what their story claims, but the headline looks like wild exaggeration to me.

strativarius
Reply to  quelgeek
January 5, 2024 4:37 am

Well…..

Their source is the Met Office. Make of that what you will.

bobpjones
Reply to  strativarius
January 5, 2024 6:34 am

I don’t trust mainstream rags. They tend to exaggerate, forecasting severe weather to sell more papers. There’s a lady on YouTube, ‘Weather Watcher’, who does a credible analysis, without hyperbole.

In her estimation, we’re going to be too dry for any snow.

sturmudgeon
Reply to  bobpjones
January 5, 2024 5:57 pm

but…but… what will the chillun’ do?

January 5, 2024 1:01 am
  • At that time of day, the very last thing any flock of sheep (or livestock of any sort) would be doing is ‘heading off into a sunset‘, or even being on their feet.
  • Any photo containing livestock should NOT have dust visible within it.
  • The sheep created the place they are in, i.e. the dry, dusty desert.
  • Wrong: the sheep did not make the desert, their ‘managers’ did – the sheep were only trying to exist in a place they were forced/coerced into.

IOW, they were herded into their own extinction by wilfully ignorant, brain-dead, cruel, selfish & greedy and utterly mindless (demented) management.

Reply to  Peta of Newark
January 5, 2024 2:54 am

“They were herded into their own extinction by wilfully ignorant, brain-dead, cruel, selfish & greedy and utterly mindless (demented) management”

… oh, just like the rest of us then 😉

Coeur de Lion
January 5, 2024 2:11 am

Do take a look at the NOAA ENSO website and note that this extraordinarily strong etc etc El Niño is due to be fifty-fifty neutral or La Niña by July 2024. No sign of El Niño there. Another thirty-month La Niña upcoming? Willis E was right. And Monckton needs to sharpen his quill.

sturmudgeon
Reply to  Coeur de Lion
January 5, 2024 5:59 pm

NOAA… are they going to provide the Arks this time?

January 5, 2024 3:14 am

Funny how the “Hottest Arctic” currently has the highest sea ice extent in nearly 20 years, isn’t it? 😉

On Jan 3rd, NSIDC extent is above every other year back to 2005, and also above 2001.

Reply to  bnice2000
January 5, 2024 3:17 am

And of course, UAH NoPol shows a decayed transient at the 2016 El Nino and a much smaller spike this year, and has dropped back down to the level it was at the beginning of the century.

[attached image: UAH-NoPol-2023]
Richard Page
Reply to  bnice2000
January 5, 2024 5:30 am

I keep saying it – it’s a clue in the “spot where they’ve got their thumb on the scales” competition! 👎

Art Slartibartfast
January 5, 2024 3:32 am

“This graph is artwork, not science. First, there is no such record of “global temperatures.” Because there is no physical meaning to the notion of “global temperature,” it cannot be measured.” That is all that needed to be stated in that part of the publication; the rest is superfluous.

January 5, 2024 4:21 am

From the link:

…there is no such record of “global temperatures.” Because there is no physical meaning to the notion of “global temperature,” it cannot be measured….

Someone should tell the WUWT Mods then, because this site features two global temperature data sets prominently on the side panel here.

Reply to  TheFinalNail
January 5, 2024 5:15 am

Using the CAGW advocates’ own data against them. What a novel idea – to some, at least!

Reply to  Tim Gorman
January 5, 2024 6:05 am

The UAH data set isn’t exactly a poster-boy for ‘climate skepticism’ these days, lol.

Trying to Play Nice
Reply to  TheFinalNail
January 5, 2024 7:14 am

Do you understand what a cycle is? Do you understand that the Earth is not 40 years old?

Scorpion2003
Reply to  TheFinalNail
January 5, 2024 7:39 am

Skeptics have different opinions on the causes of climate change, its rate, impacts, methodology, etc. They are free to do so. It’s normal for views to change as one gains more knowledge. It’s tricky for alarmists to understand this diversity, as they are often convinced that CO2 is disproportionately and discernibly impacting the climate.

Reply to  TheFinalNail
January 5, 2024 11:10 am

Actually, as you are well aware and have admitted several times recently…

UAH shows no warming apart from El Ninos and absolutely no human causation.

It basically proves that human-caused global “climate change” is a myth.

It is a great weapon against the cluelessness of the AGW trogs that you represent.

Reply to  bnice2000
January 5, 2024 1:39 pm

Poor fungal.. all he can manage is a red thumb..

An EMPTY NOTHING!

rbabcock
Reply to  TheFinalNail
January 5, 2024 5:44 am

Someone should tell commenters that all the global data sets are corrupted computer models designed to overstate temperatures with the exception of UAH and even that one doesn’t measure the temperature of every spot on earth every hour. So the data sets “prominently featured on the side panel” used to calculate temperatures to hundredths of a degree are garbage. But if commenters want to believe the pretty red and blue colors on the maps, go right ahead.

Reply to  rbabcock
January 5, 2024 6:07 am

You have to wonder then why WUWT features the UAH update every month, don’t you?

rbabcock
Reply to  TheFinalNail
January 5, 2024 6:34 am

Actually I wonder why you believe all the CAGW propaganda.

Reply to  rbabcock
January 5, 2024 9:22 am

He is a terminal moron who is so concerned about UAH right now because he has to be worried about something.

Reply to  Sunsettommy
January 5, 2024 11:32 am

“He is a terminal moron”

Most definitely !!

Reply to  Sunsettommy
January 5, 2024 12:43 pm

“He is a terminal moron”

Is that a moron that can’t get any more moronic?

Heck, don’t say that… fungal will treat it as a challenge!!

Reply to  rbabcock
January 5, 2024 11:15 am

He doesn’t “believe”

It is all pretence

Yesterday he admitted there was no human causation and that El Ninos were totally natural events..

He needs to cure his dependency on the adrenalin rush he gets when a slightly higher natural temperature variation occurs.

Reply to  bnice2000
January 5, 2024 11:17 am

He also admitted that CO2 has absolutely nothing to do with “it”.

He is very much a climate skeptic/realist now, aren’t you fungal.

Reply to  bnice2000
January 5, 2024 1:40 pm

No rebuttal… oh dear!

Reply to  bnice2000
January 5, 2024 1:40 pm

Tacit agreement. 🙂

Reply to  TheFinalNail
January 5, 2024 11:12 am

Because it shows the REALITY of the atmospheric temperature.

That REALITY is that there is absolutely zero evidence of any human-caused warming.

UAH data is the death-knell of the AGW scam.

Reply to  TheFinalNail
January 5, 2024 5:59 am

“Someone should tell the WUWT Mods then, because this site features two global temperature data sets prominently on the side panel here.”

Yes, one legitimate data set (UAH) and one data set that has been bastardized all to hell (NASA GISS). One legitimate data set and one that is not legitimate. The WUWT mods should remove the NASA GISS chart or label it “not fit for purpose”, because it is a bogus, bastardized, instrument-era Hockey Stick chart.

The way to tell if you are looking at a bogus, bastardized, instrument-era Hockey Stick chart is to look at the Early Twentieth Century. If the Early Twentieth Century is not presented as being as warm as the present day, then you are looking at a bogus, bastardized, instrument-era Hockey Stick chart, created in a computer to scare people into believing that CO2 is causing unprecedented warming.

The truth is there is no unprecedented warming today caused by CO2 or anything else. There is nothing to fear from CO2. All the written, historic temperature records from around the world show it was just as warm in the Early Twentieth Century as it is today. None of them show a “hotter and hotter and hotter” Hockey Stick profile. “Hotter and hotter and hotter” only occurs in climate alarmists’ computers, not in the real world.

NASA GISS is lying to the American people and to the whole world. It does not represent the temperatures of the world.

Reply to  Tom Abbott
January 5, 2024 6:11 am

Yes, one legitimate data set (UAH) and one data set that has been bastardized all to hell (NASA GISS).

Both show statistically significant warming trends and their confidence intervals overlap. How can one be legitimate and the other not when they both show more or less the same thing?

Reply to  TheFinalNail
January 5, 2024 9:25 am

As usual, you ignored the many comments correcting you on this, showing the changes in the data, which are a sign of data manipulation.

You will never catch up to reality because you are too far gone to realize how badly you have been snookered.

Reply to  Sunsettommy
January 5, 2024 10:47 am

What has that got to do with the observation that, when error margins are taken into account, there is no statistical difference between UAH and NASA in terms of their warming trends?

Even taking error margins into account, both still show statistically significant warming.

Yet the claim is that one data set is “legitimate” but the other one isn’t. How does that work?
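
As an aside, the comparison being argued about here can be made concrete. A minimal sketch, using synthetic stand-in series rather than the actual UAH or GISS records, of fitting an OLS trend to each and reporting a naive ~95% confidence interval (a real comparison would also need to handle autocorrelation, which widens these intervals):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t = np.arange(1979, 2024, 1 / 12)                              # monthly time axis, years
series_a = 0.013 * (t - t[0]) + rng.normal(0, 0.15, t.size)    # assumed trend + noise
series_b = 0.018 * (t - t[0]) + rng.normal(0, 0.12, t.size)

for name, y in [("series A", series_a), ("series B", series_b)]:
    fit = stats.linregress(t, y)
    half = 1.96 * fit.stderr                                    # normal approximation
    print(f"{name}: {fit.slope * 10:+.3f} ± {half * 10:.3f} C/decade (95% CI)")
```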

Reply to  TheFinalNail
January 5, 2024 11:20 am

Yep, and you admitted that the UAH warming was solely from El Nino and had no human causation!

Reply to  TheFinalNail
January 5, 2024 12:30 pm

Malarkey! If the uncertainty interval, i.e. the error margin, is greater than the difference you are trying to identify then you CAN’T IDENTIFY IT!

The difference would be part of the GREAT UNKNOWN.

Reply to  Tim Gorman
January 5, 2024 1:50 pm

Fungal doesn’t understand stats and maths… don’t confuse it with facts.

Reply to  Tim Gorman
January 5, 2024 5:14 pm

Dear Tim,

Happy New Year!

Do you mean the 95% confidence interval for each trend line?

If so, they can overlap yet differences be significant.

See Case #21 in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4877414/pdf/10654_2016_Article_149.pdf

Quote: It can, however, be noted that if the two 95 % confidence intervals fail to overlap, then when using the same assumptions used to compute the confidence intervals we will find P<0.05 for the difference; and if one of the 95% intervals contains the point estimate from the other group or study, we will find P>0.05 for the difference.

While a common source of confusion, there are many other references along the same lines. (Search: comparing two independent regression lines using “95% confidence intervals”)

All the best,

Bill Johnston
http://www.bomwatch.com.au
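
For anyone who finds the Case #21 point counter-intuitive, here is a minimal numeric sketch (the estimates and half-widths are invented, not drawn from any temperature record) showing that two 95% confidence intervals can overlap while the difference between the estimates is still significant at p < 0.05:

```python
import math
from scipy import stats

b1, half1 = 0.13, 0.05                      # estimate 1 with its 95% half-width
b2, half2 = 0.21, 0.05                      # estimate 2 with its 95% half-width
se1, se2 = half1 / 1.96, half2 / 1.96       # back out the standard errors

print("CI 1:", (b1 - half1, b1 + half1))    # (0.08, 0.18)
print("CI 2:", (b2 - half2, b2 + half2))    # (0.16, 0.26) -> the intervals overlap

z = (b2 - b1) / math.sqrt(se1**2 + se2**2)  # z-test on the difference
p = 2 * (1 - stats.norm.cdf(z))
print(f"p for the difference: {p:.3f}")     # ~0.027, significant despite the overlap
```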

Reply to  Bill Johnston
January 6, 2024 6:55 am

“Do you mean the 95% confidence interval for each trend line?”

No, the trend line is the best fit to inaccurate data, i.e. a stated value +/- uncertainty. What is usually done in too much science, especially climate science, is to throw away the uncertainty and assume the stated value is 100% accurate, then fit the trend line to the assumed 100% accurate stated value.

What *should* be done is to black out the graph between the +/- uncertainty intervals of the data values. Then try to figure out what the trend line would be in the blacked out area. See the attached picture I quickly sketched out. The top graph is assuming all data points are 100% accurate. You can fit a trend line to those data points. The bottom graph is showing the uncertainty interval blacked out. The actual value of the data point at each interval can be anywhere in the blacked out portion. How do you fit a trend line?

I know the first claim is that the slope of the top and bottom of the interval would be the same as the slope of the stated values. But that is misleading and an unfounded assumption. It assumes that all the data values lie at either the plus or minus boundary. In fact the data values can be anywhere in the black. One point may be at the positive boundary and the next one at 0 (zero). The one after that may be at +.1 and the next one at -.2.

The simple fact is that the uncertainty subsumes the possibility of defining a trend line! The trend line becomes part of the GREAT UNKNOWN, which is the exact definition of the uncertainty interval.

If the uncertainty of temperature measurements is +/- 0.5C then decadal differences of +0.13C get subsumed into the uncertainty interval. You can only come up with the +0.13C trend line by assuming the stated values of the temperature data are 100% accurate.

It’s only when the stated value +/- uncertainty overcomes the uncertainty interval, i.e. a difference greater than +/- 0.5C (i.e. a stated value of 1 or greater) that you can determine a trend line. But you *still* won’t know what happened in between those points. It will remain part of the GREAT UNKNOWN.

Quote: “It can, however, be noted that if the two 95% confidence intervals fail to overlap”

That’s exactly what I am trying to point out. Until the values become far enough apart to overcome the GREAT UNKNOWN you can’t really know if a difference is measurable. And even then you won’t know the trend line between the two points. It could be a growing sinusoid, a growing exponential, a linear trend line, a second-order trend line, or almost anything!

[attached image: uncertainty sketch]
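
One way to make the “blacked-out band” thought experiment above concrete is to let every stated value float anywhere within ±u of its reading and look at the spread of refitted slopes. The sketch below is purely illustrative: it takes the ±0.5C figure in the comment at face value and assumes the perturbations are independent and uniform, which is itself an assumption the comment disputes.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(30)
stated = 0.013 * years        # stated values lying on a +0.13 C/decade trend
u = 0.5                       # assumed measurement uncertainty half-width, deg C

# Resample each point uniformly inside its band and refit the slope
slopes = np.array([
    np.polyfit(years, stated + rng.uniform(-u, u, years.size), 1)[0]
    for _ in range(10_000)
])

lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"refitted slopes: {slopes.mean() * 10:+.3f} C/decade "
      f"(95% spread {lo * 10:+.3f} to {hi * 10:+.3f})")
```

With 30 annual points the refitted slopes scatter over roughly ±0.12 C/decade around the stated +0.13, comparable to the trend itself; correlated (systematic) perturbations would widen the spread further.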
Reply to  Tim Gorman
January 6, 2024 1:46 pm

Hi Tim,

Do you have a background in biometry or statistics?

You misquoted the paper, which read in full: “if the two 95 % confidence intervals fail to overlap, then when using the same assumptions used to compute the confidence intervals we will find P<0.05 for the difference” (i.e. as they don’t overlap, differences are significant).

Quote from a stats application manual:

“OLS regression assumes the x values are fixed, and finds the line which minimizes the squared errors in the y values”. Confidence intervals are calculated from the squared errors, such that the poorer the fit (the more scattered in y), the wider the CIs, and the lower the R^2 value.

It therefore cannot be presumed that the ‘y’ values are observed without error. Other assumptions are that y-estimates are independent from each other.

It is legitimate to present y-values as means with confidence intervals, but only if you know the sampling distribution – i.e., that y’s are means of samples not individual data-points for which no distribution statistics can be determined. (It is not possible to calculate an error-bar around a single estimate of your weight, or height for example.)

If you don’t agree with the above points, can you provide any references in support of your thoughts.

Cheers,

Bill

Reply to  Bill Johnston
January 7, 2024 5:44 am

I do have experience with measurement, from working as a journeyman mechanic and machinist, to being a customer-paid carpenter and metalsmith, to being an electrical engineer for over 30 years. That includes measuring all kinds of things whose accuracy *HAD* to be adequately assessed or bad things would happen. Rod bearings in engines would either lose oil pressure or seize from being too large or small, cabinets sized to fit a kitchen space could be either too large or small, staircases could be too short to reach the landing, cable lengths in telephone central offices could wind up not reaching the assigned terminal, and on and on and on and ….

“(It is not possible to calculate an error-bar around a single estimate of your weight, or height for example.)”

It actually *is* possible. It’s what the GUM calls a Type B uncertainty interval.

JCGM 100:2008

“4.3.1 For an estimate xi of an input quantity Xi that has not been obtained from repeated observations, the associated estimated variance u2(xi) or the standard uncertainty u(xi) is evaluated by scientific judgement based on all of the available information on the possible variability of Xi . The pool of information may include
⎯ previous measurement data;
⎯ experience with or general knowledge of the behaviour and properties of relevant materials and instruments;
⎯ manufacturer’s specifications;
⎯ data provided in calibration and other certificates;
⎯ uncertainties assigned to reference data taken from handbooks.
For convenience, u2(xi) and u(xi) evaluated in this way are sometimes called a Type B variance and a Type B standard uncertainty, respectively.”
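
The rule of thumb that usually accompanies this passage of the GUM (JCGM 100:2008, 4.3.7) is: if all you can say is that the value lies somewhere within ±a of the reading, treat it as a rectangular distribution with standard uncertainty a/√3. A minimal sketch:

```python
import math

a = 0.5                        # assumed half-width of the interval, e.g. half a scale division
u_type_b = a / math.sqrt(3)    # Type B standard uncertainty, rectangular distribution
print(f"u = {u_type_b:.3f}")   # 0.289
```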

BTW, I didn’t “mis-quote” anything.

“when using the same assumptions used to compute the confidence intervals we will find P<0.05 for the difference” (i.e. that as they don’t overlap, differences are significant).

me: “Until the values become far enough apart to overcome the GREAT UNKNOWN you can’t really know if a difference is measurable.”

These are the exact same thing said two different ways.

“and finds the line which minimizes the squared errors in the y values. Confidence intervals are calculated from the squared errors, such that the poorer the fit (the more scattered in y), the wider the CIs, and the lower the R^2 value.”

This assumes the y-values are 100% ACCURATE. The squared errors are between the assumed 100% accurate y-values and the trend line. This does *NOT* properly present the y-value measurement uncertainty.

My experience with measurements is almost totally practice-based, with little academic input other than the statistics training I received at university getting my EE degree. I learned *far* more about measurement practice in my EE labs than I did from any “statistics” class or professor. I learned 50 years ago that statisticians get little useful training in actual measurement. Not a single statistics textbook ever stated a data point as “stated value +/- measurement uncertainty”, let alone covered how to treat such values. The statistics textbooks haven’t changed over the past 50 years; they still don’t cover measurement uncertainty. You have to go to outside books to get such training.

When I started in on studying climate science I was frankly appalled. The underlying assumption of “all measurement error is random, Gaussian, and cancels” was EVERYWHERE! No attention given to the variance of distributions at all. Just assume all stated values are 100% accurate and the only uncertainty is the sampling error in calculating the average from samples. Even the most basic value, Tavg = (Tmax+Tmin)/2, is not an “average”; it’s a median value of two different distributions. The daily temperature profile is a multi-modal distribution and the median is useless in practical terms. And it doesn’t get better when you average those median values!
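
To see the (Tmax+Tmin)/2 point numerically, here is a small sketch with an invented, deliberately skewed daily temperature profile; the midrange and the true time-average of the curve come apart:

```python
import numpy as np

hours = np.linspace(0, 24, 24 * 60)
# Invented asymmetric daily cycle: slow overnight cooling, sharp afternoon peak
temp = 10 + 8 * np.exp(-((hours - 15) ** 2) / 18) - 2 * np.cos(np.pi * hours / 12)

midrange = (temp.max() + temp.min()) / 2   # what (Tmax+Tmin)/2 reports
time_average = temp.mean()                 # the actual mean of the profile
print(f"midrange: {midrange:.2f} C, time-average: {time_average:.2f} C")
```

For this profile the midrange sits more than a degree above the time-average, purely because the warm peak is narrow.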

I’m sorry if that goes against your beliefs. But I’ve never been one to be anything other than brutally honest.

Reply to  Tim Gorman
January 7, 2024 6:48 pm

I’m glad you have done some measurements Tim.
 
A cupboard 450mm wide is 450mm wide, right; or is it something else? For really fine work, you would also probably use a micrometer to squeeze out the last few decimal points, and not a wooden rule, right? In each case you would end up with a single number, which is fair enough given that errors are expressed as +/- whatever the error number is. Same with your weight or your height.
 
The point is, that unless you actually know, by repeat measures, what the error distribution is, you would have nothing on which to base (or estimate) distribution statistics about the mean of a single point. The mean (the point value) would remain the same anyway, and unless there was something radically wrong in the way you undertook the measurements, errors would likely be ‘Gaussian’ – their distribution about the mean would form a normal distribution. Further, if you were expertly using fit-for-purpose tools, every time you measured the same thing, you would probably get values that all pretty closely clustered around the true mean. In fact, using repeated measurements you can determine the relationship between the number of measurements of an object needed to achieve a certain level of precision. But I fear this is not what you are talking about.
 
You said earlier: “What is usually done in too much science, especially climate science, is to throw away the uncertainty and assume the stated value is 100% accurate. Then fit the trend line to the assumed 100% accurate stated value”, which is an opinion, for which you have no evidence. As individual points are independent, there is no a priori requirement that they lie along the line of best fit.
 
You also say: “The bottom graph is showing the uncertainty interval blacked out. The actual value of the data point at each interval can be anywhere in the blacked out portion. How do you fit a trend line?” Simple: the mean will lie at the mid-point between the error bars, so you find each average and fit a line to those points.
 
[The mean won’t lie “anywhere in the blacked out portion”. Calculated as the mean of squared differences from the mean [sum{(y-mean)^2}/n], error bars will be equidistant around the mean by definition.] Maybe you need to brush-up on your stats (see for instance: https://www.indeed.com/career-advice/career-development/how-to-calculate-uncertainty)  
 
The answer is, as I said: “OLS regression assumes the x values are fixed, and finds the line which minimizes the squared errors in the y values”.
 
You also say: “Until the values become far enough apart to overcome the GREAT UNKNOWN you can’t really know if a difference is measurable.”

“These are the exact same thing said two different ways.”

They are not; there is no such thing as the GREAT UNKNOWN! If you caught it, how would you know you had it, anyway?
 
You also say: “Not a single statistics textbook ever stated a data point as ‘stated value +/- measurement uncertainty’, let alone covered how to treat such values.”
 
However, if you search for “measurement uncertainty”, you would probably get about 383,000,000 results (0.37 seconds), like I just did. So, it’s not as though uncertainty is a mystery.
 
By the way, if average-T is defined as (Tmax+Tmin)/2, then that is what average-T is.
 
All the best,
Bill Johnston
(End of discussion for me.)
 

Reply to  Bill Johnston
January 8, 2024 6:51 am

“The point is, that unless you actually know, by repeat measures, what the error distribution is, you would have nothing on which to base (or estimate) distribution statistics about the mean of a single point.”

It’s called a Type B uncertainty in the GUM.

“The mean (the point value) would remain the same anyway, and unless there was something radically wrong in the way you undertook the measurements, errors would likely be ‘Gaussian’”

Only if systematic bias is insignificant or can be identified and eliminated. That’s pretty damn difficult to do in field measurement devices for temperature where calibration drift is unknown and micro-climate changes with the seasons.

Couple this with the fact that most similar measurement devices tend to drift in the same direction, and it becomes a very questionable assumption that systematic uncertainty can be random, Gaussian, and cancelling across a number of different devices measuring different things.

It does seem to be a common assumption in climate science, however, that all measurement uncertainty is random, Gaussian and cancels out!

“Further, if you were expertly using fit-for-purpose tools, every time you measured the same thing, you would probably get values that all pretty closely clustered around the true mean”

Are different field measurement devices “fit for purpose” as you have defined? There is no guarantee that they will all give values closely clustered around a true mean. Your use of the term “true mean” shows that you have not accepted the concepts of the GUM.

JCGM 100:2008 “Although these two traditional concepts are valid as ideals, they focus on unknowable quantities: the “error” of the result of a measurement and the “true value” of the measurand (in contrast to its estimated value), respectively” (bolding mine, tpg)

Your assumption of the mean being a true value is based on the assumption that all measurement uncertainty is random, Gaussian, and cancels.

The “true mean” is only an estimated value, not a “true value”. The uncertainty is the dispersion of values around the mean that can be reasonably attributed to the mean. The SEM is *NOT* the measurement uncertainty of the mean. It simply doesn’t matter how precisely you calculate the mean; precision is not accuracy. We’ve had that discussion before.

“Simple: the mean will lie at the mid-point between the error bars, so you find each average and fit a line to those points.”

Accuracy is of primary concern in measurements, not how precisely you calculate the mean. Assuming the mean is 100% accurate simply can’t be justified.

“which is an opinion, for which you have no evidence.”

No, you don’t have evidence. Again from the GUM: “they focus on unknowable quantities: the “error” of the result of a measurement and the “true value” of the measurand” (bolding mine, tpg)

The true value is UNKNOWN. It is part of the GREAT UNKNOWN. Uncertainty is not error. You simply can’t just assume the mean is evidence of a true value. That just stems from an unstated assumption that all measurement error is random, Gaussian, and cancels. As Bevington covers in his book, it’s impossible to drive the SEM to zero no matter how large your sample, random fluctuations will prevent it. If you can’t drive the SEM to zero then you simply don’t know the “true value”.

Even if you have the total population associated with something, each measurement value will have an uncertainty. You can, of course, calculate the mean to a very precise value for the population. But that is done using only the stated values of the measurement data and ignores the uncertainties of each data element. That’s the purpose of showing the wide, black line for the distribution. The mean will also have a large black line as well because of the uncertainties in each element.

“Calculated as the mean of squared differences from the mean [sum{(y-mean)^2}/n], error bars will be equidistant around the mean by definition.”

Why? Asymmetric uncertainty intervals certainly exist in field measurement devices! I think what you are actually describing is the best fit metric to the stated values – assuming the stated values are 100% accurate. The best fit metric has nothing to do with the accuracy of the data points themselves and, therefore, nothing to do with the accuracy of your trend line.

“They are not, there is no such thing as the GREAT UNKNOWN”

Of course there is! GUM: “they focus on unknowable quantities: the “error” of the result of a measurement and the “true value” of the measurand” (bolding mine, tpg)

What do the words “unknowable quantities” mean to you other than the GREAT UNKNOWN?

Again, uncertainty is not error. Not all measurement uncertainty is random, Gaussian, and cancels.

Reply to  Tim Gorman
January 8, 2024 3:09 pm

Dear Tim,
 
This kind of bullish discussion where you just throw half-baked, ill-conceived concepts around, that are not grounded in first-hand knowledge or research of the subject area, is a waste of time and energy. You have not shown by example, that anything you have claimed is correct.

Having been a research officer since 1971, I am well versed in the practical application of sampling theory, and in the use of instruments for measuring things in biological sciences in lab and field, including weather monitoring. Also, in data analysis. However, I don’t claim to be a statistician and what I don’t know in that area, or what I am unsure about, I either undertake further research, or ask for advice.
 
I have read the mighty GUM, and you have yelled at me about it before. But have you looked at any of the other roughly 383,000,000 results (0.37 seconds) for “measurement uncertainty”, which you reckoned didn’t exist? Well, here is a challenge. You have weighed yourself before; let’s say the number 62 kg shows on the scale index. You calculate for me, and others who may still be interested, a GUM Type B uncertainty for that number.

[Hint: for calibrated scales that read to the nearest kg, the instrument uncertainty is ½ the “great unknown” last digit, which is actually known to be 1kg. The instrument error is thus +/- 500g, i.e., half the scale interval. So, compared to that precise value, what is your Type B uncertainty of the number 62 kg, and why would it be different?]
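
Taking the hint at face value, a GUM-style Type B evaluation of that reading is short. A minimal sketch, assuming the ±0.5 kg half-interval from the hint and a rectangular distribution (GUM 4.3.7):

```python
import math

reading = 62.0                 # kg shown on the scale index
a = 0.5                        # half of the 1 kg scale interval, per the hint
u = a / math.sqrt(3)           # Type B standard uncertainty, rectangular assumption
print(f"{reading:.1f} kg, u = {u:.2f} kg (~±{2 * u:.2f} kg at k=2)")   # u ≈ 0.29 kg
```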
 
I actually don’t think you will calculate a GUM Type-B uncertainty for 62 kg measured on bathroom scales.  
 
For all these other great unknown errors, biases, drifts and uncertainties, call them what you will: except in your imagination, how do you know they exist if you can’t measure them?
 
How did you ever manage to measure anything to a tolerance, if you think errors are infinite, or at least they can’t be controlled? Provide some insights into how you did it. What was your strategy?
 
You say without evidence: “Only if systematic bias is insignificant or can be identified and eliminated. That’s pretty damn difficult to do in field measurement devices for temperature where calibration drift is unknown and micro-climate changes with the seasons.” Absolute crap! The purpose of an instrument is to provide a standard against which “micro-climate changes” can be measured, otherwise there would be no measure of the extent of a change.  
 
Also, you say: “Couple this with the fact that most similar measurement devices tend to drift in the same direction it makes it a very questionable assumption that systematic uncertainty can be random, Gaussian, and cancels across a number of different devices measuring different things.” What experience do you have of this – what fact (whose fact) are you talking about?
 
How does this drift happen; and what about periodic re-calibration; and what difference does it make to an instrument reporting to the nearest 1.0 or 1.x degree-F or C, or the nearest 0.2mm (1-point) of rainfall. Systematic uncertainty by definition is NOT random, it’s systematic and if it is significant it can be detected as skew or cumulative drift. On the other hand, if variation is random (and there are statistical tests for that), it is not systematic.
 
You say in relation to samples: “There is no guarantee that they will all give values closely clustered around a true mean. Your use of the term “true mean” shows that you have not accepted the concepts of the GUM”.
 
I use the term “true mean” in exactly the same sense as used in the GUM, and therefore I thought you would have been familiar with it. Most of the rest of what you say is an unqualified ramble of mixed, unsubstantiated ideas. Obviously, a bandwidth value calculated as the mean of squared differences from the mean [sum{(y-mean)^2}/n], will be equidistant around the mean by definition. SEM describes variation around a mean calculated from samples, and will therefore not be zero. (You can also derive an SEM of an estimate, but I don’t think you are referring to that) However, SEM cannot be calculated (or assumed) for independent single-values. A single value holds no information about that value, unless it is derived from samples.
    
I previously said that if you are regressing sample means, and can show error-bars, the line of best fit minimises the squared differences with the means, which is why it is called the method of least squares. A competent researcher would also have some understanding about their data and therefore the most appropriate approach to analysing them. Most statistical packages provide tools to help in this regard.

There are many other approaches for dealing with data that don’t behave linearly, and how many times have you seen inappropriate least-squares trend-lines fitted to cycling data, or data that shows non-normal characteristics, on WUWT? How many people using such approaches analyse linearity, independence, normality, variance distribution and potential outliers in residuals?
 
Finally, what hope do we have of countering some of the outrageous claims made in the name of climate science if we strop around basing arguments on unsupported narratives, opinions and clichés such as “the GREAT UNKNOWN” and “uncertainty is not error. Not all measurement uncertainty is random, Gaussian, and cancels”?
 
As I doubt you will actually calculate a GUM Type-B uncertainty for 62 kg measured on bathroom scales, I won’t continue this thread.
 
All the best,
Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
January 9, 2024 7:58 am

Bill,

I am going to answer your entire rant with one question.

You are the chief engineer in charge of the o-ring seal on the booster rocket that failed on the Space Shuttle Challenger launch.

The investigating committee comes to you and asks how you evaluated the operational tolerance for the o-ring.

Are you going to answer:

  1. We calculated the mean out to the millionth digit and used the SEM as the uncertainty, or
  2. We calculated the possible values that could be reasonably attributed to the o-ring based on the spread of observed measurements.

Your answer will determine the track of the rest of your life.

Reply to  Tim Gorman
January 9, 2024 12:52 pm

Dear Tim,

I see you can’t “calculate a GUM Type-B uncertainty for 62 kg measured on bathroom scales“.

A scientist (but possibly not an engineer) would want to know if it truly was something to do with the seal, or some gunge, like a speck of dust, between it and the housing; if the bolts were tensioned OK; if the particular seal was somehow inferior (it could have had a fault); if it let go or did not let go at the right instant; if it had been sabotaged ….

However, after the thing blew up, the o-ring probably vaporized, or was damaged; so what hard evidence pointed to something to do with a zillionth decimal point?

I thought something came loose and hit something …. complex things space shuttles!

(Do you prefer a window seat in a Boeing Max these days? I understand they were designed by engineers, also they left a few spare nuts and bolts lying around.)

Cheers,

Bill

Reply to  TheFinalNail
January 5, 2024 9:28 am

USCRN shows no such trend if you pick “all months”.

Reply to  DMacKenzie
January 5, 2024 11:22 am

UAH USA48 and USCRN are a really good trend match over the period, apart from UAH reacting a bit more to the 2015/16 El Nino, as you would expect the atmosphere to do.

Reply to  TheFinalNail
January 5, 2024 11:17 am

“Both show statistically significant warming trends and their confidence intervals overlap. How can one be legitimate and the other not when they both show more or less the same thing?”

They both only show the same thing from the beginning of the satellite era in 1979. UAH does not cover anything prior to 1979. The bastardization of the temperature record before 1979 is what I’m complaining about, and what you are completely, and conveniently, ignoring.

The only satellite-era bastardization that has taken place is the bastardization of the temperature record by NASA GISS and NOAA after 1998, when they manipulated the numbers so they could claim year after year that it was the “hottest year ever!”, in their ongoing efforts to scare people about CO2. Whereas the UAH chart shows NO years between 1998 and 2016 that could be called the “hottest year ever!”, because none of them were hotter than 1998.

But the most significant complaint is that NASA GISS hides the fact that the Early Twentieth Century was just as warm as it is today. NASA GISS wants us all to believe that we are living in the hottest times in human history. And it’s all a BIG LIE and is shown to be a lie by the written, historical temperature records from all over the world. They refute this lie.

Climate Alarmists don’t want to talk about the written, historic temperature records. The reason is the written records refute the proposition that CO2 is going to overheat the world. There’s more CO2 in the air today, but it is no warmer today than it was in the recent past. Climate Alarmists don’t want people knowing this. Therefore, CO2 demonstrates it has had little influence on the Earth’s temperatures, when all is said and done.

Reply to  Tom Abbott
January 5, 2024 12:32 pm

+100

Reply to  Tom Abbott
January 5, 2024 5:01 pm

The Earth is only a few degrees warmer than at the end of the Little Ice Age.

In most places in the US, a person will die of hypothermia if they stay outdoors in the winter with little clothing and no heat for a long time.

Reply to  scvblwxq
January 6, 2024 6:07 am

Actually that’s true for nine months of the year!

Reply to  TheFinalNail
January 5, 2024 11:19 am

You admitted yesterday that UAH shows warming only at El Ninos, and that there was no evidence of any warming by human CO2 or other causation.

Reply to  Tom Abbott
January 5, 2024 9:25 am

You appear to be referring to the USCRN graph as bogus. Only the best surface station network on the planet….very large in extent, no stations located on airport tarmac or even close to habitation, top quality monitoring equipment of known calibration, truly data that stands tall against junk media statements….you are badly informed. And what it shows are random fluctuations around a barely changing mean….

Reply to  DMacKenzie
January 5, 2024 11:25 am

“you are badly informed.”

You misread what I wrote.

I referred to NASA GISS and UAH. I said nothing about USCRN. Your criticism does not apply.

I like USCRN. It puts the lie to the Alarmist claim that the present day is the hottest day in human history, at least in the United States, since that is the area covered by this data set.

Reply to  DMacKenzie
January 5, 2024 11:29 am

I can’t see where Tom mentioned USCRN.. he was talking about the global data sets.

USCRN has control over what they fabricate with ClimDiv in the US…

They cannot allow their fabricated data from the urban-affected surface sites to deviate far from USCRN; that would look very bad for them.

So yes, warming in the USA basically stopped once USCRN was installed 😉

I suspect that if decent systems were in place in other countries, they would also show basically no warming.

Reply to  bnice2000
January 5, 2024 12:48 pm

It is actually quite funny…

.. if you graph the difference between USCRN and ClimDiv, you can see that ClimDiv started a bit high, with a pretty constant zig-zag +/-.

They have gradually improved their data fudging so that they are now about the same… +/- a bit each way.
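
For anyone who wants to try the comparison described above, a hedged sketch of “graph the difference” with synthetic stand-ins (the series below are invented; the real exercise would use the NOAA USCRN and ClimDiv monthly downloads aligned on the same months):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
months = np.arange("2005-01", "2024-01", dtype="datetime64[M]")
uscrn = rng.normal(0.0, 0.5, months.size)                 # placeholder anomaly series
climdiv = uscrn + rng.normal(0.05, 0.10, months.size)     # placeholder offset + noise

plt.plot(months, climdiv - uscrn)
plt.axhline(0, color="grey", lw=0.5)
plt.ylabel("ClimDiv minus USCRN (deg C)")
plt.show()
```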

Mr.
Reply to  Tom Abbott
January 5, 2024 9:35 am

All “global average” anything are just numerical constructs derived from disparate, inconsistent, manipulated “records”.

That is the bottom line.

Reply to  Mr.
January 5, 2024 11:52 am

Even if the data has not been manipulated, IT IS NOT FIT FOR PURPOSE.

Combining NH and SH temperatures together with no regard to the variances of the data is scientific fraud. Even the anomaly variances are different between hemispheres. All you get is a multi-modal distribution of data where the average is useless for any purpose whatsoever. You can’t even tell where any changes in the average are coming from!
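
A small sketch of the mixing point (numbers invented): average two populations with different means and variances and the combined mean moves without revealing which population moved.

```python
import numpy as np

rng = np.random.default_rng(2)
nh = rng.normal(0.4, 0.6, 5000)   # hypothetical NH anomalies: warmer, more variable
sh = rng.normal(0.1, 0.3, 5000)   # hypothetical SH anomalies: cooler, steadier

combined = np.concatenate([nh, sh])
print(f"NH {nh.mean():+.2f}  SH {sh.mean():+.2f}  combined {combined.mean():+.2f}")
# A +0.1 shift in either hemisphere alone changes the combined mean by the
# same +0.05, so the global average cannot say where a change came from.
```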

Reply to  Tom Abbott
January 5, 2024 4:55 pm

The Earth is still in a 2+ million-year ice age, in a warmer but still cold interglacial period that alternates with very cold glacial periods.

Over 20% of the land is frozen and ice falls out of the sky as snow someplace every day.

About 4.6 million people die each year from cold-related causes compared to around 500,000 dying from heat-related causes.

Cold or cool air causes blood vessels to constrict, raising blood pressure, which causes more strokes and heart attacks in the cooler months.

Warming in an ice age is very good, not bad.

Reply to  scvblwxq
January 6, 2024 3:26 am

“Warming in an ice age is very good, not bad.”

I agree.

Reply to  TheFinalNail
January 5, 2024 11:06 am

Poor fungal.. always ready to have a moan and a whinge

With absolutely nothing to offer of his own.

A pathetic little worm.

Reply to  TheFinalNail
January 5, 2024 7:11 pm

“Someone should tell the WUWT Mods then . . . .”

I thought WUWT management included the global temperature data for reference. Its inclusion may not actually be an endorsement by them, but I don’t speak for them.

Reply to  Jim Masterson
January 6, 2024 12:17 am

Gotta remember…. fungal is totally paranoid about things like that!

January 5, 2024 5:33 am

“With its stout orange trunk and long, graceful needles, the tree looks like any other ponderosa pine growing on Mount Bigelow. But a sliver of its wood, taken amid Earth’s warmest year on record, shows that this tree has a story to tell — and a warning to offer… But then came 2023, the hottest year that humanity — and Bigelow 224 — had ever seen.”

Absurd. Only a severely demented person could believe that this tree is telling any story about anything. It not only tells stories – it gives warnings! I wish I were a cartoonist; I’d love to mock that in a cartoon. Maybe do it with AI, except I’m too old to learn such new tricks – or just lazy. I wish I could read the full story in the WP, but I’m not gonna pay for it.

Reply to  Joseph Zorzin
January 5, 2024 9:32 am

It seems reasonable that of the tens of thousands of mountain tops in the world, at least ONE could be found that is the warmest its TREEMOMETER has ever seen.

Reply to  DMacKenzie
January 5, 2024 1:55 pm

Funny that tree lines were so much higher than now during the MWP and before. 😉

If it was significantly warmer than it is now, trees could grow there again.

Trees would LUV it!

Reply to  bnice2000
January 6, 2024 3:33 am

Yes, tree growth blows up the Climate Alarmists’ “hottest year evah!” meme.

Going by history, and the growth of trees, today isn’t even close to being the hottest time ever. Today, there are glaciers sitting where trees used to grow.

There’s only one way that could happen: It was warmer in the past than it is now. And we can date those trees accurately.

Reply to  Joseph Zorzin
January 5, 2024 12:52 pm

Trees respond much more to atmospheric CO2 than temperature.

Anything above about 280ppm is a massive plus.

Given that Mann’s Hockey stick is based on tree rings, all it is, is an indication that CO2 levels were a growth constraint through the last 1000 or so years until humans rescued the planet by releasing sequestered carbon into the shorter-term carbon cycle.

Reply to  bnice2000
January 5, 2024 7:24 pm

If plants die, that’s the end of all life.