Michael E. Mann’s Forecast Fiasco

Well, the 2024 hurricane season has come to an end and we can now close out Michael E. Mann’s forecast—a prediction so spectacularly off-target it could make a dartboard blush. As we previously noted in our post “Michael E. Mann, the Black Knight,” he reminds us of the Monty Python character who loses every limb in battle yet stubbornly insists, “’Tis but a scratch!” This time, Mann’s sword of speculative forecasts landed on a projection of 33 named storms for the 2024 Atlantic hurricane season—“the highest count ever predicted,” as he proudly declared back in April.

The season has closed, and reality had other plans. Instead of the hurricane Armageddon Mann foresaw, we ended up with a grand total of 18 named storms—a far cry from the 33 he predicted. For perspective, 18 is only modestly above the historical average of about 14, well within normal year-to-year variability. And for Mann, whose forecast has been roundly criticized as one of the most inaccurate in recent memory, it stands as a monument to overconfidence.

Steve Milloy of JunkScience summed it up aptly by calling Mann’s prediction “the wrongest count ever predicted.” While that might sound harsh, it’s tough to argue with the numbers. Mann didn’t just miss the bullseye—he missed the entire dartboard and hit the pub wall.

Let’s be clear: there’s nothing inherently wrong with making predictions. But when those predictions are presented with the weight of academic authority and serve as fodder for climate alarmism, they deserve scrutiny. Mann’s forecast wasn’t some cautious, probabilistic estimate; it was a bold declaration of climate doom. And when reality came knocking, it left Mann’s claims in shambles. Yet, much like the Monty Python knight, Mann continues to stand in the wreckage of his prediction, defiantly insisting, “I’m invincible!”

This isn’t the first time Mann’s claims have faced challenges. His career includes the controversial “Hockey Stick” graph, which has been the subject of ongoing debate for decades. While Mann’s defenders argue that his methods were groundbreaking, his critics contend that they relied heavily on selective data and opaque statistical techniques. The 33-storm prediction seems to follow a similar pattern: overselling an extreme scenario to grab headlines, only for the facts to come up far less dramatic.

Now, Mann’s defenders might argue that a lower-than-expected storm count is itself evidence of climate unpredictability or variability. That’s the beauty of these forecasts—they’re often so malleable that no matter what happens, they can be spun to support the broader narrative of a climate crisis. If there had been 33 storms, Mann might have been hailed as a prophet. With 18 storms, he can pivot to discussing how unpredictability is proof of our dangerous climate future. It’s a win-win—for him, at least.

The real issue here isn’t just Mann’s blown forecast; it’s the broader impact of such exaggerated predictions. They feed into the narrative that extreme climate policies—like Net Zero mandates, carbon taxes, and bans on conventional energy—are urgent and necessary. But when those policies are based on flawed or overstated science, the costs fall on ordinary people. Energy prices spike, economic growth slows, and yet the climate models driving these policies continue to falter.

So what should we take from Mann’s hurricane misfire? First, that bold claims demand bold evidence—and a track record of accuracy to back them up. Second, that predictions are only as useful as their outcomes, and Mann’s hurricane forecast falls squarely into the “not useful” category. Finally, that science isn’t served by doubling down on failed predictions; it’s served by acknowledging uncertainty and revising approaches when the facts don’t align.

Mann’s 2024 hurricane forecast wasn’t the tempest he predicted—it was a storm in a teacup: exaggerated drama resolving into anticlimactic reality. Instead of ushering in a new era of catastrophic storms, the season fizzled into a near-average year that left Mann’s 33-storm prophecy looking like a footnote in the annals of overblown climate predictions. Perhaps next time, instead of swinging wildly at reality, Mann might consider grounding his forecasts in, well, reality itself. Because when your predictions consistently miss the mark, it’s time to set down the teacup and take a good, hard look at the kettle.




392 Comments
NotChickenLittle
December 1, 2024 2:05 pm

Hey, the only thing wrong is, we’ve just got to name more storms!

I’m sure that the up to 4 FEET of lake-effect snow falling now in places in the northeast US, will be blamed as a side-effect of climate change – Man-caused of course. It doesn’t matter that the climatistas predicted that children just won’t know what snow is…

Editor
Reply to  NotChickenLittle
December 1, 2024 2:46 pm

The “northeast US” is a big area and lake effect snow is very local. Sometimes a little makes it to New Hampshire. Heck, sometimes it doesn’t make it to Buffalo!

Here are CoCoRaHS reports for the last two mornings, I have a sister-in-law outside of Buffalo, who mentioned this AM they finally got into the good band.

 https://maps.cocorahs.org/?maptype=snowfall-depth&units=us&base=std&cp=BluYlwRed&datetype=custom&displayna=0&from=2024-11-30&to=2024-12-01&dc=0.9&key=dynamic&overlays=state,county&bbox=-79.45042256128336,42.34896183546686,-77.23393086206462,43.244760802332955

John Hultquist
Reply to  NotChickenLittle
December 1, 2024 3:32 pm

 I got a taste of Lake-effect snow in about 1960 or ’61. Several of us had been in Erie, PA for an event and left, heading south, after lunch. The rise off the old lake shore is about 800 feet, at that small elevation it was snowing bigly. About 20 miles south of the lake there was no snow, and the rest of the trip was routine. The snow today, Dec. 1st, appears to be similar.  

Gums
Reply to  John Hultquist
December 1, 2024 5:23 pm

Yeah, John, it’s a sure confirmation of Gorebull Warming.

Gums…

Editor
Reply to  John Hultquist
December 1, 2024 8:30 pm

John, Yeah, that’s pretty typical. In early lake effect events, it may be dry or rainy at the shore due to the warmish lake, but mixing and elevation make for a big change in just a few miles. I grew up in northeast Ohio and skied in western New York. Once Lake Erie freezes over, there’s a lot less snow but still a lot of clouds.

John Hultquist
Reply to  Ric Werme
December 2, 2024 11:26 am

If not known to you, search up BUFKIT.

Bryan A
Reply to  NotChickenLittle
December 1, 2024 7:42 pm

Next thing you know, they’ll be naming Wind Gusts.
There goes Gust Gus

Reply to  Bryan A
December 2, 2024 7:14 am

They can name them Mann Gusts. Everything Mann says is basically a Wind Gust

Reply to  stevekj
December 2, 2024 7:53 am

If you’re going to name a wind after Mann, they should be siroccos.
(Lots of hot air.)

Reply to  Gunga Din
December 2, 2024 11:19 pm

Mann mistral, ….it’s a cold dry wind….he hehe mother nature has a sense of humour

gezza1298
Reply to  NotChickenLittle
December 2, 2024 3:08 pm

Can’t believe how I survived as long as I have with just ordinary low pressures with no name sweeping the UK for decades.

Tom Halla
December 1, 2024 2:10 pm

If Michael Mann was acting as a scientist, not an advocate, he would have withdrawn MBH98, the Hockey Stick paper, after McIntyre and McKittrick discovered his algorithm produces hockey sticks from red noise.
Instead, he doubled down.

Reply to  Tom Halla
December 1, 2024 2:54 pm

Yes, scientists that follow the scientific method instead of the egotistic method do that. Einstein retracted his “blunder” papers. Didn’t affect his greatness at all.

Bill Powers
Reply to  doonman
December 2, 2024 8:05 am

Mannmade Hurricanes? Since it is illegal to cry out “fire” in a movie theater because it induces panic, it should for similar reasons be illegal to make hurricane predictions, because our public school and university set, who get their news from MSNBC and Media Matters, are being psychologically damaged and motivated to fits of property-damaging protests by the constant onslaught of anxiety these Mann Morons are creating with felonious wild-arse guesses.

KevinM
Reply to  Bill Powers
December 2, 2024 9:46 am

No, make no form of communication illegal.

Jeff Alberts
Reply to  Tom Halla
December 1, 2024 4:22 pm

It was all for “the cause”.

Scissor
Reply to  Tom Halla
December 1, 2024 5:00 pm

I’ve been misspelling “Mannbearpig.”

December 1, 2024 2:21 pm

Instead of the hurricane Armageddon Mann foresaw, we ended up with a grand total of 18 named storms—a far cry from the 33 he predicted. For perspective, that 18 is just barely above the historical average of 14.

But wait, did you count Light Breeze Reeves, Insignificant Zephyr Lammy, and Feeble Puff Starmer? These have all wreaked havoc across the UK.

Reply to  PariahDog
December 1, 2024 4:47 pm

Don’t forget all the Rayner slapping down everywhere.

Reply to  Archer
December 2, 2024 1:06 am

I thought she had put her slapping days behind her….

Crispin in Val Quentin
Reply to  PariahDog
December 2, 2024 2:20 pm

And what about the storm of protests, eh?

MarkW2
December 1, 2024 2:30 pm

What should actually be taken from this is that you might just as well throw a couple of dice; and I say that as a 100% serious point, not a throwaway remark. The result would be just as accurate…

I wonder how many climate scientists really understand what I’m saying here (not many, I suspect),

John Hultquist
Reply to  MarkW2
December 1, 2024 3:37 pm

might just as well throw a couple of dice
If “couple” is 6 or 7 that might work. Low of 6 and high of 42, the latter being the answer to everything.
I’d go with the average, plus or minus 7. (I like 7s)

Jeff Alberts
Reply to  John Hultquist
December 1, 2024 4:23 pm

That would be 6×9, not 6×7.

Jeff Alberts
Reply to  Jeff Alberts
December 1, 2024 9:28 pm

Why the downvotes?? Don’t you guys know the story?

Reply to  Jeff Alberts
December 2, 2024 1:06 am

No.

Reply to  Leo Smith
December 2, 2024 7:17 am

Hitch Hiker’s Guide to the Galaxy, man! Jeff is obviously a hoopier frood than you are 🙂

Reply to  Jeff Alberts
December 2, 2024 5:19 am

Brilliant allusion! So funny.

oeman50
Reply to  Jeff Alberts
December 2, 2024 5:25 am

“If 6 turn out be 9, I don’t mind.” Hendrix

Sparta Nova 4
Reply to  MarkW2
December 2, 2024 9:02 am

Natural variability? What a concept! We need a new UN agency to study this! Anyone got a spare $20T?

captainjtiberius
December 1, 2024 2:34 pm

The leftist major media will say Mann was 54.5% correct if they report it at all. Mann will double down and continue to predict excessive numbers. He knows his failures will never see the light and if by chance he gets one, he becomes the hero of Mann made climate change again.

Rich Davis
Reply to  captainjtiberius
December 1, 2024 4:24 pm

No, no, no silly denierz! Here’s the press release:

Vulnerable people everywhere give a sigh of relief! The horrendous hurricane season of 2024 has ended and not a moment too soon, with 4 more monster storms than average. It brought huge devastation through much of the eastern US. Nobel Prize winner Michael E Mann had predicted a higher than average count of named storms and once again has been proven prescient. He modestly commented “It wasn’t rocket science, with the constant barrage of disasters, even the evil Trump voters have to admit, Climate Change is here and it’s all our fault”.

All credible scientists now predict, we’re doomed!

# # #

Sparta Nova 4
Reply to  Rich Davis
December 2, 2024 9:03 am

I am glad you are on our side.

Crispin in Val Quentin
Reply to  captainjtiberius
December 2, 2024 2:28 pm

Good point. 33 ± 15 is close enough, right?

From another perspective, 18 ± 15 is the smallest uncertainty that will include both reality and Mann’s far-fetched imaginings.

All things considered, we’d be better off with a dart board with numbers from 5 to 25.

Editor
December 1, 2024 2:40 pm

I attended a recent https://see-sciencecenter.org/science-on-tap/ event in Manchester NH last month that focused on tropical storm predictions. I asked the speakers about Mann et al’s forecast, and got a reply that was semi-scientific and mentioned the warm GoM, etc. He concluded with “And perhaps he wanted to attract attention.” 🙂

BTW, we should call out Mann’s colleagues too. They’re probably laying low this week. From https://penntoday.upenn.edu/news/2024-tropical-cyclone-prediction :

The team, comprising Shannon Christiansen, a senior research coordinator in the Mann Group, and Michael Kozar, a former graduate researcher in the Mann Research Group, today released their prediction for the 2024 North Atlantic season, which spans from June 1 to Nov. 30. They forecast an unprecedented 33 named tropical cyclones, potentially ranging between 27 and 39.

They seem to deserve equal “credit” or maybe some sympathy for having to work with Mann.

Editor
Reply to  Ric Werme
December 1, 2024 4:13 pm

It occurred to me that it might be worth saving Mann’s prediction. Apparently there’s no research paper that backs up the for-public posts. https://web.sas.upenn.edu/mannresearchgroup/highlights/ has a link titled “2024 Prediction” that goes to https://web.sas.upenn.edu/mannresearchgroup/highlights/highlights-2024hurricane/ which has the following five paragraphs, a table of one line summaries of previous years, and four references.

The 2024 Atlantic Hurricane Season: University of Pennsylvania Forecast

University of Pennsylvania EES scientists Dr. Michael E. Mann and Shannon Christiansen, and Penn State ESSC alumnus Dr. Michael Kozar have released their seasonal prediction for the 2024 North Atlantic hurricane season, which officially starts on 1 June and runs through 30 November.

The prediction is for 33.1 +/- 5.8 total named tropical cyclones, which corresponds to a range between 27 and 39 storms, with a best estimate of 33 named storms. This prediction was made using the statistical model of Kozar et al. (2012, see PDF here). This statistical model builds upon the past work of Sabbatelli and Mann (2007, see PDF here) by considering a larger number of climate predictors and including corrections for the historical undercount of events (see footnotes).

The assumptions behind this forecast are (a) the persistence of current North Atlantic Main Development Region (MDR) sea surface temperature (SST) anomalies (+1.9°C in April 2024 from NOAA’s Coral Reef Watch) throughout the 2024 hurricane season, (b) development of a moderate La Nina (Niño3.4 anomaly of -0.5°C) conditions in the equatorial Pacific in late Boreal summer and fall 2024 (ENSO forecasts here; we used mid-April 2023), and (c) climatological mean conditions for the North Atlantic Oscillation in Fall/Winter 2023-2024.

If neutral ENSO conditions (Niño3.4 anomaly of 0.0°C) take shape later in 2024, then the prediction will be lower: 30.5 +/- 5.5 storms (range of 25 – 36 storms, with a best guess of 31).

Using an alternative model that uses “relative” MDR SST (MDR SST with the average tropical mean SST subtracted) in place of MDR SST yields a lower prediction (19.9 +/- 4.5 total named storms). This alternative model also includes positive ENSO conditions.
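Incidentally, the quoted intervals can be sanity-checked against the Poisson count model the page cites (Kozar et al. 2012): for a Poisson count with rate λ, one standard deviation is √λ, and each quoted +/- figure matches the square root of its rate. A minimal check, using only the numbers quoted above:

```python
import math

# The quoted +/- values track the square root of each predicted rate,
# as expected for a one-sigma interval on a Poisson count.
for rate, quoted in ((33.1, 5.8), (30.5, 5.5), (19.9, 4.5)):
    print(f"rate {rate}: sqrt = {math.sqrt(rate):.1f}, quoted +/- {quoted}")
```

This suggests the published ranges are purely the counting (sampling) spread of the model, not a full uncertainty budget.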

KevinM
Reply to  Ric Werme
December 2, 2024 9:53 am

33.1 +/- 5.8 
Accuracy has decimals. sigh

KevinM
Reply to  KevinM
December 2, 2024 9:55 am

(meaning interval was too narrow or confidence was too high) The same techniques used to destroy clear language are being turned on math.

Reply to  KevinM
December 2, 2024 10:50 am

Confidence intervals are not standard deviations and are not a metric for accuracy. Confidence intervals are meant to give an indication of how precisely you have located a mean. It tells you nothing about how accurate that mean actually is.

KevinM
Reply to  Tim Gorman
December 2, 2024 2:41 pm

Confidence Interval describes the range that the statistician is confident that a number will fall between. Just because software like MiniTab gives three decimal places does not mean you should use them.

Reply to  KevinM
December 2, 2024 3:34 pm

A confidence interval describes the confidence one has in a statistical value like the population mean and is based on sampling error. If you sample a population you can narrow the interval in which the population mean lies by increasing the size of the sample. With a large sample you can be more and more confident you have located the population mean.

from statology.org: “A confidence interval provides the range of values, calculated from the sample, in which we have confidence that the true population parameter lies.” (bolding mine, tpg)

The “true” population parameter, e.g. the mean, tells you nothing about the accuracy of that population parameter, only how precisely you have located it.

If your sample is the population itself your confidence level in the mean is 100%. You are absolutely sure you have located the population mean!

The uncertainty interval tells you the interval of values that the population mean might actually take on. You can have 100% confidence in the value of a mean while that mean is wildly inaccurate, e.g. your uncertainty interval can actually be larger than the value of the mean itself, 50 +/- 100, a relative uncertainty of 200%!
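The distinction drawn here can be illustrated with a short simulation (hypothetical population numbers, standard library only): as the sample grows, the spread of the data itself stays put while the confidence interval on the mean shrinks.

```python
import math
import random
import statistics

random.seed(42)

# Draw samples from a hypothetical population (mean 50, sd 10) and watch
# the 95% confidence interval on the mean shrink as n grows, while the
# spread of the data (the sample sd) stays roughly constant.
for n in (25, 400, 10_000):
    sample = [random.gauss(50, 10) for _ in range(n)]
    sd = statistics.stdev(sample)
    half_width = 1.96 * sd / math.sqrt(n)  # 95% CI half-width for the mean
    print(f"n={n:>6}: sd = {sd:5.2f}, CI half-width = {half_width:.3f}")
```

Note what this does and does not show: the narrowing interval says only that the mean of *this* population has been located precisely; it says nothing about whether the measurements themselves are accurate.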

Crispin in Val Quentin
Reply to  Tim Gorman
December 3, 2024 8:13 pm

TimG

“with a large sample you can be more and more confident you have located the population mean.”

I believe it is correct to say you have located the centre of the range of values that includes the population mean. Unfortunately people tend to think that increasing the sample size more accurately gives the value of the mean, not merely the centre of the range. The only way to increase the precision of the location of the mean value is to use more accurate data or instrumentation, but that doesn’t reduce the magnitude of the range. In short, for a given accuracy, the range remains the same, but the location of the middle of the range is better known with increased sample size: accuracy is not improved by a statistical increase in precision. This truism holds for anything measured.

Reply to  Crispin in Val Quentin
December 4, 2024 5:31 am

The larger the sample the smaller the interval in which the population mean lies will be. You aren’t really locating the center of the interval, you are determining the interval itself. What gets forgotten is that each member of the sample will be a value given as “stated value +/- uncertainty”. Thus the statistical description of the sample should be its mean *and* the propagated uncertainty of the members onto the mean. When you then combine those sample means into a data set you will still have a data set with members given as “stated value of the mean +/- uncertainty”. The standard deviation of the stated values gives the interval in which population mean should be but it must have the uncertainty interval added onto that standard deviation. Statistics and statisticians *always* forget to propagate that uncertainty throughout all the calculations.

It is called (a misnomer) the “standard uncertainty”. It is defined as the population standard deviation divided by the square root of the sample size. What it should actually be is the standard deviation of the sample means. The larger the sample size the smaller the standard deviation of the sample means will be, since the samples *should* be closer to the population mean – in the limit the sample size would *be* the population and the mean would be known with 100% confidence.

Be careful using the term “precision”. What you are actually describing is resolution and accuracy. Precision is the repeatability of a measurement. Resolution is how small of an interval you can measure. Accuracy is how close you are to the true value.

A high resolution meter can have just as much measurement uncertainty as a lower one if its accuracy is poor. What you really need is a *better* instrument: greater precision, higher resolution, and increased accuracy.

Reply to  Crispin in Val Quentin
December 4, 2024 5:56 am

Tim and I have tried to point this out many times. An average can not have more precision than what was measured. You may keep extra decimal places during interim calculations, but the final answer must have the same resolution as what was measured.

The same applies to sample means calculations, but climate science refuses to follow standard practice in science and engineering.

If all measured temps are in integer form, then the mean must be in integer form also. If you do sufficient measurements of the same thing, the standard error of the mean (SDOM) may have many decimal digits. The final result however should be something like:

25 ±0.01

That does not mean that the average is 25.01. It means that there is an interval of ±0.01 around 25 in which the mean may lie. It also shows that time and money have been wasted in trying to find an ever-decreasing SDOM interval that has no meaning.

Reply to  Jim Gorman
December 4, 2024 7:01 am

An average can not have more precision than what was measured.”

An average can not have more resolution than what was measured.

And the data element with the least number of decimal places (i.e. low resolution) determines the final resolution. 25.0001 averaged with 30 will have its value in the units digit, not in the ten-thousandths digit. (25.0001 + 30)/2 gives an average of 28, not 27.50005 or even 27.5.
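In code terms, the convention above looks like the following sketch (Python’s built-in `round`, shown as an illustration of the sig-fig convention, not a metrology standard):

```python
# Resolution convention: report the average at the resolution of the
# coarsest input. Here 30 is known only to the units digit, so the
# average is reported to the units digit as well.
values = [25.0001, 30]
raw_mean = sum(values) / len(values)   # 27.50005... as a raw float
reported = round(raw_mean)             # reported at integer resolution
print(raw_mean, reported)
```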

Reply to  Tim Gorman
December 4, 2024 7:09 am

Picky, picky!

Reply to  Jim Gorman
December 4, 2024 8:16 am

grin!

Editor
Reply to  Ric Werme
December 1, 2024 4:22 pm

I see that Colorado State has released their summary already, I’ll look at it more after dinner. See it at https://tropical.colostate.edu/Forecast/2024-11.pdf . It’s 45 pages – I will not quote it here! Their forecast was for 23 named storms, we had 18.

Loren Wilson
Reply to  Ric Werme
December 1, 2024 7:10 pm

The researchers at CSU should know that supplying an average value without a standard deviation is withholding information. Let’s put this table into context. Named storms per year are 14.4 ±5.5 or ±38%. So reporting that this is 125% of the average is really saying that this is completely normal. Hurricanes are 7.2 ±3.4 or ±47% per year, so 11 hurricanes is just outside of one standard deviation. Again, not anything to worry about. But what about ACE, you might ask? Average ACE is 123 ±65 so an ACE of 162 is well within one standard deviation. Another normal year according to the statistics. Mann’s prediction was 3.4 standard deviations away from the 30-year average and would have been scientifically interesting if it had come to pass.
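The arithmetic here checks out and is easy to reproduce, using the historical means and standard deviations quoted in the comment:

```python
def sigmas_from_mean(value, mean, sd):
    """How many historical standard deviations a season value sits from the mean."""
    return (value - mean) / sd

print(f"18 named storms vs 14.4 +/- 5.5: {sigmas_from_mean(18, 14.4, 5.5):.2f} sd")
print(f"11 hurricanes   vs  7.2 +/- 3.4: {sigmas_from_mean(11, 7.2, 3.4):.2f} sd")
print(f"ACE 162         vs  123 +/- 65 : {sigmas_from_mean(162, 123, 65):.2f} sd")
print(f"Mann's 33       vs 14.4 +/- 5.5: {sigmas_from_mean(33, 14.4, 5.5):.2f} sd")
```

The season’s actual numbers all land within about one standard deviation of the historical average; only the 33-storm forecast itself was the statistical outlier, at roughly 3.4 standard deviations.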

Editor
Reply to  Loren Wilson
December 1, 2024 8:38 pm

I take it you haven’t gotten to page 13 yet.

Reply to  Ric Werme
December 2, 2024 4:38 am

How does this table invalidate what LW said? “The researchers at CSU should know that supplying an average value without a standard deviation is withholding information”

Standard deviation tells you about the variance of the data. A confidence interval only specifies the precision of the value; it’s more of a standard deviation of the sample means than a measure of the accuracy of the calculated mean. The standard deviation tells you about the uncertainty of your mean; it is a measure of accuracy.

KevinM
Reply to  Tim Gorman
December 2, 2024 10:02 am

Making science “accessible” means dropping some science-y parts.

What stands out to me is that the forecast got worse after the date advanced 4 months from April to August. It would have improved if we were in decimal point territory.

Loren Wilson
Reply to  Ric Werme
December 2, 2024 5:33 pm

Those are the confidence intervals of their prediction. I am giving the uncertainty (as a standard deviation) of the historical data. This provides context when someone says, “This hurricane season was unusually busy.”

KevinM
Reply to  Ric Werme
December 2, 2024 9:52 am

“a senior research coordinator”

December 1, 2024 2:49 pm

Sadly, but typically, I don’t remember Mann even providing a confidence interval for his prediction. Variability of climate *is* a tacit admission of uncertainty and no “prediction” is believable that doesn’t identify the uncertainty of the prediction. But then, climate science today doesn’t recognize that uncertainty exists in *anything*, it all just cancels out.

Rich Davis
Reply to  Tim Gorman
December 1, 2024 4:30 pm

But Mikey doesn’t make predictions, he makes projections.

Prediction (noun) a thing predicted; a forecast.

Projection (noun) an estimate or forecast of a future situation or trend based on a study of present ones.

Don’t you see the difference?

Robert B
Reply to  Rich Davis
December 1, 2024 7:26 pm

Yep. Nothing wrong with your methodology of your projection if it was off. Now away, you stupid English kinighit.

Editor
Reply to  Tim Gorman
December 1, 2024 8:41 pm
Reply to  Ric Werme
December 2, 2024 4:30 am

“If neutral ENSO conditions (Niño3.4 anomaly of 0.0°C) take shape later in 2024, then the prediction will be lower: 30.5 +/- 5.5 storms (range of 25 – 36 storms, with a best guess of 31).”

This isn’t an interval based on the uncertainties of the variables involved. It’s a range for the output of the models used in determining the projection based on changing the inputs. It tells you nothing about the actual accuracy associated with the outputs.

From the study referenced in the paper: “These analyses employed Poisson regression, a tool that is appropriate for modeling a Poisson process with a rate of occurrence that is conditional on underlying state variables”

Neither paper includes a measurement uncertainty budget for anything. They assume that the state variables can be stated with 100% accuracy. Poisson distributions are based on “counts” that can be specified, e.g. 5 persons entered the queue line. For climate, the counts are not discrete, independent occurrences; each one is itself a result of non-linear, chaotic variables in the biosphere.

From the linked paper: “However, successful seasonal predictions of particular “flavors” of TCs in the Atlantic basin have remained elusive. In principle, the factors that govern different flavors of TCs may differ, and additional predictive skill, as well as insight, might arise from modeling them separately, rather than collectively”

If you read the paper, the statistical analysis is based on *counts* of TC (tropical cyclones) on an annual basis – i.e. the outputs of their model. No propagation of measurement uncertainty for the variables involved in determining the state of the climate resulting in each count.

Bastardi revised his prediction based on current measurements instead of adamantly clinging to outputs from models based on guesses of future “states”. Mann’s stated interval for his prediction should have been +/- 20 to account for the actual annual variance of the data. Just using his statement of 33 +/- 6 you get an almost 20% relative uncertainty. Using actual data variance would give a relative uncertainty of 20/30 ≈ 67%. A 20% relative uncertainty is atrocious. A 67% relative uncertainty is so bad that it makes the prediction a joke to begin with! Mann’s prediction should have been given no credence at all!
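The relative-uncertainty arithmetic in the comment is straightforward to reproduce (the ±20 figure is the commenter’s own estimate of actual annual variance, not Mann’s):

```python
def relative_uncertainty(best_estimate, half_width):
    """Half-width of a stated interval as a fraction of the best estimate."""
    return half_width / best_estimate

# Mann's stated interval: 33 +/- 6 named storms
print(f"stated:  {relative_uncertainty(33, 6):.0%}")
# Interval widened to +/- 20 per the comment's variance estimate
print(f"widened: {relative_uncertainty(30, 20):.0%}")
```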

December 1, 2024 3:01 pm

The problem is the MSM don’t get it

Reply to  MIke McHenry
December 1, 2024 3:08 pm

They’re idiots.

Jeff Alberts
Reply to  Joseph Zorzin
December 1, 2024 4:24 pm

No, they’re propagandists. They won’t admit failure.

Editor
Reply to  MIke McHenry
December 1, 2024 3:37 pm

Actually, the problem is that the media do get it: If it bleeds it leads.

December 1, 2024 3:04 pm

So now Mann will apologize for his failed prediction?

Rich Davis
Reply to  Joseph Zorzin
December 1, 2024 4:34 pm

No, no! His projection was directionally correct. And best of all a bunch of Trump voters got hurt!

Reply to  Rich Davis
December 1, 2024 7:54 pm

POS troll

Jeff Alberts
Reply to  Pat from Kerbob
December 1, 2024 9:31 pm

You need to get your sarcasm detector fixed.

Rich Davis
Reply to  Pat from Kerbob
December 2, 2024 4:26 am

Pat you missed my implied sarc tag.
All is forgiven!

December 1, 2024 3:07 pm

I read recently in the journal Science that a study attempting to reproduce 880 biomedical science papers, involving more than 1,500 scientists, failed to reproduce 75% of them. What does that say about academic climate science?

Nick Stokes
Reply to  MIke McHenry
December 1, 2024 4:15 pm

Nothing.

Jeff Alberts
Reply to  Nick Stokes
December 1, 2024 5:35 pm

Right, because government approved Climate Science is infallible.

Reply to  Nick Stokes
December 1, 2024 5:42 pm

Nick approves of UNREPLICATABLE SCIENCE.

Wonders never cease 😉

Sparta Nova 4
Reply to  bnice2000
December 2, 2024 9:12 am

But… wait for it…. replicatable BS.

Reply to  Nick Stokes
December 1, 2024 5:42 pm

Nothing

Bullshit. The majority of all scientific papers are unreproducible crap. This confirms it – again.

Reply to  Nick Stokes
December 1, 2024 11:47 pm

No, that’s your credibility.

Reply to  MIke McHenry
December 2, 2024 3:58 am

What it highlights is the need to know *all* of the variables involved in the experiments along with their measurement uncertainties.

Biological science is very much like climate science in that there are multiple, possibly non-linear complicating factors in every possible system. For instance, if you are trying to find the impact of a drug on mice you *must* have an exact DNA match among the test subjects for every single experiment in order to duplicate it. That’s almost an impossibility when different experimenters at different locations, with mice from different ancestries, are involved. The original experimenter must give detailed descriptions of each and every variable in the experiment – and many don’t. Then those variables must be matched by later experimenters or they must increase the measurement uncertainty intervals to account for non-matched variables.

In both biological and climate science, measurement uncertainty is almost always ignored. At best the uncertainty interval given is the standard deviation of the sample means or the standard deviation of the collected data. Neither is a proper statement of actual measurement uncertainty associated with experimental (or model) studies.

And the really sad thing is the surprise exhibited in both disciplines when results don’t match among experiments/models and don’t match real world results. The truth is that they probably *do* match if proper attention is applied to evaluating the uncertainties involved.

Nick Stokes
December 1, 2024 3:09 pm

Well, perhaps it’s time to remember WUWT’s own not very different prediction, echoing friend Joe Bastardi:

comment image

Reply to  Nick Stokes
December 1, 2024 3:18 pm

Hey Nick, “It is difficult to make predictions, especially about the future.”

Maybe nobody knows tomorrow’s climate (weather) .

The real and present danger of climate alarmism on the other hand…

Nick Stokes
Reply to  Charles Rotter
December 1, 2024 3:23 pm

“I reported other people’s forecasts”

Well, specifically Joe Bastardi’s. But I see no scoffing at him.

Editor
Reply to  Nick Stokes
December 1, 2024 4:02 pm

Yeah, pretty much everyone knows to derate Joe’s storm predictions.

Note that Joe’s prediction was made a year ago; Mann et al’s was in April and much closer to the start of the tropical storm season.

It may well be that Bastardi’s forecast influenced Mann. 🙂

Nick Stokes
Reply to  Ric Werme
December 1, 2024 4:11 pm

Weatherbell made the same forecast May 5 2024

WUWT repeated Joe without derating.

Reply to  Nick Stokes
December 1, 2024 5:08 pm

And changed the forecast in September.

Reply to  Nick Stokes
December 1, 2024 5:21 pm

On 3rd Sept, due to changing patterns, Joe forecast 15-20.

There were 18.

Pretty darn good forecast, wouldn’t you say !!

Jeff Alberts
Reply to  Nick Stokes
December 1, 2024 4:25 pm

Joe’s was much closer than Mann’s.

Nick Stokes
Reply to  Jeff Alberts
December 1, 2024 5:03 pm

Joe was 25-30. Mann was 27-39.

Reply to  Nick Stokes
December 1, 2024 5:22 pm

Joe downgraded to 15-20 on 3rd Sept.. (there were 18)

Mickey Mann doubled down

Jeff Alberts
Reply to  Nick Stokes
December 1, 2024 5:36 pm

Is that not closer?

Nick Stokes
Reply to  Jeff Alberts
December 1, 2024 6:28 pm

A little, but not close.

Reply to  Nick Stokes
December 1, 2024 6:34 pm

On 3rd September, Joe forecast 15-20.. There were 18..

PRETTY GOOD FORECAST !!

Poor old “always wrong” Mickey was still doubling down saying he was correct on 26th Sept.

What a cretin he further proved himself to be.

And you worship him.. proving you are also totally DAFT !!!

Loren Wilson
Reply to  Nick Stokes
December 1, 2024 7:15 pm

And that is the point. Even people not blinded by the CAGW religion can’t predict one hurricane season, yet you think we can predict the weather out 100 years. Clearly, the climate models are not fit for purpose yet you defend them.

Reply to  Loren Wilson
December 2, 2024 4:52 am

As you pointed out in another of your posts, the problem is not the prediction itself, it is the fact that the accuracy of the prediction is not given (i.e. the standard deviation). The accuracy of Mann’s prediction is poor, VERY poor. It should have been given little if any attention. Mann’s prediction was like the local weatherman saying “it’s going to rain tomorrow” without providing any indication of how likely the prediction is.

Nick Stokes
Reply to  Tim Gorman
December 2, 2024 12:43 pm

In fact Mann gave a range from 27 to 39.

Jeff Alberts
Reply to  Nick Stokes
December 1, 2024 9:21 pm

I guess it just goes to show. No one really knows. So let’s destroy civilization.

Rich Davis
Reply to  Jeff Alberts
December 2, 2024 5:24 pm

It’s the least we can do

Rich Davis
Reply to  Nick Stokes
December 2, 2024 5:23 pm

What if there had been 17.5 storms Nick? Would 15-20 have been ‘close’?

Reply to  Nick Stokes
December 1, 2024 6:40 pm

That is because he doesn’t generate a lot of propaganda bullcrap with it, while Dr. Mann, who was much further off, will never admit that he was waaay off the mark after pompously playing up how bad a very active year would be for us.

Nick Stokes
Reply to  Sunsettommy
December 1, 2024 6:51 pm

It wasn’t Mann who was headlining “Hurricane Season from Hell”!

Reply to  Nick Stokes
December 1, 2024 7:15 pm

HA HA HA HA HA, another stupid dodge. His prediction is well known, and so is his refusal to admit his forecast was really bad, you fool!

Nick Stokes
Reply to  Sunsettommy
December 2, 2024 5:29 pm

Where is WUWT admitting its “Hurricane Forecast from Hell” forecast was really bad?

Reply to  Nick Stokes
December 1, 2024 7:23 pm

Poor Nick,

… totally and deliberately ignoring the fact that Joe forecasted 15-20 on Sept 3rd,

while Mickey Mann-Mouse was still spruiking huge numbers on 26th September.

Makes you look incredibly stupid.

You are doing your best Biden impression, hey. !

Totally unaware that you are doing so.

sycomputing
Reply to  Nick Stokes
December 2, 2024 7:14 am

“Well, specifically Joe Bastardi’s. But I see no scoffing at him.”

Ok. I scoff at Joe Bastardi . . .

Now what?

Reply to  Nick Stokes
December 2, 2024 8:25 am

Sooo … you are defending Mann being so wrong by saying Joe was less wrong with his original prediction?
(Joe did lower his number in September.)

Nick Stokes
Reply to  Gunga Din
December 2, 2024 12:46 pm

I’m pointing out the irony of WUWT coming out with a “Hurricane Season from Hell” prediction, and then all this stuff bashing Mann for a very similar prediction. I don’t see anything from WUWT saying “We got it wrong”.

Rich Davis
Reply to  Nick Stokes
December 2, 2024 5:29 pm

Joe Bastardi is worthy of consideration and so his prediction was reported but he did not speak on behalf of WUWT, so I don’t see how it’s reasonable to say that WUWT got it wrong. WUWT accurately reported that both Bastardi and Mann got it wrong.

Reply to  Rich Davis
December 2, 2024 6:15 pm

Stokes is gaslighting, as usual.

Nick Stokes
Reply to  Rich Davis
December 2, 2024 7:00 pm

WUWT ran the headline:
Time to Pack a Bug-Out Bag, Hurricane Season from Hell Predicted
That is more than just reporting. Charles said:
“this year’s seasonal tropical hurricane forecasts, from multiple sources, are the most dire and frightening I’ve seen, perhaps ever made”

But as to getting it wrong, it seems to be all Mann, no WUWT.

Reply to  Nick Stokes
December 1, 2024 5:04 pm

Joe lowered the forecast to 15-20 with the September 3 update.

Joe does not double down and pretend, as does Mickey Mann.

He actually accepts that he initially got it wrong… something you should do for once in your life.

Meteorologist Bastardi: The whys of the hurricane season so far – Climate Depot

Reply to  Nick Stokes
December 1, 2024 5:19 pm

On 3rd Sept, Joe down-graded to 15-20 Named storms.

There were 18

Reply to  bnice2000
December 1, 2024 7:17 pm

Notice that he is suddenly silent after you destroyed him with that update link you posted.

LOL.

Reply to  Sunsettommy
December 1, 2024 7:27 pm

Poor old Nick, he’s like this little monkey. !!

Simon
Reply to  bnice2000
December 2, 2024 10:47 am

Says the man who thinks El Ninos have caused the warming of the last 100 years. Too funny.

Nick Stokes
Reply to  Sunsettommy
December 2, 2024 5:46 pm

“Notice that he is suddenly silent”

I don’t actually see bnice’s posts. Apart from being insulting, juvenile etc, the facts are never right. The link is dated October 3. So there is Joe with a “forecast” 15-20 when 13 storms have already happened.

Editor
December 1, 2024 3:34 pm

I’ve said it often enough before: The only purpose of the hurricane forecast is to get headlines. The number of actual hurricanes later in the season is irrelevant. By the time anyone has noticed that there were fewer hurricanes, the headlines will have done their dirty work, everyone will have moved on, and there will be new doom-laden forecasts for new future dates. It’s a dirty game, and Jonathan Swift recognised it three centuries ago: “Falsehood flies, and the Truth comes limping after it”.

Editor
Reply to  Mike Jonas
December 1, 2024 4:33 pm

So why did Colorado State issue a 45 page review of their forecast? I’m sure they expected no headlines to come from it.

Ages ago, after I got over my revulsion that people in Colorado knew enough about tropical systems to make seasonal forecasts, I found their post-mortems fascinating, especially after a blown forecast – you can learn a lot more from failures than from successes. Check out 1995’s, the first year after the AMO flipped. They forecasted 100-140% of average activity, we got 237%.

See it at https://tropical.colostate.edu/Forecast/Archived_Forecasts/1990s/1995-11.pdf – it’s only 27 pages.

Rich Davis
Reply to  Mike Jonas
December 1, 2024 4:42 pm

Exactly right. Mikey should have projected 40 named storms. He wasn’t greedy enough.

All that gets reported now is that there were 4 more storms than average, doom doom doom. Nobody will address any question of accuracy. The guilty fake news media will certainly keep a lid on any accountability.

cc
December 1, 2024 4:17 pm

He should take a hint from the late Harold Camping, who finally gave up predicting the end of the world after several attempts ended with the world not ending and Jesus not showing up on Harold’s schedule.
Doubtless, he’ll have an eternity to discuss with the Creator what the errors were. Not sure if M-M will have that same privilege.

Rich Davis
Reply to  cc
December 1, 2024 4:49 pm

Well Satan seems to be quite interested in Climastrology.

Brian0127
December 1, 2024 5:20 pm

This guy should be history: the person who ignored one of science’s basic tenets, comparing like with like, but who still went on to claim the rigour of science to add provenance to his hokum.
Who in their right mind would compare pre-industrial methodologies with the modern array of weather stations and satellites and claim sufficient accuracy to detect 1 °C over 140 years?
If the myth of climate change is a modern version of the Emperor’s New Clothes, then he is the Emperor.

Anthony Banton
Reply to  Brian0127
December 1, 2024 11:33 pm

You do realise that you are also saying that the MCA/LIA is inaccurately accredited?

Working backwards, why would you think that temperature measurements/proxies are accurate to within a few tenths of a degree then, when/*if* instrumental records now are not?

Reply to  Anthony Banton
December 1, 2024 11:53 pm

The reality of the LIA is based on vast amounts of historical data. Proxies merely confirm it.

Who believes proxies are accurate to less than a degree?

Anthony Banton
Reply to  Graemethecat
December 2, 2024 3:15 am

Historical data from the present when viewed in the future will also give “confirmation”.

We are talking of a few tenths of a degree.
And human memories are able to account for that?

It is true though that the odd severe winter or heatwave summer colours our memory of the past.
It does not however categorise it into a climate trend amounting to a few tenths of a degree.

Other things don’t equate either- take the Thames freezing.
That wouldn’t happen now, as it was the old London bridge that drastically slowed the river flow and allowed ice break-up to dam.

Reply to  Anthony Banton
December 2, 2024 7:33 am

How do you explain the cultivation of barley and rye by the Greenland Vikings on terrain which is now permafrost? Do you consider this evidence for the MWP or not?

Jeff Alberts
Reply to  Graemethecat
December 2, 2024 7:47 am

Not to mention modern treelines STILL below historical ones.

Rich Davis
Reply to  Jeff Alberts
December 2, 2024 5:41 pm

The Russians planted fake tree stumps while colluuuuuuuding with Trump.

Rich Davis
Reply to  Graemethecat
December 2, 2024 5:39 pm

We don’t call it MWP anymore; please keep up with the development of dogma.

It’s the MCA now. The medieval climate anomaly. Probably just 0.00000001° less cold…and…and… only regional. (No impact on the moon or Mars).

Reply to  Graemethecat
December 2, 2024 5:01 am

Who believes that current official records are accurate to less than a degree when they are recorded to the nearest unit digit in Fahrenheit? You get down to the hundredths and thousandths digit via averaging while ignoring significant-digit rules for physical science. If you followed the rules, average temps would be given to the units digit. A hundred-year temp difference would barely exceed a tick in the units digit.

Reply to  Tim Gorman
December 2, 2024 5:35 am

“Who believes that current official records are accurate to less than a degree when they are recorded to the nearest unit digit in Fahrenheit?”

If this is so, how can you say anything about climate? You’re seemingly always sure there’s no warming. How do you know that? Moreover, how are you so sure about various paleoclimatic phenomena? (LIA/MIA, warm periods, Holocene Climatic Optimum.) And please spare me your idiocy about “historical data” (like Thames freezing) because that’s just extremely coarse and unevenly sampled, high uncertainty measurement.

Reply to  nyolci
December 2, 2024 6:10 am

“If this is so, how can you say anything about climate?”

That *IS* the question! Climate is determined by more than just temperature and let alone by anomaly. Temperatures in Las Vegas and Miami can be very similar while the climates are quite different.

Hardiness zones are a far better indicator of climate and they don’t change much over even as long as a century.

“You’re seemingly always sure there’s no warming.”

Bullcrap! I am *always* sure that we don’t actually *know* whether there is warming on a global basis or not. That’s far different from saying there is no warming. The measurement uncertainty associated with the “global average temperature” is so large that there is no way to actually *know* what is happening. The globe could be cooling just as easily as it can be warming.

” Moreover, how are you so sure about various paleoclimatic phenomena? (LIA/MIA, warm periods, Holocene Climatic Optimum.)”

Because the paleoclimatic conditions have more indicators than just “temperature”.

“And please spare me your idiocy about “historical data” (like Thames freezing) because that’s just extremely coarse and unevenly sampled, high uncertainty measurement.”

ROFL!! In other words, don’t confuse me with facts! The *fact* that a river freezes over is *NOT* uncertain. The actual temperature that caused that has a large uncertainty but the *fact* that it happened has no uncertainty at all. It doesn’t matter if the temps that caused it to happen were -20C or -10C. And if the status of the river changed from not freezing to freezing over a long period of time that *does* indicate a climate change regardless of the absolute temperature measurement.

Reply to  Tim Gorman
December 2, 2024 7:28 am

Excellent rebuttal to the nonsense posted by Banton and nyolci.

Reply to  Graemethecat
December 2, 2024 9:00 am

Excellent rebuttal to the nonsense posted by Banton and nyolci.

Well, that idiot didn’t understand the question, and, more generally, didn’t understand the irony of the situation.

Reply to  nyolci
December 2, 2024 10:13 am

You seem to be the only one that thinks temperature determines climate. So tell us again who is the idiot.

Reply to  Tim Gorman
December 2, 2024 11:03 am

You seem to be the only one that thinks temperature determines climate.

Forget about climate for a moment. Just tell me how can we know anything about temperature. Fokk, you’re already trying to sneak out…

Reply to  nyolci
December 2, 2024 11:25 am

The uncertainty interval, ignored by climate science and your ilk, tells how well a measured quantity is known.

Reply to  nyolci
December 2, 2024 11:32 am

How can you know a temperature in the hundredths digit if all you know is the units digit?

Do you really expect us to believe you can tell how long a board is in thousandths of an inch using a ruler marked only in inch increments? Why do you suppose they make micrometers?

Jeff Alberts
Reply to  Tim Gorman
December 2, 2024 7:49 am

“Bullcrap! I am *always* sure that we don’t actually *know* whether there is warming on a global basis or not. That’s far different from saying there is no warming. The measurement uncertainty associated with the “global average temperature” is so large that there is no way to actually *know* what is happening. The globe could be cooling just as easily as it can be warming.”

Exactly! Every time I see the “hottest xxx ever” I think, where?? Certainly not anywhere I go.

Reply to  Jeff Alberts
December 2, 2024 9:03 am

we don’t actually *know* whether there is warming on a global basis or not.

What you deniers claim, when driven to the logical conclusion, means that we cannot know anything about temperature (I stop mentioning climate here not to mess up your denier brains). In other words, we can’t say that temperatures are the same, decreasing, increasing, whatever.

Reply to  nyolci
December 2, 2024 9:41 am

There is that stupid word “denier”…. BOOM you are quickly discredited.

Cheers.

Reply to  Sunsettommy
December 2, 2024 10:26 am

Yeppers. The equivalent of the NAZ1 rule.

Simon
Reply to  Sunsettommy
December 2, 2024 10:51 am

What word would you give to a group who continually “deny” reality. I think “denier” is very fair. It just cracks me up that you flowers get soooo upset about a word, then spend the rest of the time calling others names (alarmists, warmistas). If you are going to throw stones….. bit like those howling about Joe pardoning Hunter. They quickly forget the outrageous pardons Trump did in his time (and the ones coming soon). Can’t have it both ways.

Reply to  Simon
December 2, 2024 11:16 am

Who did Trump pardon that he had previously promised NOT to pardon? Also who did Trump give a blanket pardon to for any federal crime he may have committed within the past 10 years?

But I’m not surprised you can’t see the (D)ifference.

Simon
Reply to  Tony_G
December 2, 2024 3:20 pm

There’s a difference, but it’s so minor it’s kinda irrelevant. The whole pardon thing is so corrupt they should either get rid of it or place strict rules around it. For the record, I think Biden has made a mistake, but you won’t read that bit.

Reply to  Simon
December 2, 2024 3:52 pm

A blanket pardon for any crime committed for a ten year period is unprecedented, which is nothing “minor” and a difference that is far from “irrelevant”. Same with the fact that Joe maintained that he wouldn’t pardon Hunter. You’re comparing two totally different acts – that is not in the least bit “irrelevant”. But I’m not surprised you can’t remove your partisan glasses to see that.

Also, “for the record”, you did not previously say that you thought it was a mistake. but you wont read that bit – looks like I did, huh?

Simon
Reply to  Tony_G
December 2, 2024 4:58 pm

Partisan glasses? WTF? Oh please don’t make me laugh. Are you saying you agreed with Trump pardoning all of those guys who lied to the FBI and basically put the US in a dangerous position by covertly talking to the Russians? What the hell, Hunter basically lied about a gun. Bad, but hardly in the same league. A gun he never used. But here’s the thing: I say Joe was wrong to pardon his son, but you think Trump is just fine with all his shenanigans (pardoning, among others, Charles Kushner, a man guilty of tax fraud and blackmail). Tell me again who the partisan is?

Reply to  Simon
December 3, 2024 6:59 am

Simon, again you show your lack of reading comprehension. Whether I agree with Trump’s pardons or not is beside the point; the President has the right and power to pardon whoever he wants. Same goes for Biden. I have no problem with Joe pardoning Hunter’s crimes that he has been charged with, aside from the fact that he promised that he would not do so. (Despite this promise, any of us with half a brain knew that he was going to issue the pardon, and we were quite certain that he would do so after the election.)

The problem here, that you refuse to acknowledge, is that this is a blanket pardon for ANY crime that Hunter MIGHT HAVE COMMITTED during a ten-year period starting in 2014, whether he has been charged or not. Were any of Trump’s pardons blanket pardons like that?

Banton: “Trump has promised to pardon many who took part in 6th Jan. He won’t go back on that.” – ok, so he is going to keep his promise to issue pardons. Has he issued pardons to anyone he had promised NOT to pardon? Has he issued any 10 year blanket pardons for all crimes committed during that time?

“who would not do the same?” Yes, pretty much anyone in Joe’s position would do the same. It’s not in the least bit surprising. We all knew his promise to not do so was an empty promise. As you say, who would not do the same?

*I* am not trying to “have it both ways”. You guys are equating a blanket pardon to the normal pardons for specific crimes, and are unable to see the difference.

Anthony Banton
Reply to  Tony_G
December 2, 2024 9:38 pm

The difference is that Trump has promised to pardon many who took part in 6th Jan. He won’t go back on that.
Biden is merely thinking, he can’t defeat the inevitable and so acts first while he can for the person who is most important in his life.
Oh, and hypocrites – who would not do the same?
At last the Dems have realised that there is no point in holding the moral high ground with the MAGA lot, they simply laugh in your face and carry on.

Reply to  Anthony Banton
December 3, 2024 6:48 am

Democrats do not hold the moral high ground, they just pretend they do.

Joe Biden may have to pardon his brother and several other family members before it’s all over.

Trump pardoned people who had already served a jail sentence for their crimes. Joe Biden pardoned Hunter to prevent him from serving a jail sentence.

I think airing Joe Biden’s dirty laundry will be a good substitute for convicting him. He’s too senile to put in jail, but the American people need to know the truth about this corrupt, treasonous president.

I think the Republican Congress is going to be looking into all the dealings of the Biden Crime Family. Some of the family may be appearing before Congress in the near future to take the Fifth Amendment.

Reply to  Tom Abbott
December 3, 2024 7:06 am

take the Fifth Amendment

“nor shall be compelled in any criminal case to be a witness against himself” – the self-incrimination clause. Can someone who has been pardoned invoke the Fifth for crimes he has been pardoned for? Did this blanket pardon open the door to force Hunter to testify against others to ten years’ worth of crimes?

Reply to  Tony_G
December 4, 2024 4:09 am

“Can someone who has been pardoned invoke the Fifth for crimes he has been pardoned for?”

That’s a good question. I don’t know the answer.

Reply to  Tom Abbott
December 4, 2024 5:38 am

According to three different legal analysts on three different cable news channels you can no longer invoke the 5th for the crimes you have been pardoned for. If the testimony might implicate you in a crime you have not been pardoned for then you can. Since apparently Biden pardoned Hunter for *all* federal crimes from 2014 on, both known and unknown, it’s likely he can’t invoke the 5th on anything. Meaning he can be required to testify on crimes his other family members may have committed as part of a criminal enterprise which he was a part of. It’s why a bunch of legal analysts think he’s going to issue blanket pardons for all of his family members before he steps down.

Simon
Reply to  Tom Abbott
December 3, 2024 10:50 am

“but the American people need to know the truth about this corrupt, treasonous president.”
My words exactly… but I was talking about Trump.

“I think the Republican Congress is going to be looking into all the dealings of the Biden Crime Family.”
Already been done, Tom. They found nothing (despite telling the world it was coming…. Coming….. COMING!!!!!)

Reply to  Simon
December 4, 2024 4:10 am

I think it is still coming, Simon. Stay tuned.

Simon
Reply to  Tom Abbott
December 3, 2024 11:07 am

“I think the Republican Congress is going to be looking into all the dealings of the Biden Crime Family.”

Speaking of crime families, Tom. How did you honestly feel about Trump appointing Charles Kushner to the position of Ambassador to France? A man whom Trump pardoned, who was convicted of tax fraud, election fraud, and who hired a prostitute to frame his brother-in-law because he cooperated with the law?

Reply to  Simon
December 4, 2024 4:23 am

This is Trump’s statement about pardoning Kushner:

https://www.foxnews.com/politics/who-is-charles-kushner-trump-pardon

“Since completing his sentence in 2006, Mr. Kushner has been devoted to important philanthropic organizations and causes, such as Saint Barnabas Medical Center and United Cerebral Palsy,” the statement said.

“This record of reform and charity overshadows Mr. Kushner’s conviction and 2 year sentence for preparing false tax returns, witness retaliation, and making false statements to the FEC.”

end excerpt

I don’t think he should have been appointed, although it is pretty much a ceremonial position. I don’t think it is a good idea to appoint relatives to government positions. I recall a lot of people being uneasy with JFK appointing his brother, RFK as Attorney General.

And Trump doesn’t need the extra controversy this stirs up even though it is a “tempest in a teapot”.

One shouldn’t be represented in court by relatives, and one shouldn’t appoint relatives to government positions. Both generate suspicions in people’s minds.

Simon
Reply to  Tom Abbott
December 4, 2024 11:45 am

Agreed.
And while it is Christmas season I too must concede I’m disappointed that Biden buckled and pardoned Hunter. I just don’t get the whole “president can pardon anyone” thing. Causes a lot of public discontent.

Rich Davis
Reply to  Simon
December 2, 2024 5:54 pm

Simpleton, I embrace the moniker. Denier of Climastrology’s most sacred dogmas. Climate Heretic, c’est moi!

Simon
Reply to  Rich Davis
December 2, 2024 6:25 pm

Awesome

Anthony Banton
Reply to  Simon
December 2, 2024 9:32 pm

Can’t have it both ways.”

Sad thing Simon, is that in their mind they can. And they carry the ambiguity without the slightest awareness of the hypocrisy of it.
Or they do and they feel entitled to the doing.
Either way it is nauseating
Do as I say and not as I do is perfectly fine with them.

Simon
Reply to  Anthony Banton
December 2, 2024 9:58 pm

Amen….

Reply to  Simon
December 4, 2024 9:45 am

At times the pot calls the kettle black.

Hearing the pot and kettle commiserate, complain, and try to label others as blacker than them simply makes me smile at the willful hypocrisy.

Reply to  Sunsettommy
December 2, 2024 11:25 am

There is that stupid word “denier”…. BOOM you are quickly discredited.

Yeah, this is what you like to do 🙂 Characteristic solution 🙂

Reply to  nyolci
December 2, 2024 9:47 am

You don’t understand physical science at all. If I use a 3-digit voltmeter to measure my wall socket and it says 120 volts, there is no way for me to say what the value of the tenths digit is. No amount of averaging multiple readings will ever let me know what the tenths digit is. It will remain unknown and unknowable forever. If the voltage changes from 120.1 to 120.2 I’ll never know. No amount of averaging will tell me.

It’s no different with temperatures. If all I have are recorded temps in the units digit I can never know what the tenths digit is – unless I am a climate scientist, of course!
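
A toy sketch of the noiseless case (made-up “true” voltage; note that if random noise rides on the signal, dithering can sometimes recover sub-resolution information, but a steady reading gives you nothing):

```python
true_voltage = 120.4  # hypothetical true value, unknown to the meter

def meter_reading(v):
    """A 3-digit meter: rounds to the nearest whole volt, no noise."""
    return round(v)

# Average a thousand readings of the same steady voltage
readings = [meter_reading(true_voltage) for _ in range(1000)]
avg = sum(readings) / len(readings)
print(avg)  # 120.0 -- the tenths digit never appears, however many readings you average
```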

Reply to  nyolci
December 2, 2024 10:04 am

What you deniers claim, when driven to the logical conclusion, means that we cannot know anything about temperature

You have no idea about what a temperature is. An anomaly is not a temperature! It is a ΔT, a change. Show us references that have carefully analyzed the actual absolute temperature of the globe so the ΔT can be properly referenced. Without this knowledge, a scientific judgement can not be applied.

Rich Davis
Reply to  nyolci
December 2, 2024 5:50 pm

You are right nyolci, we deny the dogmas of Climastrology and refuse to venerate your saints—St. Beadyeye Mann, St. Grrrrrrreta, St. Algore the Dim and St Lurch Kerry-Heinz

Reply to  Tim Gorman
December 2, 2024 8:58 am

Temperatures in Las Vegas and Miami can be very similar while the climates are quite different.

Okay, the question is: how can you say anything about temperatures. Forget about climate. Temperatures in Las Vegas or Miami, what the hell do they mean here? I’m eagerly waiting to be entertained.

Reply to  nyolci
December 2, 2024 9:37 am

You *can’t* go by temperatures! That’s the whole point! Which you can’t, for some reason, seem to accept. Enthalpy is what you should be looking at and so should climate science.

Reply to  Tim Gorman
December 2, 2024 11:27 am

You *can’t* go by temperatures!

I don’t wanna go by anything. I’m just curious what “Temperatures in Las Vegas and Miami” means at all in your interpretation.

Reply to  nyolci
December 2, 2024 11:39 am

I’ll give you a hint. Where does mold grow better? Las Vegas or Miami?

Reply to  Tim Gorman
December 3, 2024 7:37 am

“Where does mold grow better? Las Vegas or Miami?”

No, this is not what I asked. What is the temperature in Las Vegas if you have 4 stations, all are showing 28C, uncertainty is +-0.5C for each instrument. What is the temperature in Las Vegas? What is the temperature in Nevada if the whole state has 100 stations, all are showing 28C, uncertainty is +-0.5C for each device? What is the temperature in the contiguous United States if there are 1000 stations, all are showing 28C, uncertainty is +-0.5C for each instrument? Can we say anything at all about the temp in the cUSA if we take your bs seriously? The average is obviously 28C but the uncertainty a la Gorman is going to be +-2C, 50C, 500C, resp. In other words we can’t say anything even about Nevada. -10C is a legitimate value, well within the 95% (or 68%, whatever) range. Not to mention the cUSA where absolute zero is just halfway the range.

Reply to  nyolci
December 3, 2024 11:24 am

No, this is not what I asked. What is the temperature in Las Vegas if you have 4 stations, all are showing 28C, uncertainty is +-0.5C for each instrument. What is the temperature in Las Vegas?

You have proposed a difficult problem with numerous pieces.

1. Define the random sample being used.

Tₗᵥ = {28±0.5, 28±0.5, 28±0.5, 28±0.5}

2. Determine the mean of the stated values.

μₗᵥ = 28°C

3. Determine the standard deviation of the stated values.

SDₗᵥ = 0

This means there is no reproducibility uncertainty, only repeatability uncertainty. If the temperatures had been different, the reproducibility uncertainty would be added to the measurement uncertainty. This is what was done in NIST TN 1900.

4. We will use fractional uncertainties for the calculation of uncertainty.

u꜀ / 28 = √[(0.5/28)²+(0.5/28)²+(0.5/28)²+(0.5/28)²]
u꜀ = 28√(0.00032 + 0.00032 + 0.00032 + 0.00032)
u꜀ = 28√0.00128
u꜀ = 28(0.0358)
u꜀ = ±1.0°C

At this point, because the SD is zero, the uncertainty calculation has been completed. If you examine NIST TN 1900, you will find that NIST has declared measurement uncertainty to be negligible, consequently only the reproducibility uncertainty was calculated.

The Tₗᵥ = 28 ±1.0°C.

What is the temperature in Nevada if the whole state has 100 stations, all are showing 28C, uncertainty is +-0.5C for each device?

Tₙₑᵥ = {28±0.5₁, …, 28±0.5₁₀₀}
μₙₑᵥ = 28
SDₙₑᵥ = 0

u꜀ = √{100(0.5)²} = 5

Tₙₑᵥ = 28 ±5°C

If you doubt this, find and show us a reference where the uncertainty in a random variable is calculated differently.
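
As a numerical cross-check of the root-sum-square arithmetic above (a sketch only; it reproduces the calculation, it does not settle which combination rule is appropriate):

```python
import math

def rss_uncertainty(u_per_station, n):
    """Root-sum-square combination of n equal per-station uncertainties."""
    return math.sqrt(n * u_per_station ** 2)

u = 0.5  # assumed per-instrument uncertainty, deg C
print(rss_uncertainty(u, 4))    # 1.0 -> the +-1.0 C for 4 Las Vegas stations
print(rss_uncertainty(u, 100))  # 5.0 -> the +-5 C for 100 Nevada stations
```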

Reply to  Jim Gorman
December 4, 2024 3:22 pm

SDₗᵥ = 0

This means there is no reproducible uncertainty, only repeatable uncertainty.

Wrong, and you apparently have no idea what you’re talking about. Uncertainty is in the measurements. It’s in all the values that happen to be 28C in this case. All these values are actually T_i + E_i where T_i is the “true value” (that exists but we don’t know) and E_i is the error. If we average them we have [sum(T_i)/n] + [sum(E_i)/n], where the two terms are the true average and the error of the average, resp. The E_i is actually a random variable, the T_i is not (this is the actual temperature whatever it happened to be at that moment). I really hope you understand why.

Reply to  nyolci
December 4, 2024 3:50 pm

Utter nonsense!

“All these values are actually T_i + E_i”

No! They are not! Have you *ever* bothered to read the GUM for meaning at all? It is an INTERNATIONAL standard!

“that exists but we don’t know”

How do you know E_i if you don’t know T_i?

How do you know T_i?

Once again, AN AVERAGE IS A QUOTIENT. When you have quotients the uncertainties that are added are the relative uncertainties of each factor.

You do *NOT* add (ΣE_i)/n!

The relative uncertainties are ΣE_i / E and ẟn/n where ẟn is the uncertainty of n which is zero. Since you don’t know T_i (which you just confirmed is the case) then you can’t know E_i and the ΣE_i is unknowable.

Your “true value +/- error” was abandoned internationally as early as the 1950s. There is a reason why, which you just stated: “T_i is the “true value” (that exists but we don’t know)”.

If you don’t know T_i then you don’t know E_i and the value of ΣE_i is unknowable.

It’s why the best practice today is the use of “stated value x +/- u_x”.

The stated value is your best estimate of the true value where u_x is the interval determined by the range of values reasonably attributable to the true value.

This concept doesn’t require knowing the true value. Yours does.

Reply to  Tim Gorman
December 5, 2024 1:40 am

How do you know E_i if you don’t know T_i?

Who has claimed we know E_i? I claimed it was a random variable, and what I presented above was a simple error model (here “model” means something different from the “model” in climate modelling, so you don’t have to cross yourself). What we know, from calibration, is the distribution of E_i, or at least a very good approximation. This is a pretty simple but good model where we can see most of the errors. E.g. if E(E_i) != 0 we have a certain systematic error. If the distribution of E_i depends on the magnitude of T_i, that’s another kind of systematic error. For most modern instruments, E_i is not dependent on T_i in a very large range, symmetric and zero centered, and the drift is negligible for a very long time. E_i is almost always Gaussian but this is not that important here, it can be anything.
What we are interested in is the distribution of SUM(E_i), where the E_i and E_j are pairwise independent. This latter condition simply means our measurements are independent, and that is certainly true.

Once again, AN AVERAGE IS A QUOTIENT.

Forget about the average. Apparently you either get confused very quickly or you do it on purpose. Just use the sum. That’s enough.

You do *NOT* add (ΣE_i)/n!

Yes, we do. For the sum we literally add them together if we use this simple model. I mean the so-called “outcomes”.

The relative uncertainties are u(ΣE_i)/ΣE_i and ẟn/n, where ẟn, the uncertainty of n, is zero.

We are getting into the territory of Cloud Cuckoo Land here. Why the fokk would the E (which is Sum(E_i)?) be in the denominator?

u_x is the interval determined by the range of values reasonably attributable to the true value.

Almost always u_x = 0.5*k*Stdev(E_i) where k is a small integer. This is why this model is so simple.

u꜀ / 28 = √[(0.5/28)²+(0.5/28)²+(0.5/28)²+(0.5/28)²]

I quoted this from your previous post. When you put the measured value in the denominator in a calculation like this, that should always be a red flag for multiple reasons. At that moment you have to know you’re doing something bad. I just mention one reason. What if the temperature is just 0C?

Reply to  nyolci
December 5, 2024 4:20 am

“What we know, from calibration, is the distribution of E_i, or at least a very good approximation.”

Malarky! No instrument retains lab calibration after installation in the field. Plus the microclimate of the installation also has an impact on the distribution of any random fluctuation. You don’t seem to understand that a random variable is *NOT* guaranteed to be Gaussian. Many thermometers have different measurement uncertainty for rising temperatures than for falling temperatures, especially LIG thermometers. It’s called “hysteresis”. Gravity retards the reading as the liquid column rises and accelerates it as the column falls. That generates an asymmetric measurement uncertainty distribution.

All you are doing is parroting the common climate science mantra that “all measurement uncertainty is random, Gaussian, and cancels”!

” Eg. if E(E_i) != 0 we have a certain systematic error.”

NO! You will find ΣE_i ≠ 0 if you have an asymmetric measurement uncertainty profile even if there is no systematic bias in the measuring instrument.

You do not seem to understand basic measurement protocol. The distribution of E_i will ONLY approach a Gaussian distribution if you are measuring the same measurand multiple times with the same instrument under the exact same environmental conditions.

That assumption simply does not apply to a network of temperature measuring stations with varying microclimates, different measuring devices, and single measurements.

Remember, for field temperature measurement, you get ONE SINGLE attempt at measuring the same measurand. You simply don’t get the ability to take multiple measurements of the same measurand since the environment changes from instant to instant. You can’t use single measurements of different measurands to generate a random variable for E_i.

And, in addition, Hubbard and Lin showed clear back in 2002 that you cannot develop an adjustment factor for multiple measuring stations. Any adjustment factor must be generated on a station-by-station basis because of varying microclimates among the stations. And generating the adjustment factor for a single station requires the use of a calibration lab at the station location, a requirement that is (as far as I know) never met in the field. Once that adjustment factor has been determined it will change over time. There is no field measurement station that retains calibration over time. Calibration drift is a fact of life.

This is why the use of “best estimate” +/- measurement uncertainty concept was developed! That uncertainty has to include the possibility of calibration drift as well as every other factor you can think of.

“For most modern instruments, E_i is not dependent on T_i in a very large range, symmetric and zero centered, and the drift is negligible for a very long time.”

Bullcrap! This may be true for a lab instrument maintained in a controlled environment. It is *NOT* true for a field instrument exposed to a varying microclimate. Consider that the paint on a station enclosure will change over time from exposure to UV, freezing temps, wind erosion, etc. All of that will cause calibration drift in the measurement station’s readings. Even PRT sensors require a calibration adjustment curve because their response is *not* a perfectly linear function. That means that E_i for the PRT sensor is *not* symmetric or zero centered.

I’ll repeat, all you are doing is repeating the climate science mantra that “all measurement uncertainty is random, Gaussian, and cancels”. I’ve heard it so many times that I can immediately recognize it. I knew it was garbage the first time I heard it and it is still garbage.

Reply to  nyolci
December 5, 2024 4:41 am

“Forget about the average. Apparently you either get confused very quickly or you do it on purpose. Just use the sum. That’s enough.”

In other words “forget that I don’t know what I am talking about”.

“Yes, we do. For the sum we literally add them together if we use this simple model. I mean the so called “outcomes”.”

You say “forget about the average” and then turn around and spout off about (ΣE_i)/n which IS the average!

“We are getting into the territory of Cloud Cuckoo Land here. Why the fokk would be the E (which is Sum(E_i)?) in the denominator?”

You are making a fool of yourself. Have you been drinking?

E_i(avg) = (ΣE_i)/n, a quotient

Take the functional relationship V = πR²H. This is a product (handled the exact same way as a quotient). Therefore you have to use relative uncertainty: u(V)/V = u(π)/π + u(R)/R + u(R)/R + u(H)/H.
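To put numbers on it, a quick Python sketch of the relative-uncertainty rule for V = πR²H (the R and H values and their uncertainties are hypothetical, and u(π) is taken as negligible):

```python
import math

# Relative-uncertainty rule for a product, applied to V = pi * R^2 * H.
# R, H and their uncertainties are hypothetical illustration values.
R, u_R = 2.0, 0.05     # radius, +/- 0.05 (hypothetical)
H, u_H = 10.0, 0.1     # height, +/- 0.1 (hypothetical)

V = math.pi * R ** 2 * H
rel_u_V = (u_R / R) + (u_R / R) + (u_H / H)   # two factors of R, one of H
u_V = rel_u_V * V                             # back to an absolute uncertainty
```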

You must do the same treatment for E_i.

u(E_i avg)/E_i avg = u(ΣE_i)/ΣE_i + u(n)/n

You want to make the assumption that u(ΣE_i) is 0 (zero). It isn’t. The average of E_i is the mean of a distribution. That implies that there is a standard deviation of that distribution as well. That standard deviation is the uncertainty of ΣE_i.

I simply have never understood why those supporting climate science as it stands today ALWAYS, *ALWAYS*, ignore the fact that a distribution with a mean also has a standard deviation. It stems from their need to justify their “averages” as being 100% accurate. Thus they assume that all measurement uncertainty is random, Gaussian, and cancels.

And, for the third time, that is exactly what you are doing! You even speak of Stdev(E_i) and then ignore the fact that the term is the definition of the uncertainty of E_i! It’s not even apparent that you understand what the term “k” actually is. Do the words “coverage factor” or “Student-T” mean anything to you at all?

” What if the temperature is just 0C”

The use of Celsius is contrary to all rules of thermodynamics. You *must* use Kelvin. Therefore you won’t find a 0 (zero) value of temperature for the atmospheric temperature. If you did it would mean that Earth has become a frozen ball from top to bottom.

You betray your ignorance of physical science with everything you try to assert. Why do you persist in such a masochistic pursuit?

Reply to  Tim Gorman
December 5, 2024 5:09 am

In other words “forget that I don’t know what I am talking about”.

No. I just wanted to have the simplest example, with the least amount of confusion to you.

Therefore you have to use relative uncertainty.

You don’t have to use the relative uncertainty for products. This is not a “law of nature”. In your example, the measurement of R or H has an uncertainty in a very broad range that is independent of the magnitude. It means relative uncertainty is less meaningful here. But for temperature this is actually even more puzzling, because you then proceed to simply use the Celsius value, which can legitimately be zero or negative, giving meaningless values.

You want to make the assumption that u(ΣE_i) is 0 (zero).

No, and we never have said that. Remember the square root law.

The use of Celsius is contrary to all rules of thermodynamics.

Jim just did that 🙂

It’s not even apparent that you understand what the term “k” actually is.

Yeah, you like to think that 🙂

Reply to  nyolci
December 5, 2024 6:15 am

“You don’t have to use the relative uncertainty for products”

Of course you do! John Taylor covers this in detail in his tome on uncertainty and how it is derived.

“In your example, the measurement of R or H has an uncertainty in a very broad range that is independent of the magnitude. It means relative uncertainty is less meaningful here.”

Why do you keep on making such wrong assertions? R^2 and H have different dimensions which means their uncertainties have different dimensions. It makes no physical sense to add quantities that have different dimensions! The use of relative uncertainty converts the uncertainties to a dimensionless percentage which you *can* add!

“But for temperature this is actually even more puzzling because you then proceed to simply use the Celsius value that can be legitimately zero or negative giving meaningless values.”

As I have already pointed out, this is just one more mark *against* climate science. Anything to do with temperature should be done using the Kelvin scale! Thus you will never have zero or negative temperatures!

Reply to  nyolci
December 2, 2024 12:41 pm

I’m just curious what “Temperatures in Las Vegas and Miami” means at all in your interpretation.

Latent heat is the answer. It describes energy that is not sensible. At which location does temperature best indicate the energy in the atmosphere? When you can answer that, you will have learned something about thermodynamics.

Anthony Banton
Reply to  Tim Gorman
December 2, 2024 9:02 am

“The measurement uncertainty associated with the “global average temperature” is so large that there is no way to actually *know* what is happening. The globe could be cooling just as easily as it can be warming.”

That is patently wrong.
Else the monthly averages ascertained by the likes of UAH, HadCRUT, GISS, NOAA, Berkeley, RSS, and JMA would vary greatly and have no relation to previous months.
It can plainly be seen that they are in agreement with each other and with previous months.
The effects of El Niño can clearly be seen in them, for instance: just a boost of a few tenths of a degree on the GMST.
That agreement would not show up if your putative “uncertainty” were the case.

You keep banging on about uncertainty as though it is an impenetrable barrier to knowing stuff.
There is a clear linear trend amongst the NV on display, and that alone shows your search for 100% certainty is an ideologically instigated objection that has no real objective reason.
Other than denial.
Why are the GMST series consistent within themselves, and why do they not display haphazard movement?

I look forward to your descent into the rabbit-hole that only you can navigate.

Reply to  Anthony Banton
December 2, 2024 9:58 am

“Else the monthly averages ascertained by the likes of UAH, HadCRUT, GISS, NOAA, Berkeley, RSS, JMA would vary greatly and have no relation to previous months.”

Malarky!

Averaging cannot increase resolution – unless you are a climate scientist. Temps recorded in the units digit cannot identify the value of the tenths digit. You simply do not understand the basics of metrology. Accuracy, precision, and resolution are different things. Precision (getting the same measurement each time) does NOT guarantee the accuracy of the measurement. Nor does it make the measurement uncertainty any less.

Nor does measurement uncertainty cause “haphazard” measurements. A thermometer with a systematic measurement uncertainty of 1 degree will give consistent results, they just won’t be accurate!

Reply to  Anthony Banton
December 2, 2024 10:34 am

“You keep banging on about uncertainty as though it is an impenetrable barrier to knowing stuff”

That is *exactly* what measurement uncertainty is meant to convey. Right along with significant-figure rules.

Climate science routinely throws away measurement uncertainty using the meme that “all measurement uncertainty is random and Gaussian and thus it all cancels out”, leaving the stated values to be treated as 100% accurate. Trend lines developed from these assumed 100% accurate values are phantoms at best. There is no way to actually tell if they bear any relationship to reality or if they are correct or not. Best-fit metrics only tell you how closely the trend line fits these assumed 100% accurate values.

Reply to  Tim Gorman
December 2, 2024 12:05 pm

We’ve been over this literally dozens of times on WUWT, but it is now abundantly clear that Simon, Banton, nyolci, Stokes et al are unable or unwilling to understand the fundamental concepts of experimental error and metrology. Without this understanding, it would simply be impossible to manufacture components for car engines, for example. These clowns would measure valve clearances and the like with wooden rulers, then average them to get the “correct” value.

Anthony Banton
Reply to  Graemethecat
December 2, 2024 9:48 pm

Try living in reality.
Of course there are experimental errors.
They cannot be eliminated, only minimised.
It does not, however, make the world unknowable.
Ever noticed how far the world has come using science?
Or is it just the things that you find objectionable?
Like, say, the fact that a global problem needs the world to act together globally to solve it.
Yet this is somehow deemed to be a socialist scam. (?).
And we can’t know if it’s happening anyway.
Beggars belief.
And as I keep saying the ultimate rabbit-hole on here.

Reply to  Anthony Banton
December 2, 2024 11:51 pm

You are the ones who think it is possible to determine temperatures to 3 decimal places with instruments of 0.1C resolution. You are the ones who think anomalies can simply be added.

The Gormans have tried educating you on how experimental error and instrumental resolution work, and how anomalies are not temperatures, but clearly it hasn’t got through.

Reply to  Anthony Banton
December 3, 2024 4:40 am

Ever noticed how far the world has come using science?

Hate to burst your bubble, but science has come a long way in measurement techniques and measurement capability.

With these advances came the realization that ALL measurements are only ESTIMATES. That is where current metrology originated and why the GUM is GLOBALLY accepted as the correct way to express both stated values and the uncertainty in those values.

Reply to  Jim Gorman
December 3, 2024 5:53 am

Don’t let Bill J. hear you say this!

Reply to  Anthony Banton
December 3, 2024 5:52 am

Measurement uncertainty is not error!

Reply to  nyolci
December 2, 2024 8:47 am

And please spare me your idiocy about “historical data” (like Thames freezing) because that’s just extremely coarse and unevenly sampled, high uncertainty measurement.

Historical data is coarse. That is the whole point. The uncertainty involved is large. However historical data does give an indication of what the absolute temperatures were. Things like tree lines or artifacts being uncovered from receding glaciers give a glimpse of what the absolute temperatures were at the time.

Anomalies cannot provide an absolute temperature estimate that is better than historical data. Absolute temperatures are what determine climate, not anomalies.

Reply to  Jim Gorman
December 2, 2024 9:05 am

However historical data does give an indication of what the absolute temperatures were.

  1. Nowadays we have data of much higher quality. Historical data is just a side note, a small complement here.
  2. How about the Minoan warm period? We don’t have any historical data from that.

Anomalies can not provide an absolute temperature estimate that is better than historical data.

I think you should at least try to understand these things first before you “pontificate” about them.

Reply to  nyolci
December 2, 2024 10:23 am

Even the best measurement devices today have measurement uncertainty in the 0.3C to 0.5C range. Since the temps are rounded to the nearest units digit even that accuracy winds up being at least +/- 0.5C for everything. That isn’t much better than measurements from the 19th century.

Reply to  Tim Gorman
December 2, 2024 11:31 am

Even the best measurement devices today have measurement uncertainty in the 0.3C to 0.5C range.

Then what can you say about the temperature in Las Vegas? Because you claim that (contrary to science) uncertainties add up when averaging. It means that if Las Vegas has 4 stations, then the uncertainty of the average temp will be like +-2C. For Nevada, that will be like +-20C. In other words, we can’t say anything about the temperature in Nevada. Not to mention the contiguous United States. Nothing. Apparently the same applies to any other type of measurement. Is this what you say?

Reply to  nyolci
December 2, 2024 11:58 am

So what don’t you understand about this?

  1. temperature is an extensive property. You can’t average extensive properties. It’s why you get values of uncertainty you don’t believe.
  2. who says the temps across Nevada don’t vary by 20C? If you average Pikes Peak temps with Colorado Springs temps, what is the variance you will see? Variance is a direct metric for uncertainty.
  3. an average is a quotient. For quotients you use RELATIVE uncertainties. You simply do not understand metrology at all.

If you have 100 2″ x 4″ boards, each with an uncertainty of ±1″, how long will they be when laid end to end? The relative uncertainty in that total length becomes the relative uncertainty of the average value as well.

The relative uncertainty will grow every time you add another element with uncertainty.
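A quick Python sketch of the board example (the 48″ board length is assumed for illustration, since only the 2″ x 4″ cross-section is given above, and both common treatments of the total uncertainty are shown):

```python
import math

# 100 boards laid end to end, each with u = +/-1 inch on its length.
# Board length assumed 48 in (hypothetical; not stated in the comment).
n, length, u = 100, 48.0, 1.0
total = n * length                  # 4800 in

u_direct = n * u                    # worst-case straight addition: 100 in
u_quadrature = math.sqrt(n) * u     # independent errors in quadrature: 10 in

rel_u_total = u_quadrature / total  # relative uncertainty of the total length
rel_u_avg = rel_u_total             # dividing by the exact count n leaves it unchanged
```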

Reply to  Tim Gorman
December 2, 2024 3:07 pm

Variance is a direct metric for uncertainty.

Spatial and temporal variance of a quantity is not measurement uncertainty, you idiot.

Reply to  nyolci
December 2, 2024 4:10 pm

HAHAHAHAHAHAHAHAHAHAAH

Keep digging, it is amusing.

Reply to  nyolci
December 2, 2024 4:18 pm

Total and utter bullshit. The spatial variance between the summit of Pikes Peak and Colorado Springs generates conditions that cause a wide variance in temperature. Thus averaging those temperatures together creates a *very* uncertain value for the average!

In a distribution, the wider the variance in the data set, the less pronounced the “hump” around the mean becomes. That implies that the values close to the mean become nearly as likely as the mean itself to be the “true” mean. Distributions with small variances, at least for physical attributes, typically have a large spike around the mean, so the distinction between the mean and the values close to it becomes more pronounced.

This applies to data from spatial variances as well as temporal variances.

Just how do you get a value for “temporal variance” if you haven’t MEASURED the time interval?

You are as lost in statistical world as bdgwx and bellman are. You have no idea how to relate statistical descriptors to the real, physical world we live in. You keep wanting to stray off into metaphysical never-never land – e.g. asserting that temporal variances aren’t derived from measurement!

Reply to  nyolci
December 2, 2024 5:14 pm

Spatial and temporal variance of a quantity is not measurement uncertainty, you idiot.

Every mean has a distribution associated with it. If you believe in statistics, then that distribution will have both mean and a VARIANCE.

If you portray an average of measured temperatures as another measurement, then the variance in the average is part of the uncertainty.

Look at NIST TN 1900. Although I don’t personally agree with the result, it is a good learning tool.

TN 1900 calculates measurement uncertainty for the monthly average temperature. It uses the variance as the starting point. That measurement uncertainty must be propagated through any other calculations using that MEASUREMENT.
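A minimal Python sketch of the TN 1900-style calculation (the daily readings below are hypothetical, not NIST’s data):

```python
import math
import statistics

# TN 1900 Example 2 style: treat the daily readings as a sample and
# build the uncertainty of the monthly mean from the sample variance.
readings = [25.1, 26.3, 24.8, 25.9, 26.0, 25.4]   # hypothetical daily values

mean = statistics.mean(readings)
s = statistics.stdev(readings)            # sample standard deviation
u_mean = s / math.sqrt(len(readings))     # standard uncertainty of the mean
# An expanded uncertainty multiplies u_mean by a Student-t coverage
# factor with len(readings) - 1 degrees of freedom.
```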

Why don’t you show us a resource that supports your assertions instead of relying on ad hominems to make your argument? Maybe, just maybe you can’t find any?

Reply to  Jim Gorman
December 3, 2024 5:09 am

Every mean has a distribution associated with it. If you believe in statistics, then that distribution will have both mean and a VARIANCE.

We are talking about measurement uncertainty, you cretin. That has nothing to do with the fact that temperatures are different at different times and different places. It’s so hard to debate people who confuse Austria with Australia.

Reply to  nyolci
December 3, 2024 5:55 am

Keep digging, the hole you are in isn’t deep enough yet.

Reply to  Tim Gorman
December 2, 2024 11:57 pm

Correction: temperature is an intensive property, so independent of the amount of matter. Enthalpy is an extensive property, and can indeed be averaged. Banton, nyolci, Stokes and Simon fail to understand the distinction.

Reply to  Graemethecat
December 3, 2024 4:08 am

I agree with your correction! My fingers didn’t type what I was thinking I guess!

Reply to  nyolci
December 2, 2024 2:38 pm

What is the standard deviation of the four LV stations? You forgot this little detail.

Reply to  karlomonte
December 3, 2024 5:14 am

What is the standard deviation of the four LV stations?

Oops, you are getting closer… The “standard deviation” (or some similar value) is stated in the measurement device’s characteristic sheet. Here it is assumed that it’s known by every party to the conversation. You can assume a sensible value or just denote it with, say, v. You know what? Let’s say v = 0.5C. Keep going, you’re getting closer! Now you have the necessary information to tell me what the “standard deviation” of the temperature of Las Vegas is. I mean the “real” one, not the one in the Gorman World of Delusions ™.

Reply to  nyolci
December 3, 2024 5:57 am

The “standard deviation” (or some similar value) is stated in the measurement device’s characteristic sheet.

BZZZZZZT.

Wrong.

You know nothing about metrology.

Reply to  karlomonte
December 3, 2024 12:02 pm

He wouldn’t know a measurement uncertainty budget if it bit him on the butt!

Reply to  Tim Gorman
December 3, 2024 2:26 pm

Glaringly and blindingly obvious that he doesn’t know.

Reply to  nyolci
December 3, 2024 12:23 pm

Now you have the necessary information to tell me what the “standard deviation” of the temperature of Las Vegas is

You tell us and show your calculations. You have no bona fides to be the all knowing professor concerning metrology. Declaring something wrong requires showing the correct calculation, so have at it and show us your ability in determining uncertainty along with references to your sources.

Reply to  Jim Gorman
December 5, 2024 1:45 am

You tell us and show your calculations

In this very example it is just v/2. The square root law. So the average has a value of 28C and the error-like variable is half of what we have for each individual instrument. In other words we have a very good measurement for the average of four stations. For the contiguous US you have a very small error, around one thirtieth of the original.

Reply to  nyolci
December 5, 2024 4:43 am

“For the contiguous US you have a very small error, around one thirtieth of the original.”

You do *NOT* divide each term by 2. Again, you have no idea of how to use relative uncertainty when a functional relationship is a quotient or product.

Reply to  Tim Gorman
December 5, 2024 6:18 am

You do *NOT* divide each term by 2.

2 is sqrt(4) in this example. N=4 here. N=1000 for the cUSA, sqrt(1000) is approx 31.6.
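In Python, the square root law for these two cases (v = 0.5C, per the values used above):

```python
import math

# The "square root law" as stated above: N independent stations, each
# with standard uncertainty v, give a mean with uncertainty v / sqrt(N).
v = 0.5                        # per-station uncertainty from the thread
u_4 = v / math.sqrt(4)         # four stations: v/2 = 0.25
u_1000 = v / math.sqrt(1000)   # ~v/31.6, the contiguous-US figure above
```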

Reply to  nyolci
December 5, 2024 8:16 am

You are calculating the standard deviation of the sample means. That is sampling error. It is NOT measurement uncertainty. You are *still* exhibiting exactly zero knowledge of metrology!

Reply to  Tim Gorman
December 6, 2024 1:39 am

You are calculating the standard deviation of the sample means

No, and I can’t understand how you misunderstood this.

Reply to  nyolci
December 6, 2024 5:53 am

standard deviation of the sample means = SD/sqrt(n)

The metric “standard deviation of the sample means” is *SAMPLING ERROR*. It is *NOT* measurement uncertainty.

If your functional relationship is A = Σx/n (where A is the average) then the uncertainty of A becomes

u(A)/A = u(Σx)/Σx + u(n)/n, where u(Σx) is the propagated measurement uncertainty of the data points.

u(n) = 0, so the uncertainty equation becomes

u(A)/A = u(Σx)/Σx

There is *NO* sqrt(n) appearing in the uncertainty equation anywhere.

You HAVE to use relative uncertainty because Σx has a different dimensional expression than n.

This is simply metrology. You can find it anywhere. Be it in ISO documents such as the GUM, in metrology tomes such as those from Taylor, Bevington, etc., or anywhere on the internet (go here for example: https://web.mit.edu/fluids-modules/www/exper_techniques/2.Propagation_of_Uncertaint.pdf)
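For concreteness, a Python sketch of this propagation chain (the readings are hypothetical; note that how u(Σx) itself is formed, quadrature versus straight addition, is precisely what is in dispute in this thread):

```python
import math

# Propagation as described above: A = sum(x)/n with u(n) = 0, so
# u(A)/A = u(sum(x))/sum(x).  Readings and per-reading uncertainty
# are hypothetical; quadrature is used for u(sum(x)) here, while
# straight addition would give a larger value.
xs = [28.0, 27.5, 28.5, 28.0]
u_each = 0.5

sum_x = sum(xs)
u_sum = math.sqrt(len(xs)) * u_each   # quadrature: sqrt(4)*0.5 = 1.0
A = sum_x / len(xs)
u_A = (u_sum / sum_x) * A             # relative uncertainty carried to A
```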

Reply to  nyolci
December 2, 2024 10:29 am

I think you should at least try to understand these things first before you “pontificate” about them.

I think you should apply your pontifications to yourself.

Reply to  Anthony Banton
December 2, 2024 1:47 am

You do realise that you are also saying that the MCA/LIA is inaccurately accredited?

They never realise this. They can live with internal contradictions in their thought. They are railing against the Hockey Stick graph and then speak about the Minoan, Egyptian, and perhaps the Swedish, Mongolian, and Czechoslovakian Warm Periods with a straight face.

Anthony Banton
Reply to  nyolci
December 2, 2024 3:19 am

Yes, it is one reason I post on here – to witness the cognitive dissonance and the extremes to which denizens can take it.

Reply to  nyolci
December 2, 2024 7:06 am

The Hockeystick erased the MWP. It has a flat handle.
That was the first clue that it was flawed. It didn’t match the historical record.

Compare it to Lamb’s work of 1965:
The early medieval warm epoch and its sequel – ScienceDirect
It clearly states in the abstract:

Changes of prevailing temperature and rainfall in England between periods of 50–150 years duration around 1200 and around 1600 are found which, on all the evidence at present available, probably amounted to 1.2–1.4°C and 10% respectively. Changes in some seasons of the year may have exceeded these ranges of the annual mean.

Mann’s Hockey stick was global, not CET but it required belief in the MWP being a local event. This was already unrealistic when the Hockeystick was first published.

In no way do I under-rate the importance of M&M’s work in discovering how Mann got it wrong. But we always knew he got it wrong.

Jeff Alberts
Reply to  MCourtney
December 2, 2024 7:51 am

“Mann’s Hockey stick was global, not CET but it required belief in the MWP being a local event. This was already unrealistic when the Hockeystick was first published.”

It was reportedly global, but in reality it was all US Southwest Bristlecone Pines.

Anthony Banton
Reply to  Jeff Alberts
December 2, 2024 9:21 am

“it required belief in the MWP being a local event.”

No, it simply required enough global proxies to establish a comprehensive depiction.
Mann was simply the first to do it.
And the addition of further proxies has continued to be the case since.

Reply to  Anthony Banton
December 2, 2024 9:44 am

It was mostly based on the CO2 data collected by Dr. Idso, who says it wasn’t a temperature proxy; thus the “hockey stick” paper is a fraud!

Reply to  Anthony Banton
December 2, 2024 10:30 am

Just like averaging the output of climate models?

Sure, keep up the propaganda.

Reply to  MCourtney
December 2, 2024 8:57 am

but it required belief in the MWP being a local event

I love when you come up with bs like this. In global (or hemispheric) reconstructions the MWP simply doesn’t show up. This doesn’t need any belief in any premise. FYI, we now have multiple, independent modern reconstructions that all agree. Conclusion: if it existed at all, it wasn’t global. On closer look the MWP and especially the LIA look like a North Atlantic phenomenon. Pretty much in line with historic observations.

Reply to  nyolci
December 2, 2024 9:45 am

Now the warmist lies come flowing in. There have been many posts here of MWP events showing up in many places in the southern hemisphere.

Reply to  nyolci
December 2, 2024 7:25 pm

I love when you come up with bs like this. In global (or hemispheric) reconstructions MWP simply doesn’t show up

To Banton and nyolci.

Please give your explanation for the higher northern latitudes being warmer (quite a bit warmer, in fact) than today for a period exceeding three centuries during the MWP and RWP, with the rest of the planet being ”colder” than it is today, and why proxies taken at these sites and at much more southerly locations – including the southern hemisphere – agree with the empirical evidence below.
Perhaps you have a model? Lol.

Below is one of many examples around the arctic. (it is still much too cold for trees to grow there BTW).

(image: ancient tree)
Reply to  Mike
December 3, 2024 4:55 am

+100

Refutation of religious dogma is always difficult. Just ask Martin Luther.

Reply to  Mike
December 3, 2024 5:58 am

Their stock answer/dodge: “The arctic isn’t ‘global’!”

Reply to  Mike
December 4, 2024 3:24 am

Love that tree-stump science! 🙂

I notice there is no reply to your comment from the climate alarmists. Perhaps they don’t have an explanation for how those tree stumps got under the ice.

The logical conclusion has to be that it was warmer in the past than it is now for trees to be able to grow in those locations.

Reconstruct that!

And tree-line science shows it was warmer in the past than it is today, all over the world, i.e., globally.

Anthony Banton
Reply to  MCourtney
December 2, 2024 9:08 am

“The Hockeystick erased the MWP. It has a flat handle.”

Who’s to say it was ever there in the first place?
Science advances – if you want to go back to H H Lamb’s schematic then it never would.
Better and more proxies have been uncovered since then.
Many more.
That you don’t like them is a damn shame.

If Mann “got it wrong”, then so did dozens of climate scientists since. ….

(comment image)

Reply to  Anthony Banton
December 2, 2024 9:51 am

Better and more proxies have been uncovered since then.

Proxies only give a ΔT, and nothing about actual absolute temperature.

Tell us what the average absolute temperature was at each point of your graph and what the dispersion (standard deviation) of absolute temperatures that can be attributed to the mean temperature actually is.

Reply to  Anthony Banton
December 2, 2024 10:13 am

Fairy stories based on the non-physical idea of a “global” temperature.

Reply to  Anthony Banton
December 2, 2024 10:31 am

Are you a goalie?

Rich Davis
Reply to  Anthony Banton
December 2, 2024 6:05 pm

A thousand Elvis impersonators all say “Thank you verruh much”

Reply to  Anthony Banton
December 2, 2024 7:09 pm

Ha ha ha ha ha ha ha ha ha ha.
tree rings….Ah ha ha ha…Tree rings show mainly how wet it was.
That graph is worth less than a mosquito fart.

Reply to  nyolci
December 2, 2024 5:17 pm

They are railing against the Hockey Stick graph and then speak about the Minoan, Egyptian, and perhaps the Swedish, Mongolian, and Czechoslovakian Warm Periods with a straight face.

Why don’t you tell us what the temperatures were during these time periods? If you can’t determine the absolute temperatures, then you are just blowing smoke to cover up your inability to provide necessary data.

Reply to  Jim Gorman
December 3, 2024 5:19 am

Why don’t you tell us what the temperatures were during these time periods?

Something like the third hit in Google: https://www.nature.com/articles/s41597-020-0530-7/figures/1 And yes, the caption speaks about the base period and uncertainties. Contrary to what you always claim.

Reply to  nyolci
December 3, 2024 6:08 am

±1K in the year minus 10000? Absurd.

And yes, the caption speaks about the base period and uncertainties.

Climatology (and you) doesn’t understand that error is not uncertainty, especially standard error from statistics.

You have zero clues about which you yap.

Reply to  karlomonte
December 3, 2024 8:34 am

±1K in the year minus 10000? Absurd.

Yeah, you surely know. You have dozens of publications in the field, I guess.

Reply to  nyolci
December 3, 2024 10:00 am

No, you fool, I understand what real measurement uncertainty is, dividing sigma by root-N ain’t it.


Reply to  Anthony Banton
December 2, 2024 6:33 am

You continue to believe that anomalies are a gauge of absolute temperatures. They are not! Tell us what average temperature is for these two anomalies:

Station 1 – +1
Station 2 – +2
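A minimal sketch of why this question has no unique answer (the baselines below are hypothetical, chosen only to illustrate):

```python
# Two stations, two anomalies, hypothetical base-period means: the mean
# anomaly fixes nothing about the mean absolute temperature.
baseline = {"station_1": 25.0, "station_2": -5.0}   # deg C, assumed
anomaly  = {"station_1": 1.0,  "station_2": 2.0}

# Absolute temperature is recoverable only when the baseline is known:
absolute = {k: baseline[k] + anomaly[k] for k in baseline}

mean_anomaly  = sum(anomaly.values()) / len(anomaly)    # 1.5
mean_absolute = sum(absolute.values()) / len(absolute)  # 11.5

# Change either baseline and mean_anomaly stays 1.5 while mean_absolute
# moves arbitrarily: the anomalies alone cannot answer the question.
```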

Reply to  Jim Gorman
December 2, 2024 8:52 am

You continue to believe that anomalies are a gauge of absolute temperatures.

This is hilarious that you don’t understand even this extremely simple thing. Really the entry level for freshmen.

Reply to  nyolci
December 2, 2024 9:43 am

Explain how absolute temperatures can be deduced from anomalies.

Reply to  Graemethecat
December 2, 2024 11:34 am

Explain how absolute temperatures can be deduced from anomalies.

There is an operation called “addition”. That’s the way you go.

Reply to  nyolci
December 2, 2024 11:49 am

Er, no. How do you know that all the anomalies have the same reference temperature?

Reply to  Graemethecat
December 2, 2024 12:06 pm

He has no idea. So how can he deduce the “climate” that generated the anomalies?

Reply to  Graemethecat
December 2, 2024 2:57 pm

How do you know that all the anomalies have the same reference temperature?

They don’t. The reference temp is always specified. Just you have to read the fokkin papers.
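Illustrating with made-up numbers: anomalies published on different base periods differ by exactly the difference between the baselines, which is why the reference period must be stated (or restored) before series can be compared.

```python
# Anomalies from different base periods are not directly comparable;
# the stated reference must be restored. All numbers below are made up.
base_1961_1990 = 10.0   # hypothetical station mean over 1961-1990, deg C
base_1991_2020 = 10.6   # the same station's (warmer) 1991-2020 mean

obs = 11.2              # one observed monthly mean

anom_old = obs - base_1961_1990   # anomaly w.r.t. 1961-1990
anom_new = obs - base_1991_2020   # anomaly w.r.t. 1991-2020

# Same measurement, two different anomaly values; they differ by exactly
# the difference between the two baselines.
shift = base_1991_2020 - base_1961_1990
assert abs((anom_new + shift) - anom_old) < 1e-9
```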

Reply to  nyolci
December 2, 2024 12:05 pm

Location 1 has a +1C anomaly. What temperatures were involved in determining that anomaly?

Location 2 has a +1C anomaly. What temperatures were involved in determining that anomaly?

Use your “addition operation” to determine each answer. Show your work.

Reply to  Tim Gorman
December 2, 2024 1:22 pm

nyolci: *crickets*

Reply to  Graemethecat
December 3, 2024 5:24 am

nyolci: *crickets*

Yeah, you really think that. This is the saddest part. You’re so lost in the sauce… To be honest, it is extremely hard to keep up with you, guys, and for all the wrong reasons. See https://en.wikipedia.org/wiki/Brandolini%27s_law for further information.

Reply to  Tim Gorman
December 3, 2024 5:22 am

Use your “addition operation” to determine eah answer. Show your work.

I don’t do any work in this field. I just read what scientists write. Here’s a graph I obtained in 30 sec using the very convoluted search phrase “temperature reconstructions”. https://www.nature.com/articles/s41597-020-0530-7/figures/1 It states how the uncertainties are obtained (and actually lists them in the accompanying article), it states the reference period for the anomalies, and, mind you, the article describes how the proxies were handled to get temps.

Reply to  nyolci
December 3, 2024 5:37 am

I don’t do any work in this field. I just read what scientists write.

No kidding! We wouldn’t have known! What field do you work in and are physical measurements crucial to your work?

If you have no expertise in physical measurements how do you assess the uncertainty of the scientific papers you read?

What first made me a sceptic was climate scientists increasing resolution through averaging. That violated everything I had ever learned in upper level lab classes in college. The second thing was throwing away uncertainty in the measurements while calculating anomalies and then claiming the standard deviation of the mean of much smaller numbers showed how accurate they calculated the changes.

Hokem pokum from the beginning, and boy have they sucked you in. We called this kind of magic FM. F’ing Magic.

Reply to  Jim Gorman
December 3, 2024 6:11 am

Exactly.

Reply to  Jim Gorman
December 3, 2024 6:35 am

What field do you work in and are physical measurements crucial to your work?

It’s called telecommunications and, you may guess it, nowadays it’s done with the computer. Physical measurements are not crucial to what I do NOW. But, mind you, “measurement and instrumentation” is literally my mayor so – whether you like it or not – measurements is my field. And I can recognize how ignorant you are in the theory. You literally get almost everything wrong. BTW, back in the old days I actually worked in that field (we mostly “measured” sound though for various purposes).

Reply to  nyolci
December 3, 2024 6:46 am

And I can recognize how ignorant you are in the theory.

Another case of Unskilled and Unaware.

Reply to  nyolci
December 3, 2024 4:53 pm

If measurements is your field then you should have a multitude of reference material and experience in performing measurement uncertainty calculations. Show us some quotes from the books you have studied and use everyday. What NIST references do you use?

I too worked for Southwestern Bell Telephone and AT&T. A number of jobs. Design of special service lines and line testing. Supervised installation and ongoing maintenance on O carrier, N carrier, and T carrier. Designed and project managed many digital central office installations, both WE & NT. Lots of statistical work forecasting and determining circuit quantities, expense and capital budgets.

If you make measurements for guarantees of service levels you should be aware of the legal requirements that must be met in contractual arrangements and regulatory reporting. Do you meet ISO requirements for certification of your results? Let us see the uncertainty budgets you have built that meet ISO design criteria.

Reply to  Jim Gorman
December 4, 2024 3:04 pm

If measurements is your field then you should have a multitude of reference material and experience in performing measurement uncertainty calculations.

Yep, I do.

What NIST references do you use?

I have never used the NIST. I graduated in Hungary. We used Hungarian material. Also, our studies were structured very differently. I’m just realizing how different it was. We had a very great emphasis on theory that’s seemingly completely missing from the US engineering curriculum.

Reply to  nyolci
December 4, 2024 3:28 pm

Theory is covered quite adequately in US engineering curriculum. The problem is that theory many times assumes a simple universe which doesn’t match reality. Just ask any electrical engineer that has ever designed a circuit based on *THEORY*. Real world materials *never* match assumed theory. It certainly sounds as if it was *your* curriculum that was lacking in how to apply theory to the real world. That is a major learning experience in every engineering lab I have been in or monitored. Part of that learning experience is understanding the limitations of measurement.

One of the first books we used as a freshman EE student was “Electronic Components and Measurements” by Wedlock and Roberge from MIT.

At the very start of the book, Section 2.2.10, “results” the book says:

“One of the primary objectives of laboratory work is to clarify theory and indicate when and how well it applies in practical situations. Plot theoretical and experimental results on the same graph whenever possible”.

Part of that “clarify” is laying out a measurement uncertainty budget in order to separate it out from differences caused by actual component variation.

It’s pretty damn obvious that you never learned that basic lesson – one that engineers here in the US are (or at least used to be) expected to understand.

Reply to  Tim Gorman
December 5, 2024 2:01 am

Theory is covered quite adequately in US engineering curriculum.

Good on you.

The problem is that theory many times assumes a simple universe

I have some bad news to you. This is exactly the other way around. Most things used in practice are approximations. Very good approximations but still.

It certainly sounds as if it was *your* curriculum that was lacking in how to apply theory to the real world.

You’re again getting into Cloud Cuckoo land. FYI theory here is that we had 9 semesters of mathematics, 2 semesters of physics, etc. as the base. Of course we had the subjects from the field but in the first 2 years the emphasis wasn’t on them. At that time this was usual for each engineering field in Hungary. 4 years to Bachelor’s. I had the 5 years Master’s. And these were based on a very good high school education. This was called the Prussian (German) system. Now we have, unfortunately, the Bologna system (this is the EU name) that’s likely essentially the same as in the US, 3-3.5 years to Bachelor’s. I spent one year in Australia in a tech uni as a student (computer science). I was surprised… no, shocked how better the Hungarian uni (with the fraction of the funding) was. It was truly a shock. I guess this is gone by now.

“Plot theoretical and experimental results on the same graph whenever possible.”

Yeah, here the “theoretical” is when you assume perfect components with zero impedance wires. It doesn’t mean “theory” can only handle perfect components. The problem is too many variables and extreme mathematical complexity. So we use approximations and empirical values (ie. we just measure it and use the resultant value, or we use statistics). I remember you guys were raging when in a climate model they allegedly used statistics to model a certain marginal phenomenon. Remember that.

Reply to  nyolci
December 5, 2024 4:58 am

“I have some bad news to you. This is exactly the other way around. Most things used in practice are approximations. Very good approximations but still.”

ROFL! That’s why the speed of light had to be arbitrarily defined instead of measured or calculated based on theory, right?

“Yeah, here the “theoretical” is when you assume perfect components with zero impedance wires”

No, the operation of a transistor is based in quantum mechanics. That means it is a statistical function describing how an electron can “pass” through an energy barrier. That means there is no “perfect” description of the characteristics of a transistor.

Apparently all of your education never touched on the subject of quantum mechanics and things like “shot noise”.

Your expertise in physical science is shown by the idiocy of your assertions. Your education is irrelevant. An educated fool is still a fool, he just doesn’t recognize himself as a fool

Reply to  Tim Gorman
December 5, 2024 6:28 am

That’s why the speed of light had to be arbitrarily defined instead of measured or calculated based on theory, right?

I’m kinda at loss here. I don’t see the relevance of this.

No, the operation of a transistor is based in quantum mechanics.

I still have some bad news to you. Everything is based on quantum mechanics. The so called classical laws are asymptotic approximations. But even with the classical laws you have to apply approximations. You don’t even need quantum mechanics for that.

Apparently all of your education never touched on the subject of quantum mechanics and things like “shot noise”.

You completely miss the topic and then come at these outlandish assertions. Congratulations.

Reply to  nyolci
December 5, 2024 8:18 am

In other words, you know nothing of shot noise and its impact on signal analysis. And you claim to be educated in measurement practice. What AI bot are you using to feed you this garbage?

Reply to  Tim Gorman
December 5, 2024 1:31 pm

In other words, you know nothing of shot noise and its impact on signal analysis.

I didn’t know it was an exam 🙂 You and your tiring urge to prove you know more than me is… very tiring.

Reply to  nyolci
December 5, 2024 2:39 pm

I’m not proving that I know more than you. I am proving that you don’t know nearly as much as you think you do!

Reply to  Tim Gorman
December 6, 2024 1:45 am

I’m not proving that I know more than you.

No this is a true assertion at last.

Reply to  nyolci
December 5, 2024 8:33 am

Everything is based on quantum mechanics. The so called classical laws are asymptotic approximations. But even with the classical laws you have to apply approximations.

Your basic argument is that only the perfect is correct.

I am reminded of an engineer’s admonition that one should not let perfect be the enemy of good. To prove your argument you need to show why good is not sufficient.

If I can measure the output of a transistor amp to millivolts and even microamps, do I need to know energy down to an individual electron calculated through quantum theory? NO!

Your whole argument is nothing more than a red herring posted by a troll.

Reply to  Jim Gorman
December 5, 2024 1:32 pm

Your basic argument is that only the perfect is correct.

No, and it’s very surprising how you could misunderstand what I said.

Reply to  nyolci
December 5, 2024 8:51 am

At that time this was usual for each engineering field in Hungary. 4 years to Bachelor’s. I had the 5 years Master’s

Funny how you have all this education but you never provide a reference or quote from the books you studied. Why is that?

Have you never studied any further from your graduation to learn new things? Things like all the JCGM documents, metrology books on measurement uncertainty issued subsequent to the issue of the first GUM, NIST or Eurachem or other national documents about measurements.

Why do you never use any of these to support your assertions? You do know everyone knows why don’t you?

Reply to  Jim Gorman
December 5, 2024 1:50 pm

Funny how you have all this education but you never provide a reference or quote from the books you studied

Why the fokk should I provide this? We had normal uni textbooks, most of those had been printed in the local uni press. One relatively famous was Villamosságtan from Simonyi who was the father of Charles Simonyi, a coincidence. Villamosságtan means the Theory of Electricity, sounds stupid in English.

Why do you never use any of these to support your assertions?

This is climate science. I don’t have to support my assertions ‘cos these are not my assertions. I’m not a climate scientist. I’m just telling you you should read them. It’s not my duty to make you think, and frankly, I don’t think you can think.

Reply to  nyolci
December 5, 2024 4:07 pm

You *do* have to support your assertions. Otherwise you have no basis for making any assertions.

You don’t need to be a climate scientist to know that you can’t average intensive properties. Any self-respecting reader that knows even a minimum about physical science would know that.

Anyone familiar with metrology knows that you have to propagate measurement uncertainty. You can’t just wish it away the way climate science AND YOU do.

I have read lots of climate science studies. Almost none of them handle measurement uncertainty properly. Pat Frank does. Hubbard and Lin do. I can’t really think of any other prominent climate scientist that doesn’t use the “random, Gaussian, and cancel” meme to justify ignoring measurement uncertainty.

Reply to  Tim Gorman
December 6, 2024 2:26 am

You *do* have to support your assertions.

Again, these are not my assertions. This is just science. If science says so, that should settle any debate.

Reply to  nyolci
December 6, 2024 5:54 am

It’s not “just science”. It’s “just BAD science”.

And you simply cannot recognize bad science because you don’t know science at all!

Reply to  nyolci
December 6, 2024 8:20 am

If science says so, that should settle any debate.

Wow, that is likely the most ignorant statement I’ve ever seen on this site. That is not how science works.

Reply to  nyolci
December 6, 2024 10:25 am

This is climate science. I don’t have to support my assertions ‘cos these are not my assertions.

Yes you do need to support your assertions. You continually make assertions supported by Appeal to Anonymous Sources. Here is a web site that explains this in detail.

https://www.thoughtco.com/logical-fallacies-appeal-to-authority-250336

Appeal to Anonymous Authority

The Appeal to Anonymous Authority is, essentially, giving testimony or advice that refers to unnamed sources, such as making a statement based on what “experts” say or what “historians” contend, without ever naming the sources. This calls into question the validity of the testimony.

Instead of identifying who this authority is, we get vague statements about “experts” or “scientists” who have “proven” something to be “true.” This is a fallacious Appeal to Authority because a valid authority is one who can be checked and whose statements can be verified. An anonymous authority however, cannot be checked and their statements cannot be verified.

This is your modus operandi. You have obviously never had a management job where decisions must be supported by facts, not innuendos.

Reply to  Jim Gorman
December 6, 2024 10:24 pm

You continually make assertions supported by Appeal to Anonymous Sources.

I appeal to the actual authority, scientists.

Reply to  nyolci
December 7, 2024 4:04 am

You TOTALLY missed the point.

You *NEVER* specify *which* actual authority you are referring to. No actual name and no actual reference.

Thus your argument is fallacious and is named Appeal to Anonymous Sources.

Reply to  Tim Gorman
December 9, 2024 2:49 pm

You *NEVER* specify *which* actual authority you are referring to

Science is not a bazaar. There are no multiple versions of the laws. Science is when it is settled, and then it is just one thing, one authority. This is the thing we know.

Reply to  nyolci
December 3, 2024 6:26 pm

Measurement is *NOT* your field. If it was you would understand about measurement uncertainty. The only one getting literally everything wrong is you.

If you were involved in measuring sound in telecommunications then you should be familiar with the various weighting schemes such as A, C, and Z. And you should also know that you must specify exactly what the measurement protocol should be for in order to know what weighting scheme to apply. Use of the wrong weighting scheme creates a huge measurement uncertainty in what is actually measured.
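For reference, the A-weighting curve is defined by a published formula (IEC 61672); a sketch of it shows how large the differences between weighting schemes actually are:

```python
import math

# IEC 61672 A-weighting: gain in dB applied to a tone of frequency f (Hz)
# relative to flat (Z) weighting. Constants are from the standard.
def a_weight_db(f):
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00  # +2.00 dB normalizes to 0 at 1 kHz

# A-weighting is ~0 dB at 1 kHz but attenuates 100 Hz by roughly 19 dB,
# so reporting an A-weighted level for low-frequency content understates
# it badly -- hence the importance of stating the weighting used.
```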

My guess is that you don’t have a clue as to what I am talking about!

Reply to  Tim Gorman
December 4, 2024 3:05 pm

Measurement is *NOT* your field.

Yes, it is, however you whining.

Reply to  nyolci
December 3, 2024 6:10 am

Bullshit. Those fantasy graphs do not show measurement uncertainty.

Reply to  karlomonte
December 3, 2024 8:36 am

Those fantasy graphs do not show measurement uncertainty.

Yep, again. People who confuse Austria with Australia. A very simple scientific paper is an insurmountable obstacle for them.

Reply to  nyolci
December 3, 2024 10:03 am

The word “uncertainty” is nowhere in the caption, yet you claim that it is.

Yer nothing but a joke as a troll. Appeal to Authority, look it up.

Reply to  karlomonte
December 5, 2024 1:49 am

The word “uncertainty” is nowhere in the caption, yet you claim that it is.

Jesus Christ… Failing such a simple test. The third sentence (that is in the second row on my browser) starts with this: “The uncertainties for each method”

Reply to  nyolci
December 3, 2024 6:16 pm

Did you bother to actually *read* the papers involved?

For example: “Two of the methods — composite plus scale (CPS) and pairwise comparison (PAI) — generate composites by standardizing the temperature variance across proxy time series, then restoring it to a target value at the aggregated level. The term “scaling” is used in this paper to refer to matching the variance of a composite to that of a target, a technique commonly used for large-scale climate reconstructions that rely on proxy data that have not been calibrated to temperature, including those focusing on the past millennium.”

There is a reason why it is difficult to calibrate proxy data to temperature. A prime example is assuming that tree ring width is dependent on temperature – when insect populations, shading from other growth, competition for moisture with other growth, etc., are all *more* important to tree ring width than temperature. Since these factors are not known and can *never* be known, the measurement uncertainty associated with tree rings is *very* wide when it comes to temperature.

Go look at the table listing the uncertainties. Much of it has to do with lake and marine sediments. Those sediments are modulated by *many* different factors – again leaving the measurement uncertainty quite large for actual temperature. The uncertainties can’t even be added: because the dimensions are different for each one, you must use relative uncertainties, and the base value to use as the denominator is indeterminate. No functional relationship is given for how the various factors combine to give a total!

Reply to  Tim Gorman
December 3, 2024 6:34 pm

Yet nyolci says:

But, mind you, “measurement and instrumentation” is literally my mayor so – whether you like it or not – measurements is my field.

Yet he can’t even recognize that relative uncertainties can’t be calculated due to missing data.

From : https://academic.oup.com/treephys/article-abstract/30/6/667/1619936?redirectedFrom=fulltext

Way and Oren (2010) found that increased temperature generally increases tree growth, except for tropical trees. They suggest that this probably occurs because temperate and boreal trees currently operate below their temperature optimum, while tropical trees are at theirs. The response of growth to temperature was not simply accelerating the same trajectory of ontogeny achieved at current temperatures. Remarkably, temperature shifted the trajectory. Warmer trees were taller and skinnier, with more foliage and fewer roots!

Taller and skinnier. I wonder how skinnier affects tree ring width. It would seem to mean that as temps increase, ring width is smaller. Who knew?

Reply to  Jim Gorman
December 4, 2024 4:14 am

I suspect that his “major” consisted of being a telephone repair tech being taught how to use his test instrument to check out customer lines on problem complaints. Thus his claim that measurement uncertainty is given by the instrument tag and that averaging can eliminate some systematic uncertainty!

Reply to  Tim Gorman
December 4, 2024 3:07 pm

Did you bother to actually *read* the papers involved?

Yes.

There is a reason why it is difficult to calibrate proxy data to temperature.

This is a field on its own. With its experts. And you are not among them.

Reply to  nyolci
December 4, 2024 3:33 pm

Bullcrap! You didn’t even know enough to identify that the uncertainties of the various factors in the paper have different dimensions thus requiring the use of relative uncertainty. You didn’t know enough to even know that a functional relationship of the various factors is needed in order to determine the weighting of the relative uncertainties of each factor.

Understanding how to perform measurement uncertainty protocols does *NOT* require knowing anything about the actual biological factors at all!

This is nothing more than you using the argumentative fallacy of Appeal to Authority once again. And it’s not even a valid use of the fallacy, because the authorities didn’t even perform actual measurement uncertainty propagation in the papers!

Reply to  nyolci
December 2, 2024 5:30 pm

There is an operation called “addition”.

In other words, “I don’t have a clue.”

Why is no one surprised?

Reply to  nyolci
December 2, 2024 5:23 pm

This is hilarious that you don’t understand even this extremely simple thing. Really the entry level for freshmen.

Yet your inability to explain why anomalies are not absolute temperatures illuminates your lack of knowledge about the subject.

Your ad hominems only take up space and add nothing of substance. That is what uneducated trolls do.

Reply to  Jim Gorman
December 3, 2024 5:34 am

Yet your inability to explain why anomalies are not absolute temperatures illuminates your lack of knowledge about the subject.

I don’t understand your problem here. Anomalies are called “anomalies” precisely because they are not absolute temps but a difference from a certain, stated value (usually something like 1850-1950 local avg). They use anomalies because in Thailand and Cuba, the temp is like 22-33 C during the year, and in Hungary it is like -5 – 40C. If you average them, you get a value that doesn’t tell you anything. But if you average the anomalies (calculated w/r/t the local annual/seasonal/monthly, whatever you want, average during the years 1xxx to 1xxx) you will see how they are moving. In every other respect, anomalies behave exactly like absolutes (well, to be accurate, under some very specific, rarely happening conditions, the usage of anomalies eliminates certain systematic errors, but this is not the reason why they use them).
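A minimal sketch of that procedure, with made-up station numbers (Cuba-like and Hungary-like annual means):

```python
import statistics

# Each station's anomaly is its reading minus that station's own
# base-period mean, so stations with very different absolute climates
# can be compared. All numbers below are illustrative.
cuba    = [26.0, 26.2, 26.1, 26.5, 26.8]   # annual means, deg C
hungary = [10.0, 10.2, 10.1, 10.5, 10.8]

def anomalies(series, base_years=3):
    base = statistics.mean(series[:base_years])  # base period: first 3 yrs
    return [round(t - base, 2) for t in series]

# Absolute averages differ by 16 C, yet the anomaly series are identical:
assert anomalies(cuba) == anomalies(hungary)
```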

Reply to  nyolci
December 3, 2024 6:13 am

 the usage of anomalies eliminates certain systematic errors

More standard climatology bullshit, you just wish this to be true (it isn’t). Subtraction increases measurement uncertainty!

Reply to  nyolci
December 3, 2024 5:48 pm

If an average of an intensive property is garbage, then so is any anomaly you create from those garbage averages. Garbage In, Garbage Out.

Nor do anomalies tell you anything about *climate*. A change from -20C to -19C won’t change the climate at all.

If you want to get some *good* information on climate changes then go look at the changes noted in the hardiness zone annual studies. You’ll find that they change very little – meaning climate has changed very little!

Reply to  Tim Gorman
December 5, 2024 1:51 am

If an average of an intensive property is garbage

It’s not garbage. You have to properly weight them, that’s all.

A change from -20C to -19C won’t change the climate at all.

A change of 1C in the global average is pretty much, unfortunately.

Reply to  nyolci
December 5, 2024 4:47 am

“It’s not garbage. You have to properly weight them, that’s all.”

Nope. Does the sum of 20C and 10C = 30C make any physical sense at all? If not then the average makes no physical sense either. There is no way to weight them to make physical sense.

The sum of 20kg and 10kg *does* make physical sense so using that sum to calculate a physical average makes sense as well.

Why do you persist in making a fool of yourself when it comes to physical science?

“A change of 1C in the global average is pretty much, unfortunately.”

There is no global climate. There is a sum of local and regional climates. If a change of 1C doesn’t change the local or regional climate then how can it change the global climate?

Reply to  Tim Gorman
December 5, 2024 6:36 am

If not then the average makes no physical sense either.

The average of 15C is actually the temperature you get if you mix two buckets of water of 10 and 20C. Actually, temperature is just average internal energy per molecule per degree of freedom. So 15C is the average temperature of two buckets of water of 20 and 10 C, without even mixing them. (Same mass per bucket.) So it does make physical sense.
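For what it’s worth, the bucket case both sides invoke generalizes to a mass-weighted mean (assuming equal specific heats; the parcel numbers are illustrative):

```python
# Mixing temperature of water parcels, mass-weighted (constant specific
# heat assumed). Equal masses of 10 C and 20 C water give 15 C; unequal
# masses do not give the simple arithmetic mean.
def mix_temp(parcels):
    """parcels: list of (mass_kg, temp_C) tuples."""
    total_mass = sum(m for m, _ in parcels)
    return sum(m * t for m, t in parcels) / total_mass

assert mix_temp([(1.0, 10.0), (1.0, 20.0)]) == 15.0
assert mix_temp([(3.0, 10.0), (1.0, 20.0)]) == 12.5   # not 15
```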

If a change of 1C doesn’t change the local or regional climate

If there’s a change of 1C in the local or regional climate, that’s a change in the local or regional climate. I know why you’re confused. A change of 1C in global climate does not necessarily mean a similar (or indeed, any) change in the climate of a certain locality. You may not notice it where you live. It doesn’t mean the change is not happening.

Reply to  nyolci
December 5, 2024 8:32 am

When you mix the liquids you no longer have two objects from which to calculate an average. You have one object that has an intensive property of temperature.

Two different objects cannot have their average kinetic energies averaged into one value. That’s like saying a car travelling at 5 mph and one at 10 mph have an average kinetic energy of 7.5 mph. That makes no physical sense at all.

You are back to the “numbers is numbers” nonsense so you can average anything whether it makes physical sense or not.

If a 1C change does not change local climate then it can’t change the global climate either. I keep telling you that if you want to know about *real* climate change then go look at changes in hardiness zones.

Reply to  Tim Gorman
December 5, 2024 1:54 pm

Two different objects cannot have their average kinetic energies averaged into one value.

Sorry, I didn’t know that 🙂

If a 1C change does not change local climate then it can’t change the global climate either.

No one has said that it cannot. Sorry, this is so tiring… Comprehension, this burning problem of you, Gormans.

Reply to  nyolci
December 5, 2024 4:11 pm

“Sorry, I didn’t know that”

Obviously.

“No one has said that it cannot. Sorry, this is so tiring… Comprehension, this burning problem of you, Gormans.”

That’s the entire assertion you have been trying to defend. Climate change would be denoted prominently in hardiness zone changes if it were actually happening – especially catastrophically, as the CAGW proponents assert! Yet I have seen no such change. In fact deserts are blooming all over the globe today – meaning climate there is actually *improving*. Global grain harvests keep on setting records every year – meaning climate is actually improving. Where is the catastrophe in CAGW coming from?

Reply to  Tim Gorman
December 6, 2024 2:33 am

Climate change would be denoted prominently in hardiness zone changes if it were actually happening

You have attributed a claim to me that I haven’t made, as far as I understand it. You said that I said that “a 1C (global) change does not change local climate”. I only said (and this is the position of science, coincidentally) that there are localities where this change doesn’t show up. It doesn’t mean this change doesn’t show up anywhere. Of course most localities are affected.

Yet I have seen no such change

I don’t doubt that you feel like that. But your feelings are irrelevant in a scientific debate.

Reply to  nyolci
December 3, 2024 6:02 pm

the usage of anomalies eliminates certain systematic errors, but this is not the reason why they use them.

So much bullsh*t.

1. A monthly average temperature is derived from a random variable containing daily temperatures. That random variable has a mean “μ” and a variance “σ²”. The individual measurements also have a measurement uncertainty that is added to the variance of the random variable.

Please try to tell us that isn’t true.

2. A baseline temperature is derived from a random variable containing 30 monthly average temperatures. That random variable has a mean “μ” and a variance “σ²”. The individual measurements also have a measurement uncertainty that is added to the variance of the random variable.

Please try to tell us that isn’t true.

3. The monthly anomaly is computed by SUBTRACTING the mean of the baseline random variable from the mean of the monthly random variable.

As you should know, when subtracting means of random variables, the variances are ADDED (the standard uncertainties combine by RSS). That should result in each anomaly having an uncertainty in the ±2 to ±4 °C range.

As you know, that uncertainty is thrown in the trash can and is not propagated into subsequent calculations using anomalies.

God forbid you would show an anomaly of 0.01 ±2°C. I would be embarrassed too!

The best you and climate science can do is state that anomalies eliminate uncertainty. If you are an expert in metrology, show us the math that justifies the claim. The least you can do is find a paper that has the math you are unable to provide.
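The propagation rule invoked in steps 1–3 can be sketched numerically. This is a minimal illustration of the root-sum-square combination for a difference of two means; the uncertainty values are made-up round numbers for illustration, not actual station statistics.

```python
import math

def rss_combine(u_a: float, u_b: float) -> float:
    """Combined standard uncertainty of a difference a - b,
    assuming uncorrelated inputs (root-sum-square rule)."""
    return math.sqrt(u_a**2 + u_b**2)

# Hypothetical uncertainties (illustrative only):
u_monthly = 1.5   # uncertainty of a monthly mean temperature, deg C
u_baseline = 1.2  # uncertainty of the 30-year baseline mean, deg C

u_anomaly = rss_combine(u_monthly, u_baseline)
print(f"anomaly uncertainty = +/-{u_anomaly:.2f} C")
```

Note the combined value is larger than either input; subtraction never shrinks the combined standard uncertainty of uncorrelated quantities.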

Reply to  Jim Gorman
December 5, 2024 1:54 am

A monthly average temperature is derived from a random variable containing daily temperatures. That random variable has a mean “μ” and a variance “σ²”.

Yes. And when the measurements are independent, the variance of the average is the variance divided by the number of samples. In other words, the stdev of the mean is sigma divided by the square root of the number of samples.
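That standard-error relationship (for independent, identically distributed samples) is easy to check by simulation; the numbers below are synthetic Gaussian draws, not temperature data.

```python
import random
import statistics

random.seed(42)
sigma, n, trials = 2.0, 100, 2000

# Draw many samples of size n and look at the spread of their means.
means = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))
         for _ in range(trials)]

spread = statistics.stdev(means)
print(f"observed stdev of means: {spread:.3f}")
print(f"sigma / sqrt(n):         {sigma / n**0.5:.3f}")  # 0.200
```

The simulated spread of the sample means clusters around sigma/sqrt(n); the dispute in this thread is over what that quantity does and does not tell you, not over the arithmetic itself.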

Reply to  nyolci
December 5, 2024 4:49 am

“A change of 1C in the global average is pretty much, unfortunately.”

That only tells you how closely you have calculated the mean. It tells you nothing about the accuracy of that mean. The accuracy of the mean is the standard deviation of the data set. No dividing by “n”.

Reply to  Tim Gorman
December 5, 2024 6:37 am

The accuracy of the mean is the standard deviation of data set. No dividing by “n”.

Exactly. That’s a bit more complicated, involving weighting, squaring and square rooting. The stdev of the global mean is actually very low.

Reply to  nyolci
December 5, 2024 8:41 am

The standard deviation of the mean is NOT the measurement uncertainty of the mean. It is solely a metric for sampling error. Saying that the standard deviation of the sample means is also the measurement uncertainty implies that at the limit the measurement uncertainty could be zero.

Why do you persist with this garbage? 🗑️

Anthony Banton
Reply to  Jim Gorman
December 2, 2024 9:15 am

And you continue to think that nothing can ever be known due to your precious uncertainties.

Reply to  Anthony Banton
December 2, 2024 9:46 am

BWAHAHAHAHAHA!!!

So, you couldn’t answer it.

Reply to  Sunsettommy
December 2, 2024 10:16 am

Neither Banton nor nyolci ever do.

Reply to  Anthony Banton
December 2, 2024 10:32 am

your precious uncertainties

Just another indication of your complete ignorance of metrology.

Reply to  Anthony Banton
December 2, 2024 10:41 am

I can know that a change from 1.5 +/- .5 to 2.5 +/- 0.5 is an actual change since it is outside the uncertainty interval. I do NOT know if a change from 1.5 +/- 0.5 to 1.6 +/- 0.5 is an actual change or merely an artifact resulting from the vagaries of the measuring instrument or environment.
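One standard way to formalize this kind of comparison is the E_n-style ratio used in metrology: divide the observed difference by the root-sum-square of the two uncertainties, and treat values above 1 as a resolvable change. A minimal sketch using the comment's numbers:

```python
import math

def e_n(x1: float, u1: float, x2: float, u2: float) -> float:
    """E_n-style ratio: difference normalized by the combined
    uncertainty. |E_n| > 1 means the change exceeds the uncertainty."""
    return abs(x2 - x1) / math.sqrt(u1**2 + u2**2)

print(e_n(1.5, 0.5, 2.5, 0.5))  # ~1.41 -> resolvable change
print(e_n(1.5, 0.5, 1.6, 0.5))  # ~0.14 -> not resolvable
```

This makes the comment's two cases quantitative: the 1.0-degree change clears the combined uncertainty, while the 0.1-degree change is buried well inside it.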

Reply to  Tim Gorman
December 2, 2024 11:39 am

I can know that a change from 1.5 +/- .5 to 2.5 +/- 0.5 is an actual change since it is outside the uncertainty interval.

And you’re wrong. You can’t even say that for sure. I mean for 100% certainty. This is a good illustration how deeply ignorant you are even in the field you like to be seen as an expert.

Reply to  nyolci
December 2, 2024 12:19 pm

No uncertainty interval is ever perfect for covering all possible outliers. But if the measurements are taken using the same instrument under the same conditions then the uncertainty interval should cover all reasonable values that can be assigned to the measurand.

You’ve basically been reduced to arguing about the number of angels on the head of a pin. You may as well argue that we can “never” be 100% certain of anything. You are arguing metaphysics, not plain physical science.

Reply to  Tim Gorman
December 2, 2024 2:58 pm

No uncertainty interval is ever perfect for covering all possible outliers.

At last.

Reply to  nyolci
December 2, 2024 3:57 pm

Malarky.

You are the one that asserted you can never know anything about temperatures.

nyolci: “we can’t say anything about the temperature in Nevada. Not to mention the contiguous United States. Nothing. Apparently the same applies to any other type of measurement. Is this what you say?”

Apparently you think that unless you know all possible values that you cannot estimate an uncertainty interval. That’s just as wrong as it can be.

From the GUM: “Further, in many industrial and commercial applications, as well as in the areas of health and safety, it is often necessary to provide an interval about the measurement result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the quantity subject to measurement.”

It does *NOT* say that the uncertainty interval must include *ALL* possible values that can result from a measurement.

From the GUM: “uncertainty (of measurement)
parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand”

(all bolding is mine, tpg)

Neither of these says the uncertainty interval should encompass all possible values, only those that can “REASONABLY BE ATTRIBUTED TO THE QUANTITY SUBJECT TO MEASUREMENT”.

Thus you *can* know a temperature in Las Vegas subject to an uncertainty interval that is REASONABLE. There is no requirement that the interval include unreasonable outlier values.

Reply to  nyolci
December 2, 2024 1:21 pm

And you’re wrong. You can’t even say that for sure.

You have no idea what an uncertainty interval truly means do you.

From the GUM 0.4:

Further, in many industrial and commercial applications, as well as in the areas of health and safety, it is often necessary to provide an interval about the measurement result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the quantity subject to measurement. Thus the ideal method for evaluating and expressing uncertainty in measurement should be capable of readily providing such an interval, in particular, one with a coverage probability or level of confidence that corresponds in a realistic way with that required. (bold by me)

The common choice for a simple interval is 1 sigma (~68%). Most scientific and industrial uses require 2 sigmas (~95%). No one requires 100%. One might just as well quote the entire range of measurements, which defeats the purpose of having well-proven statistical parameters of different probability distributions.
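The approximate coverage percentages quoted here follow from the normal distribution; assuming Gaussian dispersion, they can be reproduced with the standard library alone:

```python
import math

def normal_coverage(k: float) -> float:
    """Probability that a normally distributed value falls
    within +/- k standard deviations of the mean."""
    return math.erf(k / math.sqrt(2.0))

print(f"1 sigma: {normal_coverage(1):.1%}")  # 68.3%
print(f"2 sigma: {normal_coverage(2):.1%}")  # 95.4%
print(f"3 sigma: {normal_coverage(3):.1%}")  # 99.7%
```

No finite k gives 100% coverage, which is exactly the point being made: coverage intervals are stated with a probability, not as a guarantee.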

Reply to  Jim Gorman
December 2, 2024 3:00 pm

You have no idea what an uncertainty interval truly means do you.

Apparently Tim has no idea. He said the “true value” should be in that interval with 100% certainty. He later backpedaled.

The common use of of a simple interval is to use 1 sigma (~68%). Most scientific and industrial use require 2 sigmas (~95%)

Exactly. Good boy.

Reply to  nyolci
December 2, 2024 3:19 pm

“He said the ‘true value’ should be in that interval with 100% certainty. He later backpedaled.”

No, that is *NOT* what I said. Stop putting words in my mouth. If you want to quote me then quote me. Don’t make up words for me that I didn’t say.

Reply to  Tim Gorman
December 3, 2024 5:35 am

No, that is *NOT* what I said.

Yeah, this is exactly what you said, quote:

I can know that a change from 1.5 +/- .5 to 2.5 +/- 0.5 is an actual change since it is outside the uncertainty interval

Reply to  nyolci
December 3, 2024 6:15 am

If you understood anything about uncertainty (which you don’t), you might understand Tim’s statement.

Reply to  karlomonte
December 3, 2024 8:44 am

If you understood anything about uncertainty (which you don’t), you might understand Tim’s statement.

Well, what is the meaning of Tim’s statement? I’m eagerly waiting to be entertained. You know, this is the moment when, while we are watching you, we say “wait, he’s gonna do something incredibly stupid”. As a memento, here is Tim’s statement:

I can know that a change from 1.5 +/- .5 to 2.5 +/- 0.5 is an actual change since it is outside the uncertainty interval

Reply to  nyolci
December 3, 2024 10:05 am

Post the entire quote, fool.

Reply to  karlomonte
December 4, 2024 3:10 pm

Post the entire quote, fool.

Trying to sneak out? 😉

Reply to  nyolci
December 4, 2024 3:35 pm

Do you think it goes unnoticed that you *won’t* provide the entire quote? Who’s actually doing the sneaking out?

Reply to  Tim Gorman
December 5, 2024 4:34 am

Do you think it goes unnoticed that you *won’t* provide the entire quote?

Okay, see above, I provided the entire quote. That is, of course, not making the whole thing better 🙂 There’s no omitted context that would change the meaning of your bs.
BTW if you just say it was just sloppy wording I would say “okay”, ‘cos I’m pretty sure it was. But somehow you guys are so idiotic that you even go into denial here.

Reply to  karlomonte
December 5, 2024 4:32 am

Post the entire quote

You claimed I didn’t understand Tim’s statement. You surely didn’t need the full statement to say that. You were just bsing. But anyway, here it is, the full post, and it is very clear to both of us that it doesn’t actually give any additional context to the original. Actually, it makes it even worse.

I can know that a change from 1.5 +/- .5 to 2.5 +/- 0.5 is an actual change since it is outside the uncertainty interval. I do NOT know if a change from 1.5 +/- 0.5 to 1.6 +/- 0.5 is an actual change or merely an artifact resulting from the vagaries of the measuring instrument or environment.

Reply to  nyolci
December 5, 2024 6:00 am

Meaning you are *still* relying on the idiocy that measurement uncertainty must include all *unreasonable* possible values as well as all reasonable values – so you can never know anything for “certain”, i.e. all measurement uncertainty is from negative infinity to positive infinity.

Even after being given quotes from recognized references that contradict that.

And you *still* haven’t learned how to use relative uncertainty.

Reply to  nyolci
December 3, 2024 11:48 am

Well, what is the meaning of Tim’s statement? I’m eagerly waiting to be entertained.

From Experimentation and Uncertainty Analysis for Engineers. Page 8

The confidence specification is necessary because we have made an estimate. We can always be 100% confident that the true value of some quantity will be between plus and minus infinity, but specifying Uₓ as infinite provides no useful information to anyone. It is not necessary to perform an experiment to find that result.

And, as importantly.

In sample-to-sample experiments, measurements are made on multiple samples so that in a sense, sample identity corresponds to the passage of time in timewise experiments. In sample-to-sample experiments, the variability inherent in the samples themselves causes variations in measured values in addition to the random errors in the measurement system. (Bold by me)

If this is unintelligible to you, I suggest you take some time to study measurement theory before you ask questions because of ignorance of the subject.

Reply to  nyolci
December 3, 2024 5:53 pm

Good Lord! Uncertainty intervals are based on possible REASONABLE values. You are arguing that UNREASONABLE possible values have to be considered as well and so you can never be sure of anything. You didn’t understand a word I said about why increasing sample size can actually *increase* uncertainty, did you?

Reply to  nyolci
December 2, 2024 6:24 pm

Neither Tim nor I have ever said the true value is 100% in the uncertainty interval. That is your interpretation.

If you had a clue, you would know that a measurement has no guarantee of even being “the true value”. Unknown systematic uncertainty can make the measured stated value quite different from the real true value and cannot be evaluated with statistical analysis.

Your preoccupation with statistics illustrates your ignorance of measurements. Statistics DO NOT determine a measured value. Statistics are used to define common and standard parameters of a measurement probability distribution. The probability distribution is defined by the measurements that are taken and not by some statistical analysis.

That means when I give a stated value, a standard deviation, and a probability distribution, everyone on the globe knows what my measurements consisted of.

You can’t even give a standard definition of what you are claiming about a measurement like Global Average Temperature.

Reply to  Jim Gorman
December 3, 2024 5:37 am

Neither Tim nor I have ever said the true value is 100% in the uncertainty interval. That is your interpretation.

Well, Tim just said that, quote:

I can know that a change from 1.5 +/- .5 to 2.5 +/- 0.5 is an actual change since it is outside the uncertainty interval

And, by the way, you’re talking about the true value 🙂 how come? You used to avoid it with religious fervor.

Reply to  nyolci
December 3, 2024 5:56 pm

No one has said that a true value doesn’t exist. As the GUM points out, there is a difference between knowing that a true value exists and knowing *what* the true value is. The quotes have been provided to you from the GUM on this. As usual, you just ignore anything outside your limited religious dogma.

Reply to  Tim Gorman
December 4, 2024 3:13 pm

No one has said that a true value doesn’t exist.

All right then. But why have you had a brain meltdown every time I used it in an example? (Always as an abstract entity, of course)

Reply to  nyolci
December 3, 2024 6:18 pm

And, by the way, you’re talking about the true value 🙂 how come?

Because a true value DOES exist. However, a measurement is an ESTIMATE of the true value. The estimate is known as the stated value. The uncertainty is an interval characterizing where the true value is likely to lie.

If you are a measurement expert, why are you asking these basic questions? I learned these basics in first-semester chemistry, physics, and electronics labs. Learning how to characterize resistance using a Wheatstone bridge was enlightening when considering the uncertainty in the voltage supply, in the reference resistances, and in the milliamp meter. Physics was no less enlightening.

How did you skip all this when learning measurement theory?

Reply to  nyolci
December 2, 2024 2:41 pm

And you’re wrong.

And you are blathering about something you know next to nothing about.

Typical climate science.

Reply to  karlomonte
December 2, 2024 3:01 pm

And you are blathering about something you know next to nothing about.

Well, I have to point out that it was Tim who was in the wrong here. And I just pointed that out to him. And he had to admit that.

Reply to  nyolci
December 2, 2024 4:07 pm

You people (to include Mann, Stokes, etc.) aptly demonstrate why climate science is really a liberal art and not a quantitative physical science.

Reply to  nyolci
December 2, 2024 4:08 pm

Total and utter bovine excrement. All you have done is exhibit an absolute ignorance about physical science. I just left you two quotes from the GUM stating that uncertainty intervals encompass REASONABLE possible values, not *all* possible values as you assert.

Nor are physical attributes unknowable because of uncertainty growing as you add elements. You don’t even understand the difference between absolute uncertainty and relative uncertainty – which is why you claim that an average value for the temperature in Nevada can’t be known!

In fact, my guess is that you don’t even know that you can’t grow sample size without bound. As Bevington points out in his tome, as the number of sample elements grows so does the possibility of getting unreliable outlier values because of random fluctuations. That alone militates against growing the number of temperature measurement stations without bound; you’ll just wind up with more and more unreliable results contaminating the data set!

You *need* to learn something about physical science before you come on here and start lecturing everyone about metrology and measurement uncertainty. One of the purposes behind measurement uncertainty intervals is to allow others repeating your experiment or measurements to judge whether their results are reasonable or not. That does *NOT* mean that the uncertainty interval has to include all possible values, even those that are unreasonable due to random fluctuations, in order to be able to *know* something about that measurand.
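Bevington’s point about outliers, cited above, can be illustrated by simulation: for Gaussian noise, the largest deviation seen in a sample grows roughly as sqrt(2 ln n) standard deviations as the sample size n grows. A synthetic sketch (made-up noise, no real data):

```python
import math
import random

random.seed(1)

# As the number of samples grows, so does the largest random
# deviation observed (roughly sqrt(2 ln n) sigma for Gaussian noise).
for n in (100, 10_000, 1_000_000):
    worst = max(abs(random.gauss(0.0, 1.0)) for _ in range(n))
    theory = math.sqrt(2.0 * math.log(n))
    print(f"n={n:>9}: largest |deviation| = {worst:.2f} sigma "
          f"(theory ~ {theory:.2f})")
```

Bigger samples reliably contain bigger extreme excursions, which is the mechanism behind the claim that adding stations without bound brings in more outliers.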

Reply to  Tim Gorman
December 2, 2024 5:30 pm

You *need* to learn something about physical science before you come on here and start lecturing everyone about metrology and measurement uncertainty.

This will never happen.

Reply to  Tim Gorman
December 3, 2024 5:40 am

That does *NOT* mean that the uncertainty interval has to include all possible values

I’m happy that you’ve corrected your ways. The last time you were talking about this you told me this gem:

I can know that a change from 1.5 +/- .5 to 2.5 +/- 0.5 is an actual change since it is outside the uncertainty interval

Reply to  nyolci
December 3, 2024 6:18 am

How many more times will you be spamming this nonsense?

Reply to  karlomonte
December 3, 2024 11:57 am

Forever. Willful ignorance is the worst kind of ignorance. He still hasn’t figured out that you must use relative uncertainty with averages and not summed absolute uncertainty.

Reply to  Tim Gorman
December 3, 2024 2:30 pm

He hasn’t gotten past true values and error.

Reply to  nyolci
December 3, 2024 11:54 am

You STILL don’t get it! If the REASONABLE interval of values for measurement two is outside the interval of reasonable values for measurement one then I am REASONABLY sure that there has been an actual change.

Do you have even the slightest clue of the definition of the word “reasonable”?

You are still trying to push the metaphysical garbage that we can never be 100% sure of anything so we can never identify a change in anything. You are just making a bigger and bigger fool of yourself.

Have you figured out what relative uncertainty is yet?

Reply to  Tim Gorman
December 5, 2024 4:42 am

If the REASONABLE interval of values for measurement two is outside the interval of reasonable values for measurement one then I am REASONABLY sure that there has been an actual change.

This is actually correct even if you omit (as you did) the word “reasonable”. Every scientific result is “reasonably” true. If the error (or “uncertainty”) is low, we just say it’s reasonable to believe. This is how “consensus” is based on results. This is when we say this is “settled”. See?

Reply to  nyolci
December 5, 2024 6:02 am

“Every scientific result is ‘reasonably’ true.”

No, it isn’t. The global average temperature is *not* reasonably true.

Reply to  Tim Gorman
December 5, 2024 6:39 am

The global average temperature is *not* reasonably true.

When you write nonsensical sentences, don’t you have a bad feeling?

Reply to  nyolci
December 5, 2024 8:43 am

You can’t average intensive properties. You have yet to refute that.

Reply to  Tim Gorman
December 5, 2024 1:56 pm

You can’t average intensive properties.

Yes, you can. You have to weight them properly.

Reply to  nyolci
December 5, 2024 4:15 pm

You *can’t* weight intensive properties. Averaging requires adding property values together into a physically meaningful total. Adding 10C and 20C together does *NOT* result in a physically meaningful total.

You can add masses and get a physically meaningful number. You cannot add temperatures and get a physically meaningful number.

The meme of “numbers is numbers” you are asserting is just as much garbage as the meme “all measurement uncertainty is random, Gaussian, and cancels”. Yet you keep on trying to assert both of them, right along with climate science!

Reply to  Tim Gorman
December 6, 2024 4:52 am

You *can’t* weight intensive properties

Yes, you can. Okay, you have to convert them to an extensive property first. Temperature is just average kinetic energy per molecule per degree of freedom. If you multiply temperature by the number of molecules and the degrees of freedom, you get total internal energy. You add these internal energies and divide by the total number of molecules and the degrees of freedom. The latter is the same everywhere, so it cancels out. The number of molecules is proportional to the mass, which is broadly proportional to volume, which in turn is broadly proportional to the cell size. So if you just weight by cell size (as they do), you get a very good estimate for the average temp of the two (or N) cells. The cell size thingie is not that straightforward, and this is one thing (among many) why climate science is a science, a field of its own. They know how to do these things. These people, by the way, include Roy Spencer, because he needs to do these operations for his own series, and he’s regularly doing this weighting. Now I know you’re an idiot, so I don’t expect you to understand this, but at least this is here.
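The area-weighting step described above can be sketched with hypothetical grid-cell values. On a regular latitude–longitude grid the usual weight is proportional to cos(latitude), which stands in here for the “cell size” mentioned; the temperatures are made-up numbers, not data:

```python
import math

# Hypothetical grid-cell mean temperatures (deg C) and their latitudes.
cells = [(25.0, 0.0), (15.0, 45.0), (-5.0, 75.0)]  # (temp, latitude)

# Weight each cell by cos(latitude), proportional to its area
# on a regular lat-lon grid.
weights = [math.cos(math.radians(lat)) for _, lat in cells]
weighted_mean = sum(t * w for (t, _), w in zip(cells, weights)) / sum(weights)

print(f"area-weighted mean: {weighted_mean:.2f} C")
print(f"unweighted mean:    {sum(t for t, _ in cells) / len(cells):.2f} C")
```

The two means differ because low-latitude cells cover more area; whether such a weighted mean is physically meaningful is exactly what the two commenters are disputing.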

Reply to  nyolci
December 6, 2024 6:20 am

“Okay, you have to convert them to an extensive property first.”

And exactly how does climate science convert temperature to an extensive property that can be analyzed?

” If you multiply temperature with the number of molecules and degrees of freedom, you get total internal energy.”

Does climate science do this? Or do they just average the intensive property of temperature?

Spencer doesn’t even average “temperature”. He converts microwave irradiance into a temperature. Yet nowhere can I find a functional relationship between temperature and microwave irradiance that takes into consideration that microwave irradiance is affected by water vapor in the atmosphere – commonly known as path loss. This isn’t just clouds, because there is water vapor in the atmosphere even when there are no clouds. This was a major factor we had to consider when designing microwave links carrying telephone traffic from point to point. You had to use a water vapor path loss factor based on the worst case, i.e. rain! Yet nowhere can I find that RSS modifies their samples of microwave irradiance based on whether it is raining over the sample area!

The use of cell size weighting is, once again, to minimize SAMPLING ERROR. It is *not* for estimating measurement uncertainty!

You keep saying you are a metrology expert, trained at the university level, but nothing you ever offer up makes any metrology sense whatsoever!

BTW, ENTHALPY (an extensive property) is what you are talking about converting temperature into. I have advocated for years that climate science should convert to using enthalpy instead of temperature but none of the climate science supporters will even address the point let alone agree with it.

Using enthalpy would allow direct comparison of the environments in Las Vegas and Miami where temperature does not. Nor does climate science even weight the temperatures based on enthalpy; they just average the temperatures and say that is an “average climate”!

You should also note that the satellites *can’t* use enthalpy because the satellites don’t measure humidity and pressure at the sampling points. So they are stuck without the capability of averaging extensive properties.

Reply to  Tim Gorman
December 6, 2024 7:00 am

Does climate science do this?

Yes. With area-weighting.

The use of cell size weighting is, once again, to minimize SAMPLING ERROR. It is *not* for estimating measurement uncertainty!

No. I’ve already told you why they use that. BTW I don’t think you understand much about sampling, your comment has made that clear.

Yet no where can I find a functional relationship between

And you think, in turn, that they don’t take it into account. This is peak Dunning-Kruger 🙂

Using enthalpy would allow direct comparison of the environments in Las Vegas and Miami where temperature does not.

Enthalpy is energy. Not some kinda energy per something, just energy. In itself it’s useless in comparisons. Compare the enthalpy of a village to the enthalpy of the province of British Columbia. But if you scale it with, say, area, you get something like joule/m2. That’s a thing that is comparable to other areas. I think you understand where this is going… (Well, the legendary Gorman stupidity is endless so I don’t hold my breath.)

Reply to  nyolci
December 6, 2024 7:17 am

Area weighting is for SAMPLING ERROR, not measurement error!

“No. I’ve already told you why they use that. BTW I don’t think you understand much about sampling, your comment has made that clear.”

You told us nothing other than that they weight cell areas. That IS FOR REDUCING SAMPLING ERROR!

The fact that you can’t discern that is just one more brick in the evidence wall that you know nothing of metrology!

“And you think, in turn, that they don’t take it into account. This is peak Dunning-Kruger”

No, the proper assumption is that if they don’t address it then they didn’t use it! Science isn’t based on faith, it is based on evidence. Omit the evidence and you require faith – utter fail.

“Enthalpy is energy.”

YES. And that is what *YOU* described!

nyolci: “you get total internal energy”

“That’s a thing that is comparable to other areas.”

But climate science doesn’t do that! That’s why they consider 70F in Las Vegas to be equivalent to 70F in Miami!

The bottom line, which you refuse to admit, is that you *can* average extensive properties such as enthalpy but you CAN NOT average intensive properties such as temperature!

So you converted to using enthalpy as a red herring. It doesn’t apply to climate science averaging temperature. That you had to resort to a red herring is proof that you really understand you can’t average temperature but just don’t want to admit it!

Reply to  Tim Gorman
December 6, 2024 10:29 pm

That’s why they consider 70F in Las Vegas to be equivalent to 70F in Miami!

Well, if it’s 70F in both places, these are kinda equivalent 🙂 Anyway, can you show me your comparison using enthalpy? I wanna see some real deranged shxt.

Reply to  nyolci
December 7, 2024 4:45 am

They are *NOT* “kind of equivalent” when it comes to classifying CLIMATE and climate change!

Climate is not just temperature. It is a whole host of factors, including humidity, mass, pressure, wind, terrain, and geography.

The specific enthalpy (h) of moist air is given by the equation

h = h_a + r h_w

where h_a is the specific enthalpy of dry air and h_w is the specific enthalpy of water vapor. r is the humidity ratio.

The enthalpy of dry air is the sensible heat and the enthalpy of water vapor is latent heat. The amount of water vapor in the atmosphere is a significant contribution to the specific enthalpy of moist air. The amount of water vapor in Las Vegas vs Miami is typically very different. Thus the canard about “but it’s a dry heat!”. The latent heat has a physiological impact as well as an enthalpy impact since it affects the ability of sweat to evaporate and cool the body.

I’m not going to try and teach you thermodynamics here. You should have studied that in your physics and engineering courses at university. Open your textbooks and relearn the basics.
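As a numeric illustration of the equation above, a common psychrometric approximation for moist-air specific enthalpy (kJ per kg of dry air, with T in °C and humidity ratio r in kg water per kg dry air) is h ≈ 1.006·T + r·(2501 + 1.86·T). The humidity ratios below are hypothetical round numbers standing in for a dry site and a humid site, not measured data:

```python
def moist_air_enthalpy(t_c: float, r: float) -> float:
    """Specific enthalpy of moist air, kJ per kg of dry air.
    t_c: dry-bulb temperature in deg C; r: humidity ratio (kg/kg).
    Sensible heat of dry air + latent and sensible heat of the vapor."""
    return 1.006 * t_c + r * (2501.0 + 1.86 * t_c)

t = 30.0                                # same air temperature at both sites
h_dry = moist_air_enthalpy(t, 0.005)    # dry desert air (hypothetical r)
h_humid = moist_air_enthalpy(t, 0.020)  # humid coastal air (hypothetical r)

print(f"dry site:   {h_dry:.1f} kJ/kg")
print(f"humid site: {h_humid:.1f} kJ/kg")
```

At the same 30 °C, the humid air carries nearly twice the enthalpy per kilogram of dry air, which is the comparison that temperature alone cannot make.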

Reply to  Tim Gorman
December 8, 2024 11:20 am

I’m not going to try and teach you thermodynamics here

Yeah, ‘cos you’re clueless. Could you answer the question? Compare 30C in Las Vegas, and 30C in Miami using only enthalpy. Use numbers, please instead of bsing.

Reply to  Tim Gorman
December 9, 2024 2:51 pm

Still waiting for your enthalpy-based comparison with numbers.

Reply to  nyolci
December 9, 2024 5:46 pm

Still waiting for your enthalpy-based comparison with numbers.

I’ll answer for Tim. As before, the data are freely available. You do your own work if you feel refutation is warranted.

Requiring others to do your work for you is a dead giveaway that you are unable to perform scientific research on your own.

Reply to  Jim Gorman
December 9, 2024 10:37 pm

As before, the data are freely available

Oh, trying to sneak out, as always 🙂 FYI this is a hypothetical example, with made up numbers, and just a few sentences. I just wonder how you can compare climate using enthalpy. Of course you cannot.

Reply to  Anthony Banton
December 2, 2024 6:05 pm

And you continue to think that nothing can ever be known due to your precious uncertainties.

Not just the folks trying to educate you, but scientists and meteorologists globally.

From the NIST TN 1900:

Measurement uncertainty is the doubt about the true value of the measurand that remains after making a measurement. Measurement uncertainty is described fully and quantitatively by a probability distribution on the set of values of the measurand. At a minimum, it may be described summarily and approximately by a quantitative indication of the dispersion (or scatter) of such distribution.

Like it or not, you are bucking an internationally accepted protocol of assessing measurements. If you disagree with “precious uncertainties”, then you need to show your evaluation of uncertainty and propagation calculations. NIST TN 1900 has a good monthly series of Tmax temperatures everyone can agree with. Use those for your evaluation.

Otherwise you are just wallowing in ignorance while trying to convince everyone that you have some unique interpretation that is better than NIST or JCGM.

Reply to  Jim Gorman
December 3, 2024 6:19 am

Just like the rest of the ruler monkeys, he thinks “error” is measurement uncertainty.

rbabcock
December 1, 2024 5:41 pm

“Mann didn’t just miss the bullseye—he missed the entire dartboard and hit the pub wall.”

Clarence, my dartboard throwing monkey, does this occasionally. Sometimes it’s after I give him a rum spiked banana which he really enjoys. I’m keeping tabs on Clarence’s predictions vs all the weather models and will see if I can get any grant money to consolidate all this into a peer reviewed paper.

Reply to  rbabcock
December 1, 2024 6:45 pm

“Clarence, my dartboard throwing monkey”.. ???

Puzzled… can I ask a question, please.

When Clarence throws a dartboard, does he throw it sort of like throwing a frisbee ?

Rich Davis
Reply to  bnice2000
December 1, 2024 7:30 pm

More like flinging poo I reckon.

Reply to  Rich Davis
December 1, 2024 9:03 pm

Perhaps I should have been more precise.

The throwing of darts to hit certain sections on a dartboard.. that I understand, and quite enjoy.

But throwing dartboards ?????

Bob
December 1, 2024 5:42 pm

Mann needs to follow the same rules as grade school and high school math. Show your work. If he were forced to do that he would likely be making a living as a dishwasher.

Sparta Nova 4
Reply to  Bob
December 2, 2024 9:28 am

Modern “education” no longer requires that. Don’t even have to get the right answer. Just say you know how it works and doing math is not required since it is “related to colonialism” and you get an A.

Bob
Reply to  Sparta Nova 4
December 2, 2024 1:34 pm

I guess I should have said my grade school and high school math classes.

D Sandberg
December 1, 2024 6:52 pm

Wait a minute, we had 18 and Mann predicted 33, so he was more than half right; progressive fact checkers will report his forecast as “mostly true.”

JoeG
December 1, 2024 6:59 pm

Ahh. Seasonal hurricanes pertain to the weather for that year. Mann has proven that weather and climate are not the same.

Rud Istvan
December 1, 2024 7:29 pm

Late to this party as am in the mountains of Colorado visiting my daughter’s family for Thanksgiving. Brought them 10 inches of snow. Last time, brought them 18.

I thought at the beginning of this hurricane season that two things would cause it to be above normal, although I thought Mann’s alarm prediction was ridiculous:

  1. because of a supposed emerging La Niña, and
  2. extra warm surface waters.

Didn’t happen, for two reasons:

  1. La Niña was late
  2. Abnormally high Sahara dust over central Atlantic. That dust even turned Fort Lauderdale sunsets abnormally red this summer.

which proves only that predicting weather is hazardous, especially for climatologists.

Jeff Alberts
Reply to  Rud Istvan
December 1, 2024 9:26 pm

I guess that science is settled after all.

December 1, 2024 7:42 pm

Piltdown Mann, wrong again.

Trump and his new Energy Secretary Wright need to act on day 1 and force a public debate on the data of whether there actually is a climate emergency now.
There isn’t, and we need to put this to bed as it is the font of all bad policy.

Force the cockroaches into the light.

Reply to  Pat from Kerbob
December 4, 2024 3:50 am

Trump or the Energy Secretary could start that debate immediately by either one of them stating that they don’t think CO2 poses any danger and they don’t think it needs to be regulated.

There would be debate after that happened.

Will any Congressional Republicans make such a statement publicly?

How many Republicans in Congress believe CO2 is a danger to humans and needs to be regulated?

Where do Republicans stand on this issue?

December 1, 2024 8:19 pm

Hmm, it would appear that Mann and Paul Ehrlich are competing in highly problematic prognostications!!!

Phillip Bratby
December 1, 2024 11:41 pm

State Pen or Penn State.

Coeur de Lion
December 2, 2024 1:37 am

As a Brit I cringe with embarrassment that the Royal Society (est. 1660) has elected Mann a member.

Ed Zuiderwijk
December 2, 2024 2:45 am

Many of Mann’s critics, including this one, consider his hockey stick a scientific fraud.

December 2, 2024 5:06 am

“there’s nothing inherently wrong with making predictions.”

Correct. Anyone can make predictions…like those loons who predicted that space aliens were going to arrive and beam them up onto their starship back in the ’90s. “Heaven’s Gate” or something like that, right?

The problem isn’t predictions. I predict that I’ll win the lottery next week. As long as I don’t take stupid actions like quitting my job and blowing my life savings on a Hunter Biden Weekend, my prediction is harmless.

The problem is the idiots who take the predictions of these cult leaders seriously and drink the poison being proffered.

What makes them nuts and not just naïve is that they’ll still drink the poison even after the cult leader’s predictions have been proven wrong over and over and over again.