Charles Rotter
It’s hard not to notice when Michael Mann’s name appears in the author list of a climate paper—after all, he’s about as synonymous with climate alarm as Al Gore is with PowerPoint slides. One could say spotting his name in a paper about “intensification of the strongest nor’easters” is like finding Waldo in a crowd where everyone’s wearing a red-and-white striped shirt—inevitable, but somehow still amusing.
To the meat of the matter: this paper by Chen et al., published in PNAS in July 2025, claims that the strongest nor’easters affecting the U.S. East Coast are not only getting stronger in terms of maximum wind speed, but also producing more precipitation over time, especially since 1940. Naturally, this finding is attributed to—you guessed it—“a warming world,” though, as is tradition, the underlying uncertainties and methodological sleights-of-hand are tucked away in the statistical shadows.
The paper leans heavily on reanalysis data (ERA5, 1940–2025) and cyclone tracking algorithms to cobble together a historical record, touting its “homogeneity” and “comprehensiveness.” Yet, buried within the technical details, there is acknowledgment that models and data sources are patchy at best, especially in the pre-satellite era—a recurring Achilles’ heel in this field. Indeed, the authors admit:
“The precise significance levels vary depending on the choice of statistical test, time interval, and effective storm radius… Of specific potential concern is the sensitivity of the trend to changes in input data sources during the transition from traditional surface and radiosonde observations in the early part of the record to multisensor observations in later years. However, we find that the trends of interest are even greater in magnitude… if confined entirely to the satellite era (1979–2025)…”
In other words: the “clear finding” that nor’easters are becoming more intense is more “clear” the shorter and more satellite-heavy the dataset. It’s a bit like insisting your cooking skills are improving because you swapped out a foggy bathroom mirror for an Instagram filter—suddenly everything looks better, but is it really you that changed, or the tool?
The authors do recognize the ambiguity and the wobbly ground their conclusions stand on. Previous studies, as they admit, have reached everything from “no significant change in median cyclone intensity,” to a decrease, or an increase.
“There is, as a result of these confounding factors, considerable divergence in future projections of ETC intensity in past studies, with findings ranging from no significant change in median cyclone intensity, to a decrease, or an increase.”
This is climate science in a nutshell: if you don’t like the answer, wait for another model run.
Quantile regression, the paper’s statistical hammer of choice, is used to hunt for trends in the upper tail of nor’easter intensity. The median shows no significant trend—no surprise—but the “upper quantiles” (think: the rare, nasty storms) show a “statistically significant” upward blip. Here, “significant” is a term of art, stretched nearly to the breaking point. As the authors write:
“Trends… become statistically significant at P < 0.10 for quantiles above 0.66. A similarly pronounced increasing trend at higher quantiles is also evident when applying the Mann–Kendall trend analysis… the results overall lead to a clear finding: the strongest nor’easters are becoming stronger.”
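For readers wondering what the Mann–Kendall test mentioned in that quote actually computes, here is a minimal sketch in Python. This is purely my own illustration of the textbook statistic, not the authors' code; it uses the normal approximation and omits tie corrections.

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (normal approximation, no tie correction).

    Returns (S, z, p_two_sided). Positive S suggests an upward trend.
    """
    n = len(x)
    # S counts concordant minus discordant pairs of observations.
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18  # variance of S under the null
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)      # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value
    return s, z, p

# A steadily rising toy series should give a large positive S and a tiny p.
s, z, p = mann_kendall([0.1 * t for t in range(30)])
print(s, round(z, 2), p)
```

Note that the test is nonparametric: it only looks at the sign of pairwise differences, so it says nothing about *how much* stronger a storm got, only whether the ranking drifts upward.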
A P-value of 0.10, in case anyone’s forgotten, means there’s a 10% chance the result is due to random noise. For comparison, most scientific disciplines would require P < 0.05 (or even lower). Here, we’re invited to hang public policy on a confidence threshold that wouldn’t pass muster in most reputable poker games.
Then there’s the matter of reanalysis data, which the authors themselves acknowledge is a stitched-together Frankenstein’s monster of models and sparse measurements, especially in the first half of the twentieth century. If this is the bedrock for billion-dollar policy decisions, it’s no wonder taxpayers feel seasick.
The study’s discussion pivots to a familiar script, predicting more damage, more floods, and (curiously) “the counterintuitive possibility of increased winter cold air outbreaks in regions neighboring the U.S. East Coast.” It seems global warming, much like a Las Vegas magician, can pull any outcome from its hat—hotter, colder, drier, wetter, all roads lead to Rome.
And let’s not overlook the obligatory economic scare numbers:
“The total economic loss from [the Ash Wednesday storm, 1962] was estimated at approximately $3 billion (1962 USD). When adjusted for inflation, a storm of similar magnitude striking today would result in losses exceeding $21 billion (2010 USD)… Accounting for inflation, that would be equivalent to $31 billion, which is in proportion to the typical cost of a major landfalling hurricane.”
One almost expects the next sentence to warn of a biblical plague of frogs, with losses adjusted for inflation.
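For what it's worth, the quoted loss-scaling is just consumer-price-index arithmetic. A sketch using approximate CPI-U annual averages (the 2025 level especially is my own rough assumption, not a figure from the paper):

```python
# Rough CPI-based inflation scaling of the 1962 Ash Wednesday storm loss.
# CPI-U annual averages below are approximate, and the 2025 value is an
# assumption for illustration; this reproduces the back-of-envelope
# arithmetic only, not the paper's damage model.
CPI = {1962: 30.2, 2010: 218.1, 2025: 320.0}

def adjust(amount, from_year, to_year):
    """Scale a dollar amount by the ratio of CPI levels."""
    return amount * CPI[to_year] / CPI[from_year]

loss_1962 = 3e9  # ~$3 billion (1962 USD)
print(round(adjust(loss_1962, 1962, 2010) / 1e9, 1))  # ~21.7 (2010 USD, $B)
print(round(adjust(loss_1962, 1962, 2025) / 1e9, 1))  # ~31.8 (2025 USD, $B)
```

In other words, the scary-sounding $31 billion is the same $3 billion storm seen through six decades of currency debasement, not a forecast of bigger storms.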
Now, about Michael Mann: his presence on this author list is not just a punchline, it’s a calling card. Mann, famous for the “hockey stick” graph that gave Al Gore a PowerPoint and generations of schoolchildren nightmares, has become something of a celebrity meteorologist—equal parts scientist, activist, and legal enthusiast. If his name’s on it, you can bet the conclusion will be that weather is getting worse, and humanity is to blame. It’s less a finding than a branding strategy.
Yet, let’s give credit where due. The authors stop short of outright libel or slander against skeptics, which is more than can be said for certain climate “debates” on social media. Instead, the rhetorical force is channeled into statistical acrobatics and economic extrapolations. The real comedy here is not in the intent to deceive, but in the perennial hope that just one more regression, one more reanalysis, will finally clinch the case for “unprecedented” danger.
In summary, this paper offers a case study in climate science as performance art. There’s an obligatory nod to uncertainty, a parade of statistical significance at thresholds so generous even carnival barkers might blush, and a supporting cast led by Michael Mann, the maestro of the climate anxiety industrial complex. For policymakers and the public, the lesson is simple: always read the fine print—and if the numbers look scary, check who’s holding the calculator.
HT/rhs
From the paper’s abstract:
’With central pressures that sometimes rival those of tropical cyclones, they represent a significant coastal hazard and are often associated with strong winds, heavy snowfall, disruption, and damage.’
In other words, they can be a nuisance, but nothing like a tropical storm that can really wreak havoc via massive flooding.
The nor’easter on March 7, 1962, put a record six feet of water on the outer coastal town of Chincoteague, Virginia (The Eastern Shore News, March 8, 1962). True, they do not reach tropical-storm strength, but by contrast they are very cold, and that denser air gives their winds extra punch.
“ambiguity and the wobbly ground”
Sounds like a good definition of “climate science”. The topic is so complex nobody understands it, which would be fine if everyone admitted as much, rather than pontificating about the future with tremendous political and economic effects.
With a recent (7/10) CNN poll showing that the public is as afraid of climate change in 2025 as they were in 2000 (40% saying they are) I’m certain Mann is going to continue to crank up the fear-o-meter with more of these statistical gymnastics to try to keep himself relevant.
cobble together…
And that’s just about all they do. If only nature would play ball.
“ If this is the bedrock for billion-dollar policy decisions, it’s no wonder taxpayers feel seasick.”
Which is why the taxpayers elected a government to drop most of this sort of nonsense into the dumpster. It’s been more than 30 years of this sort of propaganda persuading no one of anything. But these things do take time. It took 40 years for Piltdown Man to be exposed as a fraud. The supposed evidence for AGW is not much better. Mann should know; he’s another faker of evidence the way Charles Dawson was.
“In summary, this paper offers a case study in climate science as performance art.”
That is an excellent way to put it. May the current administration bring the curtain down, close the theater, and send the actors away.
p=0.10 significance level?
I thought real science used 3-sigma or 6-sigma of deviations before accepting results.
By that standard of “real science” you are into particle-physics territory, well beyond coin-flipping probability.
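For calibration, the sigma thresholds being discussed map onto two-sided p-values as below. This is just the standard normal tail probability, a quick sketch; note that the paper’s P < 0.10 corresponds to only about 1.6 sigma.

```python
import math

def two_sided_p(sigma):
    """Two-sided tail probability of a normal deviate of `sigma` std devs."""
    return math.erfc(sigma / math.sqrt(2))

for s in (1.645, 2, 3, 5):
    print(f"{s} sigma -> p = {two_sided_p(s):.2e}")
# 1.645 sigma -> p ~ 0.10   (the paper's threshold)
# 2 sigma     -> p ~ 0.046  (the usual 0.05 convention)
# 3 sigma     -> p ~ 0.0027
# 5 sigma     -> p ~ 5.7e-7 (particle-physics discovery standard)
```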
And add it to your legal briefs in reparations claims cases against the climate crusades leadership and political enablers.
Better check for monetary rewards from the insurance industry in the continuous test called follow the money.
So they might have found a slight change in bad storms and have attributed that possible change to warming and supposed that warming was caused by burning fossil fuels. The assumption of cause seems wholly unsubstantiated.
Good to see the Open Threads work, thank you Mr. Rotter!
You sent in a story tip. You used the correct protocol by including the string “story tip”. It would have worked exactly the same on any post. Including those two words automatically sends me an email.
Answer me this: How did “one of the most destructive storms ever to affect the mid-Atlantic states” get wound-up on Ash Wednesday of 1962? The Jeep Wagoneer (an early SUV) didn’t arrive until 1963.
When you have support of the media anything is possible.
I decided years ago to not believe anything with his name above it. The mann is like a bad penny, a corrupting force affecting everything he touches.
“A P-value of 0.10, in case anyone’s forgotten, means there’s a 10% chance the result is due to random noise.”
No it does not. It means that if the null hypothesis were correct there would be a 10% chance of getting the observed results.
Full disclosure, I’m a statistical luddite. Your statement is more formal and probably the proper textbook definition, but otherwise how is his non-expert working definition any different from yours?
You have to start with what random noise is and what a null hypothesis is. A null hypothesis is a statement that assumes no relationship between the variables in a hypothesis test. Random noise is unpredictable variation in data, i.e. natural variability.
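Bellman’s definition is easy to demonstrate with a toy Monte Carlo (purely illustrative, nothing to do with the paper’s data): build series where the null is true by construction, and count how often a P < 0.10 test still cries “significant”.

```python
import math
import random

random.seed(42)

def z_pvalue(sample):
    """Two-sided p-value for H0: mean = 0, known sd = 1 (simple z-test)."""
    n = len(sample)
    z = sum(sample) / math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

# 20,000 experiments in which the null hypothesis is TRUE by construction:
# every sample is pure N(0, 1) noise with no signal at all.
hits = sum(
    z_pvalue([random.gauss(0, 1) for _ in range(30)]) < 0.10
    for _ in range(20_000)
)
rate = hits / 20_000
print(rate)  # ~0.10: about one run in ten looks "significant" at P < 0.10
```

That is exactly what a P < 0.10 threshold buys you: with no real effect anywhere, roughly one test in ten will clear the bar anyway.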
Bellman, maybe I’m a little more fact oriented than some “researchers”. If my airline announced that my flight had a 10% chance of falling out of the sky, I wouldn’t take it.
All this statistical folderol is just to obscure the fact that “climate science” is nonsense. No experimental support, and its proponents can’t even dream up reproducible experiments to support their fantasies – because they can’t even describe their dreams in any consistent and unambiguous way.
You wouldn’t think that even 95% certainty was worth risking your personal safety on. If someone handed you a pistol with 20 rounds, but only one was live, cocked and ready to fire, telling you that there was 95% confidence that the round in the chamber was inert, would you hold the gun to your head and pull the trigger?
Nothing is absolutely “certain” in science (the uncertainty principle, and chaos theory, put paid to that), but “certain enough” for me. I drive my car in the “certain” knowledge that the brakes will work, as will the steering. Not 95% confidence, or even 99% confidence. I’m close enough to 100% confident to drive, or even to fly in a “fly-by-wire” aircraft, about which I know almost nothing!
The utterances of a faker, fraud, scofflaw and deadbeat like Michael Mann don’t engender the same level of confidence for me. Maybe you are more ignorant and gullible than I am.
And what is the null hypothesis in this case? Surely it’s that the storm tracks and intensities and so on were random (noise)?
Don’t very strong winter storms require a sharper temperature and pressure gradient? In a warmer world, we would expect that to decrease, not increase, and so if indeed nor’easters were becoming stronger, more intense, with higher wind speeds and precipitation, wouldn’t that run counter to the idea that the mid latitudes were warming significantly?
I’d like to hear how Mann would explain away the lowest barometric pressure ever recorded in Ohio during “The Great Blizzard of ’78”?
The measured pressure was lower than some of the low pressures he and the MSM yell and scream about (and now “name”).
No, because of the geography of the NE US and E Canada.
In winter there will still be deeply cold air masses plunging down through Canada and into the US. That will not change markedly under foreseeable warming, at least in extreme events (polar vortex incursions). Northern landmasses will still be snow-covered and will not warm the air in its advection south. The warm-sector air mass of a NE’er, on the other hand, is sourced from the GoM and is expected to become increasingly warm and humid. Therein lies the necessary tightened delta-T gradient and release of latent heat of condensation that will deepen baroclinic NE’ers, with consequent increased wind strengths and precipitation.
Or, if you invert your desktop world globe, the deeply cold air masses will soar “upward” through Canada into the US. Thus showing that contrary to popular belief and “climate science”, that cold air rises!
People sometimes forget we live on a “globe”, and everything on the surface falls toward the centre of the Earth, not towards the poles or the equator.
GOM is now GOA.
Thank you, Mr President.
Geoff S
upvote with a smile. 🙂
I’m not suggesting that nor’easters won’t form or be powerful, I’m saying that they shouldn’t be any stronger because of warming, and using rare events as evidence of AGW would run counter to their theory. Those storms should be gradually weakening and migrating to the north.
No, as I explained above …. In winter.
I’m amazed anyone would want their name to appear as a co-author along with Michael Mann. Just like the New York Times, Washington Post, CNN, MSNBC, any of the alphabet networks…seeing Michael Mann’s name associated with any story immediately invalidates it to me as being a piece of fraudulent garbage, in my own humble opinion of course!
So it sounds like the new measurement methods don’t give the same results as the old methods. If you are going to compare today’s storms to those in the past, then you must use the same method as they did in the past. Are we somehow incapable of measuring like they did in the past?
An interesting paper. The OP shows his significant bias against Mann, however; for example, you’d think that all the analysis was only done to a P-value of 0.10.
If you read the paper you’d find that’s not the case, for example:
“There is an increasing trend in mean hourly precipitation for an effective storm radius of 750 km (P = 0.055) with warming over the past eight decades. The trend is more significant at larger effective storm radii (SI Appendix, Fig. S9), with a statistically significant increase at the P < 0.05 level for 1,000 km (P = 0.034).”