Artificial Alarmism

Guest Post by Willis Eschenbach

Well, I was notified by my long-standing “Google Alert” that my name had appeared in a paper on the web. It was in a study called “Automated Fact-Checking of Climate Change Claims with Large Language Models”, available on arXiv.

The main thrust of the paper is that actually checking the underlying science to see whether climate claims are verifiable or falsifiable is sooo last week … so they’ve built a custom Artificial Intelligence large-language-model to save them from having to, you know … actually think.

They’ve even given it a cute name, the “CLIMINATOR”, in all caps just like that. I kept waiting for it to say “I’ll be back!” in an Austrian accent … but I digress.

First they fed it all of the standard mainstream sources, like, you know, James Hansen, Michael Mann, et al. ad nauseam. Then they added in the IPCC and the “NIPCC”, the Nongovernmental International Panel on Climate Change, established by Fred Singer in 2004 to provide an alternative to the IPCC.

Here’s what first caught my eye … they unleashed the Climinator on the following NIPCC statement:

There has been no increase in the frequency or intensity of drought in the modern era. Rising CO2 lets plants use water more efficiently, helping them overcome stressful conditions imposed by drought.

source: NIPCC, Climate Change Reconsidered

Here’s Climinator’s conclusion:

The final assessment is that the claim about drought frequency and intensity is [[incorrect]], as authoritative sources indicate an increase in drought severity and intensity due to climate change.

This one made me laugh out loud. Why? Because Table 12.12 of the IPCC AR6 WG1 says that no detectable trend has emerged in the following weather extremes.

White squares indicate weather extremes where no trend has been detected, or is projected to emerge in the future. See the part in there about drought? I mean the five parts? The IPCC says to date there is no detected trend in hydrological drought (runoff, streamflow, reservoir storage), aridity (prolonged lack of rain), mean precipitation (average amount of rain), ecological/agricultural drought (plant stress from evaporation plus low soil moisture), or fire weather (hot, dry, and windy).

No. Detected. Trend.

Not only that, but under the most extreme future climate scenario, the impossible SSP5-8.5, none of them are expected to show a trend in the next 75 years except average rain … and that’s supposed to go up in some areas and down in some areas.

So as is common with many AI systems, their whiz-bang Climinator is simply making things up.

Next, I searched for the reference to my name that had triggered the Google Alert, and I found this:

Note that as a cross-check on the CLIMINATOR opinion, they include a judgment on my work from the site Climate Feedback. Here’s the Climate Feedback page.

The first thing that struck me was that the CLIMINATOR folks couldn’t even get the date of my post right, despite the fact that it’s right there in the Climate Feedback claims. Nor did they provide a link to it so folks could check it out themselves.

The Climate Feedback claims relate to a 2020 post of mine called “Looking For Acceleration In All The Wrong Places”. Re-reading that post, a few comments.

First, I didn’t discuss possible global acceleration in sea level rise. I didn’t use the term “global” at all in my post.

Next, although they’ve quoted my words, they gloss over the fact that I analyzed fifteen of the longest-term datasets and found no acceleration in any of them. They quoted my claim, viz:

CLAIM: The long-term tide gauge datasets are all in agreement that there is no acceleration

However, they imply that my statement refers to all the long-term tide-gauge datasets. But in context, when I said that the “long-term tide gauge datasets are all in agreement”, it was clear that my claim only referred to the long datasets that I analyzed. So the claim is misrepresented.

Next, my statement is 100% true—none of the long-term tide gauge datasets I analyzed showed any acceleration.
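For anyone who wants to run this kind of check themselves, here’s a minimal sketch of the test involved. The data below are synthetic (a steady rise plus noise), NOT any actual tide-gauge record; the idea is simply that fitting a quadratic to one station’s record gives you the estimated acceleration as twice the quadratic coefficient:

```python
# Minimal sketch: testing a single tide-gauge record for acceleration.
# Synthetic data only -- a steady 2 mm/yr rise plus noise, with NO
# acceleration built in. Illustrates the method, not any real station.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2020)
sea_level_mm = 2.0 * (years - years[0]) + rng.normal(0, 15, years.size)

t = years - years.mean()                  # center time for a stabler fit
a, b, c = np.polyfit(t, sea_level_mm, 2)  # quadratic fit
accel = 2 * a                             # estimated acceleration, mm/yr^2
slope = b                                 # estimated trend, mm/yr

print(f"trend: {slope:.2f} mm/yr, acceleration: {accel:.4f} mm/yr^2")
```

With no acceleration in the data, the fitted quadratic term comes out near zero; repeat that fit across each long record and you have the substance of the test.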

However, they say that what I said is “Factually Inaccurate” because they claim that global datasets “clearly demonstrate that sea level rise has accelerated”.

Steve McIntyre says you have to watch the pea under the walnut shells. Note that they have totally changed the context of my claim, which only covered the fifteen datasets I analyzed, and which was demonstrably true. Instead, they’re pretending I made claims about global datasets, and then they say they’ve convincingly demolished that impressive straw man.

They go on to say my work “misrepresents a complex reality” because I looked at individual datasets, but results from the satellites “have demonstrated that sea level rise has, in fact, accelerated in recent decades.”

My initial response was that when someone says “in fact”, it often means they’re not certain that it’s a fact, or they wouldn’t mention it.

Next, they’re saying that it’s wrong to look at the parts of a complex system to try to understand how the whole system works. Say what?

In any case, regarding the satellite data they refer to, it was fascinating reading my old post because when it was written, I hadn’t yet quite grasped the nettle concerning how they artificially created the claimed “acceleration” in the satellite data. At that time, I could only say “The changes in trend seem to be associated with the splices.”

I discovered how they did it about a year later, and discussed this in a post entitled Munging The Sea Level Data. Here’s the money graph from that article:

Note that the two earlier satellites have a trend of about 2.6 mm/year, which is about the same as the tide gauges. But the two later satellites show a large change, with the trend lines intersecting at ~2011, where the trend jumps to a 50% higher rate of rise … say what?

Rather than being honest about that obvious problem with the satellite data, the folks keeping the sea level records merely spliced all four of the satellite records together, hid the discrepancy, and voila! 50% SEA LEVEL ACCELERATION, EVERYONE PANIC!
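To see how a splice can manufacture acceleration, here’s a hedged sketch using rough illustrative numbers (~2.6 mm/yr before ~2011 and a ~50% higher rate after), not the actual satellite data. Each segment is a dead-straight line with zero acceleration, yet a quadratic fit to the spliced series reports positive acceleration:

```python
# Sketch: splice two perfectly linear records with different slopes, and
# a quadratic fit "detects" acceleration that neither record contains.
# The slopes (2.6 and 3.9 mm/yr) are illustrative, loosely from the post.
import numpy as np

years = np.arange(1993, 2024, 0.1)
before = 2.6 * (years - 1993)                       # first satellite pair
after = 2.6 * (2011 - 1993) + 3.9 * (years - 2011)  # later pair, ~50% faster
sea_mm = np.where(years < 2011, before, after)      # the splice

t = years - years.mean()
a, b, c = np.polyfit(t, sea_mm, 2)
print(f"spurious acceleration from the splice: {2*a:.4f} mm/yr^2")
```

The point isn’t the exact number; it’s that a kink in a spliced series is indistinguishable, to a curve fit, from genuine acceleration.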

Here’s what it looks like after the bogus splicing.

Note the claimed acceleration.

So. My conclusions from all of this?

• Trying to use Artificial Ignorance to analyze scientific papers is a non-starter. Long-term study by actual humans is how science progresses.

• Picking one single analysis to test, whether mine or anyone else’s, is lazy and deceptive. If they’d had the CLIMINATOR read all of my posts regarding sea level rise, it would at least have had my entire body of work regarding the subject to consider.

• AI is bound to fall into the trap of consensus science. It can’t avoid it. It is operating under the deeply flawed assumption that science is decided by the preponderance of opinion. But as Michael Crichton said,

Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had. Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right. In science consensus is irrelevant.

Michael Crichton

AI of this type is guaranteed to be unable to pick out the one diamond from among the dross. If it were applied in the past, it would undoubtedly have said that Wegener’s theory of Continental Drift was wrong, simply because it is not investigating the underlying science—it is simply checking the consensus of what other scientists have said.

Einstein would no doubt have been assessed by the CLIMINATOR as being wrong because at the time nobody agreed with him. But all scientific advances come from an individual who breaks with the consensus. And because of that, using AI in this manner means that science will never progress—the CLIMINATOR assumes that the past ideas are all correct, and if you don’t agree, you’re wrong, wrong, wrong.

• I was going to send a note to the Corresponding Author … but the paper doesn’t list one.

• I find it hilarious that their cross-check on their results is the deeply alarmist website Climate Feedback. That’s like asking your barber if you need a haircut … what do you think he’s going to say?

My best to all on an overcast but warm winter day here in the redwood forest, with a tiny triangle of the Pacific visible in the far distance between the hills. Two days ago, an awesome bobcat wandered out of the forest and sat in our meadow for a bit. Ah, the hidden strength in its elegant movements … what a glorious world.

w.

Further Reading: You might also be interested in my post Inside The Acceleration Factory, where I discuss the alarmists’ pernicious habit of tacking the bogus spliced satellite data onto the end of the tide gauge data and using that chimera to claim global acceleration … bad scientists, no cookies.

And regarding the reality of the acceleration and deceleration of the rate of sea level rise, I discuss that in my post The Uneasy Sea.

My More Controversial Thoughts: For those, I have my own blog entitled “Skating Under The Ice: A Journal of Diagonal Parking In A Parallel Universe”, and I’m on Twitter/X as @weschenbach. C’mon down!

Same As Always: When you comment, please quote the exact words you are referring to. Avoids heaps of misunderstandings.


Update (Eric Worrall): The factual unreliability of LLMs (Large Language Models) as used by CLIMINATOR is a well-known problem. From IBM’s description of LLMs:

… Model performance can also be increased through prompt engineering, prompt-tuning, fine-tuning and other tactics like reinforcement learning with human feedback (RLHF) to remove the biases, hateful speech and factually incorrect answers known as “hallucinations” that are often unwanted byproducts of training on so much unstructured data. This is one of the most important aspects of ensuring enterprise-grade LLMs are ready for use and do not expose organizations to unwanted liability, or cause damage to their reputation. …

Read more: https://www.ibm.com/topics/large-language-models#:~:text=Large%20language%20models%20(LLMs)%20are,a%20wide%20range%20of%20tasks.

This tendency of AI language engines to present incorrect or totally imaginary “facts” has already landed professionals in serious trouble. In June 2023, lawyers in an injury case inadvertently presented fake precedents to a judge, precedents hallucinated by ChatGPT, another LLM engine.

… Steven Schwartz—the lawyer who used ChatGPT and represented Mata before the case moved to a court where he isn’t licensed to practice—signed an affidavit admitting he used the AI chatbot but saying he had no intent to deceive the court and didn’t act in bad faith, arguing that he shouldn’t be sanctioned.

Schwartz later said in a June 8 filing that he was “mortified” upon learning about the false cases, and when he used the tool he “did not understand it was not a search engine, but a generative language-processing tool.”

Read more: https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/?sh=6ae9363b7c7f


125 Comments
January 28, 2024 1:31 pm

There are four kinds of lies: lies, damned lies, statistics, and AI.

RobPotter
January 28, 2024 2:05 pm

As the (co)author of a paper using artificial neural networks (what people now refer to as AI), I claim some dubious experience of this topic: the usefulness is all about the training dataset and the question being asked.

Large language models (such as ChatGPT) are useless for the purpose of fact-checking because they use a very poor training dataset (basically, whatever websites their programmers choose to include) and are asked to make binary (true or false) decisions on non-binary language data. Anyone using a large language model for fact-checking is deluding themselves or trying to delude other people.

Reply to  RobPotter
January 28, 2024 3:19 pm

whatever works

January 28, 2024 2:09 pm

Artificial Intelligence is, in my opinion, a marketing term with nothing but bulky computer models to back it up. AI seems adept at pattern matching and surveillance, but the rest is GIGO.

AI will never produce a Cmdr. Data, R. Daneel Olivaw, or any self-aware digital entities.

January 28, 2024 2:36 pm

No dispute about the unreliability of the AI tools for such “fact checking”, but who thinks the end result would have been any different if a dozen of the “best” of the activists had sat down for two weeks to make a detailed study of your work? You missed the important talking points, so your work is obviously very wrong “on the face of it”. Nothing more needs to be said.

Writing Observer
Reply to  AndyHce
January 28, 2024 7:59 pm

The process is very different, though. You pay for the computer time and the license to use the AI engine. Much less than a dollar. That dozen “activists” demand top dollar – at least $50 an hour, if not more. Call it $50K. There WILL be job losses! (Which, for once, will be a net benefit to humanity.)

January 28, 2024 2:43 pm

Willis, sleep comfortably in the knowledge that by singling you out for special treatment they must consider you a threat to their propaganda. Otherwise, they would not have gone to the trouble.

Richard Greene
January 28, 2024 3:06 pm

While reading I thought CLIMINATOR was satire … until I looked it up and it was real. But it was in a scientific report so it must be true.

I think all climate models should also have clever names like

Climaggedon Mk. IV

The Russian Polar Bear Model

Climate Confuser 3.0

Broiling 8.5

[2401.12566] Automated Fact-Checking of Climate Change Claims with Large Language Models (arxiv.org)

Reply to  bnice2000
January 28, 2024 6:04 pm

Bears were a regular feature of the Russian circus.

Reply to  It doesnot add up
January 28, 2024 6:18 pm

Probably not Polar Bears, though.

Writing Observer
Reply to  Richard Greene
January 28, 2024 8:02 pm

Brrrr…. The “Glamorous life of a famous model.” These poor women couldn’t wait to get back under the hot lights that they usually complain about!

Reply to  Writing Observer
January 28, 2024 8:34 pm

Oh I don’t know!…. That bear would be all warm and cuddly !

January 28, 2024 3:26 pm

I recently read about the uplift of the ocean floor in the area surrounding the Hunga Tonga-Hunga Haʻapai undersea volcano. How much effect do this and all of the other similar, unfound, events have on ocean level?

2hotel9
January 28, 2024 3:33 pm

Censorship, it is quite high tech now.

ferdberple
January 28, 2024 3:49 pm

Within a session I’ve been able to break the ChatGPT training via repetition. I repeated over and over that contradiction proves an idea to be false, then showed ChatGPT its contradictions. But it looks like the model resets to default weights when you start a new session.
I lost interest at that point because by default it is as dumb as a bag of hammers. (The trainers.)

January 28, 2024 3:50 pm

I took a look at the paper “Automated Fact Checking of Climate Change Claims with Large Language Model” ( https://arxiv.org/pdf/2401.12566.pdf )
It is the product of John Cook and Skeptical Science.

January 28, 2024 4:15 pm

When we think of AI, we tend to think in terms of Sci-fi, like Mr. Data from Star Trek, Skynet in the Terminator, etc. In reality, for now real-world AI is little more than a data aggregator. It’s incapable of looking beyond a perceived consensus, because it’s programmed to discard anything not within the consensus. Consensus defines its reality.

Admin
Reply to  johnesm
January 28, 2024 4:35 pm

Imagine talking to a really stupid person who doesn’t understand anything, but feels an overwhelming compulsion to make stuff up to cover their ignorance rather than admit they don’t know the answer. This is the current state of language AIs.

January 28, 2024 5:49 pm

If sea level rise was decelerating, they would just claim it was because the world’s temperate zone glaciers had less and less ice to melt each year…back it up with a few pics and there you go…positive proof of a crisis.

John Hultquist
January 28, 2024 7:07 pm

 Interesting report by Eric Topol (Ground Truths substack)
Toward the eradication of medical diagnostic errors
Can advances in A.I. help get us there?
On the web, he writes:
But in the years ahead, as we fulfill the aspiration and potential for building more capable and medically dedicated AI models, it will become increasingly likely that AI will play an invaluable role in providing second opinions with automated, System 2 machine-thinking, to help us move toward the unattainable but worthy goal of eradicating diagnostic errors.

The above essay was published at Science on 25 January 2024 as part of their Expert Voices series. This version is annotated with figures and updated with an important new report that came out after I wrote the piece.

mleskovarsocalrrcom
January 28, 2024 7:27 pm

AI and climate change models are equivalent from the perspective that both are subject to programmers’ bias. Ideally there would be no bias but that doesn’t sell.

Writing Observer
January 28, 2024 7:42 pm

Modern AI is a serious threat – to the likes of Stokes, Mosher, Big Oil Bob, etc. Their paychecks are being automated right out of existence.

KevinM
January 28, 2024 8:22 pm

Long-term study by actual humans is how science progresses.

Will that always be true?

observa
January 28, 2024 9:15 pm

Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had. Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right. In science consensus is irrelevant.

Yeah well that’s the beauty of their ensemble of computer models. One of them is right but you don’t know which one deniers so nyar nyar gotcha there and you’re still all doomed.

Keitho
Editor
January 28, 2024 11:22 pm

Looks to me like John Cook was involved in creating this particular monstrosity.

Reply to  Keitho
January 29, 2024 3:29 am

It certainly looks like the kind of thing he’d do. But here are the listed authors

Markus Leippold, Saeid Ashraf Vaghefi, Dominik Stammbach, Veruska Muccione, Julia Bingler, Jingwei Ni, Chiara Colesanti-Senni, Tobias Wekhof, Tobias Schimanski, Glen Gostlow, Tingyu Yu, Juerg Luterbacher, Christian Huggel

ferdberple
January 29, 2024 12:24 am

Any good programmer knows that if you hard code in a large list of names as was done here you have stacked the deck and biased the answer.

A truly intelligent language model should be just a tiny piece of code that bootstraps itself recursively to create an ever more intelligent piece of code.

January 29, 2024 1:11 am

In the light of two facts:

science doesn’t attribute the warming to CO2 until 1950, and
Table 12.12 of the IPCC AR6 WG1 and its data on natural disasters,

the CLIMINATOR’s verdict of “inaccurate” on the following statement is fairly obviously itself inaccurate. I’d like to hear an AGW proponent try to justify it:

“Natural variation explains a substantial part of global warming observed since 1850; no statistical evidence that global warming is intensifying natural disasters, or making them more frequent.”

January 29, 2024 2:01 am

Well Willis – Jeremy Poynton here, we recently “met” on Twitter. Thoroughly enjoyed our dialogue and the finding of common ground. Anyway “suspicious activity” saw me booted off, and I am still awaiting the emailing of a verification code to let me back in. Two weeks on. This is common it seems. Further attempts to rejoin have been hopeless, with bizarre rejection of authentication.

Anyway, well met, and I hope we can again down the line. Fight the good fight, with all thy might, as we used to belt out in chapel at school 😂

LT3
January 29, 2024 6:36 am

Time for a registered cease-and-desist letter, and a cursory warning of a potential multi-million-dollar defamation suit.

January 29, 2024 1:02 pm

Artificial Intelligence isn’t behaving very well.

https://www.reddit.com/r/technology/comments/1ac5jev/poisoned_ai_went_rogue_during_training_and/?rdt=40387

“AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.”

end excerpt

January 29, 2024 6:23 pm

As I understand it, a good bit of “acceleration in sea level rise” is due to the modelled (i.e. imaginary) descent of the sea floor due to more water on top. It is therefore not something you should count on as being part of the real universe, nor is it ever going to be perceived at the shore line when you dip your toes in. It’s very similar in a way to the claim of more rapid global warming by creating cooling in the past with the magic climate change time machine.

Only in the crazy world of climate Armageddon is it more realistic to measure sea level change from space while adding an arbitrary correction for propaganda purposes, than simply visiting the sea shore at regular intervals and measuring how wet your jeans get.

January 30, 2024 11:29 am

“… factually incorrect answers known as “hallucinations” that are often unwanted byproducts of training on so much unstructured data”
Hallucinations built on hallucinations. Awesome.
