Guest Post by Willis Eschenbach
Well, I was notified by my long-standing “Google Alert” that my name had appeared in a paper on the web. It was in a study called “Automated Fact-Checking of Climate Change Claims with Large Language Models”, available on arXiv.
The main thrust of the paper is that actually checking the underlying science to see whether climate claims are verifiable or falsifiable is sooo last week … so they’ve built a custom Artificial Intelligence large-language-model to save them from having to, you know … actually think.
They’ve even given it a cute name, the “CLIMINATOR”, in all caps just like that. I kept waiting for it to say “I’ll be back!” in an Austrian accent … but I digress.
First they fed it all of the standard mainstream sources, like, you know, James Hansen, Michael Mann, et al., ad nauseam. Then they added in the IPCC and the “NIPCC”, the Nongovernmental International Panel on Climate Change, established by Fred Singer in 2004 to provide an alternative to the IPCC.
Here’s what first caught my eye … they unleashed the Climinator on the following NIPCC statement:
There has been no increase in the frequency or intensity of drought in the modern era. Rising CO2 lets plants use water more efficiently, helping them overcome stressful conditions imposed by drought.
source: NIPCC, Climate Change Reconsidered
Here’s Climinator’s conclusion:
The final assessment is that the claim about drought frequency and intensity is [[incorrect]], as authoritative sources indicate an increase in drought severity and intensity due to climate change.
This one made me laugh out loud. Why? Because Table 12.12 of the IPCC AR6 WG1 says that no detectable trend has emerged in the following weather extremes.

White squares indicate weather extremes where no trend has been detected, or is projected to emerge in the future. See the part in there about drought? I mean the five parts? The IPCC says to date there is no detected trend in hydrological drought (runoff, streamflow, reservoir storage), aridity (prolonged lack of rain), mean precipitation (average amount of rain), ecological/agricultural drought (plant stress from evaporation plus low soil moisture), or fire weather (hot, dry, and windy).
No. Detected. Trend.
Not only that, but under the most extreme future climate scenario, the impossible SSP5-8.5, none of them are expected to show a trend in the next 75 years except average rain … and that’s supposed to go up in some areas and down in some areas.
So as is common with many AI systems, their whiz-bang Climinator is simply making things up.
Next, I searched for the reference to my name that had triggered the Google Alert, and I found this:

Note that as a cross-check on the CLIMINATOR opinion, they include a judgment on my work from the site Climate Feedback. Here’s the Climate Feedback page.

The first thing that struck me was that the CLIMINATOR folks couldn’t even get the date of my post right, despite the fact that it’s right there in the Climate Feedback claims. Nor did they provide a link to it so folks could check it out themselves.
The Climate Feedback claims relate to a 2020 post of mine called “Looking For Acceleration In All The Wrong Places”. Re-reading that post, a few comments.
First, I didn’t discuss possible global acceleration in sea level rise. I didn’t use the term “global” at all in my post.
Next, although they’ve quoted my words, they are misrepresenting the fact that I analyzed fifteen of the longest-term datasets, and found no acceleration in any of them. They quoted my claim, viz:
CLAIM: The long-term tide gauge datasets are all in agreement that there is no acceleration
However, they imply that my statement refers to all the long-term tide-gauge datasets. But in context, when I said that the “long-term tide gauge datasets are all in agreement”, it was clear that my claim only referred to the long datasets that I analyzed. So the claim is misrepresented.
Next, my statement is 100% true—none of the long-term tide gauge datasets I analyzed showed any acceleration.
However, they say that what I said is “Factually Inaccurate” because they claim that global datasets “clearly demonstrate that sea level rise has accelerated”.
Steve McIntyre says you have to watch the pea under the walnut shells. Note that they have totally changed the context of my claim, which only covered the fifteen datasets I analyzed, and which was demonstrably true. Instead, they’re pretending I made claims about global datasets, and then they say they’ve convincingly demolished that impressive straw man.
They go on to say my work “misrepresents a complex reality” because I looked at individual datasets, but results from the satellites “have demonstrated that sea level rise has, in fact, accelerated in recent decades.”
My initial response was that when someone says “in fact”, it often means they’re not certain that it’s a fact, or they wouldn’t mention it.
Next, they’re saying that it’s wrong to look at the parts of a complex system to try to understand how the whole system works. Say what?
In any case, regarding the satellite data they refer to, it was fascinating reading my old post because when it was written, I hadn’t yet quite grasped the nettle concerning how they artificially created the claimed “acceleration” in the satellite data. At that time, I could only say “The changes in trend seem to be associated with the splices.”
I discovered how they did it about a year later, and discussed this in a post entitled Munging The Sea Level Data. Here’s the money graph from that article:

Note that the two earlier satellites have a trend of about 2.6 mm/year, which is about the same as the tide gauges. But the two later satellites show a large change, with the trend lines intersecting around 2011, after which the rate of rise is some 50% higher … say what?
Rather than being honest about that obvious problem with the satellite data, the folks keeping the sea level records merely spliced all four of the satellite records together, hid the discrepancy, and voila! 50% SEA LEVEL ACCELERATION, EVERYONE PANIC!
Here’s what it looks like after the bogus splicing.

Note the claimed acceleration.
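The splice artifact described above is easy to reproduce with toy numbers. The sketch below (Python/NumPy, all values hypothetical, chosen only to mimic the ~2.6 mm/yr and ~50%-higher trends discussed in the text) splices two perfectly linear segments and then fits a quadratic, which dutifully reports an “acceleration” that neither segment contains:

```python
import numpy as np

# Two synthetic "satellite" records, each perfectly linear
# (hypothetical numbers, mimicking the trends described above):
t1 = np.arange(1993, 2011)                  # earlier satellites: 2.6 mm/yr
t2 = np.arange(2011, 2024)                  # later satellites: ~50% faster
seg1 = 2.6 * (t1 - 1993)
seg2 = seg1[-1] + 2.6 + 3.9 * (t2 - 2011)   # spliced to continue seamlessly

t = np.concatenate([t1, t2]) - 1993.0
msl = np.concatenate([seg1, seg2])

# Quadratic fit: twice the x^2 coefficient is the implied acceleration.
# Each segment has zero acceleration, yet the spliced series "accelerates":
accel = 2 * np.polyfit(t, msl, 2)[0]
print(f"implied acceleration: {accel:.3f} mm/yr^2")
```

The quadratic term comes entirely from the change in slope at the splice point, not from any curvature in either record.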
So. My conclusions from all of this?
• Trying to use Artificial Ignorance to analyze scientific papers is a non-starter. Long-term study by actual humans is how science progresses.
• Picking one single analysis to test, whether mine or anyone else’s, is lazy and deceptive. If they’d had the CLIMINATOR read all of my posts regarding sea level rise, it would at least have had my entire body of work regarding the subject to consider.
• AI is bound to fall into the trap of consensus science. It can’t avoid it. It is operating under the deeply flawed assumption that science is decided by the preponderance of opinion. But as Michael Crichton said,
Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had. Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right. In science consensus is irrelevant.
Michael Crichton
AI of this type is guaranteed to be unable to pick out the one diamond from among the dross. If it were applied in the past, it would undoubtedly have said that Wegener’s theory of Continental Drift was wrong, simply because it is not investigating the underlying science—it is simply checking the consensus of what other scientists have said.
Einstein would no doubt have been assessed by the CLIMINATOR as being wrong because at the time nobody agreed with him. But all scientific advances come from an individual who breaks with the consensus. And because of that, using AI in this manner means that science will never progress—the CLIMINATOR assumes that the past ideas are all correct, and if you don’t agree, you’re wrong, wrong, wrong.
• I was going to send a note to the Corresponding Author … but the paper doesn’t list one.
• I find it hilarious that their cross-check on their results is the deeply alarmist website Climate Feedback. That’s like asking your barber if you need a haircut … what do you think he’s going to say?
My best to all on an overcast but warm winter day here in the redwood forest, with a tiny triangle of the Pacific visible in the far distance between the hills. Two days ago, an awesome bobcat wandered out of the forest and sat in our meadow for a bit. Ah, the hidden strength in its elegant movements … what a glorious world.
w.
Further Reading: You might also be interested in my post Inside The Acceleration Factory, where I discuss the alarmists’ pernicious habit of tacking the bogus spliced satellite data onto the end of the tide gauge data and using that chimera to claim global acceleration … bad scientists, no cookies.
And regarding the reality of the acceleration and deceleration of the rate of sea level rise, I discuss that in my post The Uneasy Sea.
My More Controversial Thoughts: For those, I have my own blog entitled “Skating Under The Ice: A Journal of Diagonal Parking In A Parallel Universe”, and I’m on Twitter/X as @weschenbach. C’mon down!
Same As Always: When you comment, please quote the exact words you are referring to. Avoids heaps of misunderstandings.
Update (Eric Worrall): The factual unreliability of LLMs (Large Language Models), as used by CLIMINATOR, is a well-known problem. From IBM’s description of LLMs:
… Model performance can also be increased through prompt engineering, prompt-tuning, fine-tuning and other tactics like reinforcement learning with human feedback (RLHF) to remove the biases, hateful speech and factually incorrect answers known as “hallucinations” that are often unwanted byproducts of training on so much unstructured data. This is one of the most important aspects of ensuring enterprise-grade LLMs are ready for use and do not expose organizations to unwanted liability, or cause damage to their reputation. …
Read more: https://www.ibm.com/topics/large-language-models#:~:text=Large%20language%20models%20(LLMs)%20are,a%20wide%20range%20of%20tasks.
This tendency of AI language engines to present incorrect or totally imaginary “facts” has already landed professionals in serious trouble. In June 2023, lawyers in an injury case inadvertently presented fake precedents to a judge, precedents hallucinated by ChatGPT, another LLM engine.
… Steven Schwartz—the lawyer who used ChatGPT and represented Mata before the case moved to a court where he isn’t licensed to practice—signed an affidavit admitting he used the AI chatbot but saying he had no intent to deceive the court and didn’t act in bad faith, arguing that he shouldn’t be sanctioned.
Schwartz later said in a June 8 filing that he was “mortified” upon learning about the false cases, and when he used the tool he “did not understand it was not a search engine, but a generative language-processing tool.” …
Read more: https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/?sh=6ae9363b7c7f
Discover more from Watts Up With That?
Subscribe to get the latest posts sent to your email.
Outright fraud is a huge and growing problem in modern academia. AI is now being used by students to write supposedly original essays and conduct research – all without ever having to leave the bedroom. So it’s hardly surprising that intellectual dishonesty is contaminating the field of climatology and global warming too. This is hardly much more than a cheap advertising stunt. Two centuries ago, this sort of thing was bottled as some miraculous patent medicine. Now we know it’s merely snake oil.
“Intellectual dishonesty Is contaminating the field of climatology and global warming”?
AI is only making it faster and cheaper. See above – the shills will find themselves out of a job.
It seems possible that standard plagiarism software will detect AI compositions.
A “perfect model test” would have an AI produce an essay with one minor variation in the question. How different is the answer? And does the same answer occur repeatedly?
It seems likely, also, that an AI will weight information to reflect the authoritative standing of the source. So it will always present the consensus narrative as the best and most reliable answer.
A platform for stasis. Zero critical thought, zero creativity.
“”Authoritative sources””
An appeal to authority is baked in
“”Authoritative sources””
I caught that also in the CLIMASTERBATOR’s response.
Who programed into it just who is an “authoritative” source?
(Is its hard drive a tree ring?)
CLimate
Attribution
Predictive
TRending
AlgorithmIc
Program
Or
CLAPTRAP
A fact pointed out by tide expert Nils-Axel Mörner. There are globally about 65 long-record tide gauges (you need >60 years to remove the lunar nodal cycle) with sufficiently close dGPS to correct for vertical land motion. They show about 2.2 mm/yr and no acceleration.
Climinator climinated.
Volume of water, let’s say, is constant … volume of Earth rock, magma and core, not counting oceans, let’s say is constant. So really, if the ocean floor, 70% of the planet, moves “up” 1 mm on average, continents must go down by 7/3 ≈ 2.3 mm, raising the apparent sea level. Sure, it’s more complicated than that, but the point is there’s a fairly big elephant in the room.
If a very large volcanic region rose up by, say, 2-3 m over a few decades in, say, the middle of the South Pacific …
… would we even know?
The legal blogger Eugene Volokh (“The Volokh Conspiracy”) has been regularly able to prompt ChatGPT to libel him, going so far as to have it hallucinate legal references to proceedings demonstrating Volokh’s guilt.
GIGO!
Interesting. Volokh and his gang of contributors have posted a number of thought-provoking essays, so ChatGPT’s libeling of Volokh’s blog sounds like something out of Orwell’s 1984. That is reinforcing the party line.
IMHO, it is very important to allow posting of provocative opinions, as it requires the reader to think and to re-evaluate their own view of the world. Even some of the most far-out crazies, e.g. Lyndon LaRouche, often make some very insightful remarks.
Erik, you say:
Indeed, Erik. In fact, that’s one of the big functions of WUWT, not to provide any kind of “truth”, but to expose interesting ideas to public scrutiny. See my post below.
https://wattsupwiththat.com/2020/12/30/a-new-years-look-at-wuwt/
w.
Willis,
Thanks for the kind words. I wholeheartedly agree that WUWT is great at exposing interesting ideas to public scrutiny as a useful adjunct to science and I’m of the opinion that peer reviewed journals are in danger of becoming obsolete.
Don’t get me started on “journalists” – such as one journalist stating that nitrogen is a noxious gas. To be fair, NOx is noxious, but N2 is mostly benign.
As for “truth”, I’m a follower of Kuhn’s definition of science as the ongoing process of getting a better understanding of nature.
I know the movie 1776 took lots of … uh … liberties in its portrayals of the people and events involved, but it did succeed in communicating that adopting The Declaration of Independence was not a “done deal”.
Having said that, one of my favorite quotes from the movie referring to whether or not to allow Independence to be debated,
“Hopkins : Well, in all my years I ain’t never heard, seen nor smelled an issue that was so dangerous it couldn’t be talked about. Hell yeah! I’m for debating anything. Rhode Island says yea!”
Does anyone else follow astrophysics? It is going through multiple controversies and major problems. In that field, contradictory theories based on the same observations are published. Highly credentialed people publish weird theories in respected refereed journals. Actual calculations of 8-sigma probability are questioned with no problem. In other words, it is a reasonably rational and reasonably honest field. What a contrast.
Anyone who was paying attention should have been able to notice that more than a few previously well credentialed and well respected medical people, with many years of research and publications under their belt, were suddenly exposed as kooks and brain deteriorating bigots as soon as they pointed out anything supposedly not in agreement with the “experts” who had the political and social connections to direct the flow of events in the Covid fubar. These crazies were clearly taking large bribes in order to see that millions of people would be killed by figments of imagination.
All of which is to say there is nothing special or unusual going on in cosmology, the process is common, but much less money is at stake in some of the controversies.
Or geology where multiple working hypotheses are common.
Mr. Magnuson: I don’t recall ever hearing somebody claim that they got on the wrong side of “Big Geo.” Or “Big Cosmo.” Must be a money thing.
Mr. Halla,
I think it is worse than you actually state.
GIGO should not result in AI committing libel against “The Volokh Conspiracy”. I think it is MORE likely that somebody has the means to feed the AI deliberate misinformation to train it.
However, I have zero expertise in AI. If I am certainly wrong, I am happy to be corrected.
I think it is a matter of the “teaching” sources. If one has no idea of the controversy over Michael Mann’s “hockey stick”, he can come across as a normal academic researcher, not a vindictive advocate
I think you spelled *sshole wrong.
Yeah I don’t think it has a “d” in it
Periodically, I ask it for the last 10 digits of π. Today’s answer is 1242883556, and it refers me to Wikipedia’s article on calculations of the digits of π. Which reports:
The last 100 decimal digits[1] of the latest 2022 world record computation are:[2]
So it can’t even work out which are the last digits in that block.
Odd that it doesn’t know what an irrational number is…
What is your exact question?
FWIW, I asked Google AI (in Chrome) this question: “what are the last 10 digits of π”. This is what it replied:
[Bold emphasis mine]
So it kind of makes sense, in that one could never print out all of the digits of π. We see that the AI seems to “understand” that π is infinite, so it is indeed correct to say that if you want to know what the last 10 digits are, then you merely have to look at the last 10 digits that were printed out. A ridiculous tautology, but entirely correct.
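The tautology is easy to make concrete: the “last 10 digits” of π are just whatever 10 digits come before wherever you happen to stop. A minimal sketch (the digit string is the well-known expansion of π, nothing more):

```python
# First 30 decimal digits of pi, as a plain string (a well-known constant):
PI_DIGITS = "3" + "141592653589793238462643383279"

def last_10(n):
    """The 'last 10 digits of pi' if you stop after the first n digits."""
    return PI_DIGITS[:n][-10:]

print(last_10(21))  # 8979323846
print(last_10(11))  # 1415926535
```

Change where you stop, and the “last 10 digits” change with it, which is all the AI’s answer amounts to.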
I believe “AI” is a kind of human intelligence, because it is exclusively trained on examples of human intelligence. So it generates a response by transforming the input question into a generated response, harmonized with the training database, using parameters such as attention.
Yes, a weird response, but useful I think, if we can manage the dangerous pitfalls.
AI is such a beautiful invention. Let’s assume you say something bad about Dr. Mann; he will sue you. Now you can have an AI say it, and he is welcome to sue the AI.
Let’s remember this about AI (artificial intelligence) –
the definition of “artificial” is –
1
: not natural or real : made, produced, or done to seem like something natural
So to be relying on the products of such a tool that are, by definition – not real – why would any rational adult vest any scintilla of credence in such products?
AI is a computer program that has a programmer.
My pocket calculator can give me the square root of 4,589 a lot faster than I could figure it out.
The program it runs is just math calculations. Fixed rules, no bias built in. The results can apply in the real world.
With AI, the only “rules” it uses are up to the programmer.
(Sort of like looking up anything related to “Climate Change” or other controversial subjects on Wikipedia. The results will depend on who edited the article last.)
Problem is that the answer depends on the number of decimal places used in the calculations when evaluating arcsin (arccos (arctan (tan (cos (sin (9) ) ) ) ) ). The results from evaluating this expression in degrees mode will differ from those in radians mode, and on a calculator with memory for more than 7 places the results get significantly better with 16 places, better still with 32, then 64, etc. Worse, the algorithms used to calculate arcsin, arccos, arctan, tan, cos, sin, Log, ln, etc. will each be different. Using “canned” values is only a band-aid. I have even gotten different answers using the same base algorithm with different programming languages.
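For what it’s worth, the precision sensitivity described above is easy to demonstrate. Below is a sketch using NumPy, emulating a calculator’s degrees mode by hand; the function chain is the one given in the comment, and the only assumption is that single vs. double precision stands in for calculators with fewer vs. more internal digits:

```python
import numpy as np

def trig_chain(dtype):
    """arcsin(arccos(arctan(tan(cos(sin(9)))))), all angles in degrees,
    evaluated at the chosen floating-point precision."""
    to_rad = dtype(np.pi) / dtype(180.0)
    x = dtype(9.0)
    x = np.sin(x * to_rad)     # sin of 9 degrees
    x = np.cos(x * to_rad)     # each result fed back in as degrees
    x = np.tan(x * to_rad)
    x = np.arctan(x) / to_rad  # inverse functions, converted to degrees
    x = np.arccos(x) / to_rad
    x = np.arcsin(x) / to_rad
    return float(x)

# The exact answer is 9; single precision drifts much further than double:
print(abs(trig_chain(np.float64) - 9.0))
print(abs(trig_chain(np.float32) - 9.0))
```

The arccos step is the killer: its input sits very close to 1, where the function is ill-conditioned, so each lost digit of working precision is amplified many times over.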
Side note.
On the first day of my class in Statics & Dynamics, the professor said that the dean had approved the use of the newly available hand calculators. I went directly to the bookstore and bought an HP-21. More than half of the calculations I made on my first homework assignment were wrong. I then did the problems with my trusty K&E Deci-Lon slide rule, the same one I used for determining “Critical Rod Height” for dozens of reactor startups, and got the answers that were in the book. All students that used a hand calculator had the same problem.
I took my TI-35 (powered by a 9-volt battery) and started with integers only, squared them, and then took the square root of that number. I think it was 7: I squared it and of course the number that appeared was 49, then I took the square root and saw 6.99999999999… (I forget how many decimals it could show.) Rounding problems. But it highlighted the fallacy of relying exclusively on machine-generated answers.
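Modern double-precision floats don’t show that exact 6.999… artifact for perfect squares – square root is correctly rounded in IEEE-754 – but the same class of round-trip rounding error is still easy to trigger:

```python
import math

# A perfect square round-trips exactly in IEEE-754 double precision,
# so Python won't reproduce the TI-35's 6.99999... artifact here:
assert math.sqrt(7 ** 2) == 7.0

# But squaring an irrational square root shows the same kind of error:
x = math.sqrt(2) ** 2
print(x)          # 2.0000000000000004
print(x == 2.0)   # False
```

Same lesson as the TI-35: the machine’s answer is a rounded approximation, not gospel.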
Has artificial rubber not found a valid place in technology?
Artificial Intelligence → BcS
I’ve never understood why it is called “artificial”.
All the fancy words used shouldn’t hide the computer.
Likewise, it is not intelligent.
The results have to be summaries of existing material.
Writers have biases.
So, we get “Biased computer Summaries” – BcS
Central Washington State has dense fog, about 40°F, with a chance of rain or snow – depending on elevation. Deer, but no bobcats or lions in sight.
Could that acronym be shortened to just BS
(Purely in the interests of brevity of course. And accuracy)
It’s called Artificial Intelligence because the term came from science fiction.
Calling modern programs AI sounds smart and sciency and hints at the qualities of Data from Star Trek, but it is not what science fiction portrays.
I sometimes tell my elementary students to simulate a line. And explain that ‘simulate’ means ‘pretend.’ I think the smart ones understand what I want, but there are others….
Personally, I reject the term ‘artificial intelligence’ in favor of ‘simulated intelligence.’
(Great weather in Los Angeles today. Many animals driving on the freeway.)
Computers aren’t intelligent. They only think they are.
Excellent article, Willis 😁 – as usual. Thanks for exposing all the climate alarmist crap, which establishes what AI really represents.
Regarding the acceleration of sea level rise, here’s the link to Dave Burton’s trend table of over 1200 tide gauges listed in the PSMSL. A histogram of acceleration from 91 long-term tide gauges from that table shows that the mode is a very strong 0 mm/yr²; see below:
Nice data set. It looks like it stops around 2015. Is there a more recent compilation? If I use it for something, I would rather not leave it open to any easy dismissal. Thanks in advance.
The PSMSL is up to date, but it’s a lot of work to download. There’s a download page somewhere, but as I recall it wasn’t any easier than opening up the annual data page on individual tide gauges and copying & pasting it into your Excel spreadsheet.
Here’s the link to Dave Burton’s Sea Level info home page
https://www.sealevel.info/
If you download annual data from the PSMSL, create a graph & choose a 2nd-order polynomial trend and select [Display equation on the chart], then multiply the x² coefficient from the equation by 2 to get the acceleration in mm/yr². Unfortunately Excel doesn’t have a formula that does those last two steps for you. You have to type the x² value in manually.
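The x²-coefficient trick is easy to reproduce outside Excel. Here’s a sketch with NumPy on made-up data; the gauge values are entirely hypothetical, with a known acceleration baked in so the recovered number can be checked:

```python
import numpy as np

# Synthetic annual mean sea level (mm), standing in for one PSMSL
# tide-gauge record -- numbers are made up, with a known acceleration:
years = np.arange(1950, 2021)
t = years - years[0]
rate, accel_true = 2.0, 0.02                 # mm/yr and mm/yr^2
msl = 7000 + rate * t + 0.5 * accel_true * t ** 2

# Second-order polynomial fit, like Excel's polynomial trendline;
# twice the x^2 coefficient recovers the acceleration in mm/yr^2:
c2, c1, c0 = np.polyfit(t, msl, 2)
print(round(2 * c2, 4))  # 0.02
```

The factor of 2 comes from the quadratic term of the fit being ½·a·t², so the fitted x² coefficient is half the acceleration.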
I knew a guy that could get it to print those values into a cell in the spreadsheet. I think he used Visual Basic.
I’ve been down the road with MrExcel.com on that question. I gave up. If you find out let me know, stacase hot mail.
“Heretic! Burn the witch!”
— Future CLIMINATOR posts regarding Mr. Eschenbach
Personally, I really enjoy reading your posts and always learn something new. No wonder the “Eye of Sauron” CLIMINATOR has fixed its gaze upon you.
Artificial intelligence sounds like a great idea. Until you notice that it has an IQ of 25 or less.
Basically, it looks for consensus, and regurgitates it.
It is essentially a check that pal review is keeping skeptical papers out of the literature. It’s simply reporting on the available published literature.
so Piltdown Mann keeps skeptical papers out of the journals and AI confirms that all published science is alarmist.
Actually, it’s been measured to have a verbal IQ of over 150.
‘verbal’? As in ‘blather’?
Verbal as in: the LLM was incapable of analysing images.
A bad case of verbal diarrhea ?
Willis, their targeting of your analysis is truly a modern Red Badge of Courage, so keep it up. Thanks.
Willis, this may be of interest.
Voortman, H.G. (2023). Robust validation of trends and cycles in sea level and tidal amplitude in the Dutch North Sea. Journal of Coastal and Hydraulic Structures, 3. https://doi.org/10.59490/jchs.2023.0032
https://journals.open.tudelft.nl/jchs/article/view/7068
works better & links to the pdf.
I disagree with this sentence: “Trying to use Artificial Ignorance to analyze scientific papers is a non-starter.”
Your case is a special one where the AI system is trained on alarmism. The more general claim does not follow. I expect AI systems to be very useful in looking at science. For example in a lot of fields many more papers are published than a human can read but an AI system can look at all of them and tell us what is going on.
Another example is where specific math is used in many fields and advances in it are made here and there that need to be known about more broadly.
Here is an article of mine on the math example:
https://scholarlykitchen.sspnet.org/2011/10/11/my-utopian-vision-for-communication-of-scientific-methods/
Dave, the problem is what I pointed out down further. It has nothing to do with being trained on alarmism.
Any kind of AI would reject both Wegener’s theory of Continental Drift and Einstein’s Theory of Relativity, because they would disagree with every single piece of information it was trained on.
Science progresses because people break with the consensus … but everything the AI knows is based on the consensus.
Yes, they can assist with math … but that is a tiny universe with well-understood rules, which is totally different from looking at the natural world.
Best to you,
w.
WE, two fun facts about Wegener’s Continental drift theory.
Geologists rejected it in part because he was a meteorologist, not a geologist—despite the four separate categories of geological evidence he assembled from 2012 to 2021.
And it wasn’t accepted until the 1970s, when the magnetic pole reversal striping at the mid-Atlantic spreading ridge showed there was a mechanism that eventually became known as plate tectonics.
Of course you meant 1912-1921.
I remember attending a Peninsula Geological Society meeting at Stanford in the 1970s, at which petroleum geologist Dr. Arthur Meyerhoff was in attendance. He was still adamantly opposed to Plate Tectonics and able to articulate his objections to the hypothesis.
Canadian J. Tuzo Wilson played an important role in promoting Plate Tectonics, acting much like Darwin’s Bulldog (Thomas Huxley) had, to advance Wegener’s ideas.
Wegener’s work was in disagreement with that of influential gate keepers. Nothing else about it was relevant during the first half of the 20th century.
It was accepted by many (including me) in the ’60s, as the magnetic reversals were mapped during the International Geophysical Year in the late ’50s.
Sorry Will, but I have no idea what you are saying. The kind of AI applications I am talking about do not reject things. They summarize them, including the debates. For that matter, I recently got ChatGPT to correctly explain how Happer disagrees with alarmism. Nothing was rejected.
The math I refer to is that used to do science. Almost all published science uses math so it is universal. In the article I reference I use Monte Carlo as an example. There is an advance in Monte Carlo method published in a forest management journal that needs to get to all the other fields that use that method, which are legion.
For that matter you know how Google now suggests related and refined searches. That is AI and it works well.
Does AI work for some things? Sure. As you say, I can ask it to get me all the articles on some subject.
But if I ask it which of them are true and which are false, IT CAN’T THINK, so all it has to go on is what humans have said on the subject … and that, as we know, is a very unreliable guide.
As I said before, if you were to train your AI on all previous Newtonian physics, and then you asked it if Einstein was right, I can’t imagine it would say “Yep! All of established physics is wrong!”.
I say that in part because that’s what the scientists of the time said, that Einstein was wrong. So that would be all that the AI would know.
AI works well for some things, like the ones you mentioned.
But for settling contentious scientific debates?
Not likely … yes, it could probably summarize the arguments for both sides. But IT CANNOT THINK!
We know this in part because all of the current AIs make up what in a human would be called “flat-out lies” … and they don’t even notice that they’ve done that.
w.
What do you think thinking actually is?
IMO it can, but it isn’t yet. Well not effectively and not publicly in what we have access to.
Thinking is nothing more than asking a question, coming up with an answer and then questioning whether the answer is a good one. Then go around again and again until you’re “happy” with the answer – whatever that means. AI can do that, at least in principle.
How does AI know to ask the question you speak of?
AI couldn’t come up with F=ma by itself. It has no input as to why mass and acceleration lead to something else.
IT CAN’T THINK.
How did you know to suggest F=ma ? Because the prompt you gave yourself was to think of a left field example of thinking. Why did you do that? Because of the prompt of my post.
Why do you think AI can’t do similar?
That remains to be seen for future AI. Wegener came up with the theory in the context of all the things he knew, not all of them consistent with observations. Once AI has gotten to the point where it can reflect on what it’s been taught, it’s entirely possible it will be able to identify inconsistencies and come up with alternative possibilities.
The problem is, AI is incapable of telling if something is real or concocted.
It just takes what is written … period. No thinking or intelligence involved at all.
The problem is people are incapable of telling if something is real or concocted.
It would seem you are just describing an aggregator looking for pattern in papers not actual science.
Seems like it would be useful for searching terms like, “how many times in 2023 was the Thwaites Glacier mentioned as falling off the planet.”
It’s not science, but a way to see what the climate/insane are up to.
You can spot the trends and factors they have agreed to focus on that year.
You can find trends in what is being allowed to be published, that is allowed past the gatekeepers.
Useful, but not science.
Artificial Intelligence = Artificial Ignorance.
There’s nothing artificial about the ignorance of AI on many subjects.
It’s demonstrably as real as anything ever gets.
(Which is ably demonstrated by Willis’ article here)
I was happy to see my intuitions and prejudices about AI confirmed by this presentation: I feel no need to think about it any further. Large language models are like a close-up magic trick: indisputably impressive and entertaining, but not real and going nowhere soon.
In other words training for the answers you want.
100%
At this point in its ‘life’, AI appears to be a computerized version of activist websites such as DeSmogBlog, which routinely bashes ‘deniers’ without a plausible argument or space for a rebuttal.
https://www.desmog.com/climate-disinformation-database/
Use that page as a guide as to who might be MOSTLY CORRECT about climate. 🙂
And people guided by science, rather than mantra.
Another example of “consensus science” is J Harlen Bretz’s theory of catastrophic floods forming the scablands in eastern Washington. When he proposed that in the 1920s, the established geologists laughed. Bretz led field expeditions all throughout eastern Washington, into British Columbia and around the Columbia River valley all the way out to the Pacific Ocean. He had the data to back up his theories.
Nick Zentner of Central Washington University has been exploring Bretz’s activities.
https://www.geology.cwu.edu/facstaff/nick/gFLOODS/
Always found that area interesting. At first I couldn’t think of what I was looking at; then it occurred to me it was like looking at a stream bottom, just with much bigger features.
and big rocks.
There is a similar area east of Medicine Hat, just over the Saskatchewan border – clearly something created by a sudden massive flood, one of those ice-dammed lakes bursting through and draining suddenly.
Very nice Willis. I can’t see how AI can possibly be more trustworthy than what you find on the internet. When I search the internet I usually have to go in five or six pages to find something that isn’t a shameless repeat of the first article. It is bad.
With you there Bob.
From IBM’s description of LLMs (Large Language Models), as used by the creators of the AI:
But if what is a “hallucination” is defined by those high on CO2 …?
😎
At least you’re numerate with R. I’m a minor variant with economic data.
The UNIPCC in the TAR report commented that they could not predict climate as they were dealing with a mathematically chaotic system.
The worst of this skulduggery is the effect it is having on our young people. Irresponsible “science” can be a crime against humanity.
“The UNIPCC in the TAR report commented that they could not predict climate as they were dealing with a mathematically chaotic system.”
Add in people that will use anything as a lever to power …
No different than the pronouncements of the shamans and tribal chiefs for past 10,000 years or so, and for exactly the same reasons.
All good points. This is not a good use of AI.
All caps will get it banned on Yahoo along with stock symbols etc., unless of course it’s given a special climate exception.
The lawyer pleads ignorance, after getting caught of course.
Many lawyers put off working on their client’s cases until the last moment, just like college students with their end of term papers. Undoubtedly there was simply no time left to verify the AI produced brief, so how could he be at fault?