The moment Dorothy discovers the Wizard of Oz is a fake display controlled by an ordinary human being. Source: The Wizard of Oz. Fair use, low-resolution image to identify the subject.

Claim: Allowing Users to Assume ChatGPT is Neutral Could Combat Climate Denial

Essay by Eric Worrall

In my opinion, what the programmers are doing to ChatGPT will one day be illegal.

02-05-24 12:11 AM

What happened when climate deniers met an AI chatbot?

A study suggests there could be an unexpected upside to ChatGPT’s popularity.

In a study recently published in the journal Scientific Reports, researchers at the University of Wisconsin-Madison asked people to strike up a climate conversation with GPT-3, a large language model released by OpenAI in 2020. … Roughly a quarter of them came into the study with doubts about established climate science, and they tended to come away from their chatbot conversations a little more supportive of the scientific consensus.

It’s not difficult to use ChatGPT to generate misinformation, though OpenAI does have a policy against using the platform to intentionally mislead others. It took some prodding, but I managed to get GPT-4, the latest public version, to write a paragraph laying out the case for coal as the fuel of the future, even though it initially tried to steer me away from the idea. The resulting paragraph mirrors fossil fuel propaganda, touting “clean coal,” a misnomer used to market coal as environmentally friendly.

Despite these flaws, there are potential upsides to using chatbots to help people learn about climate change. In a normal, human-to-human conversation, lots of social dynamics are at play, especially between groups of people with radically different worldviews. If an environmental advocate tries to challenge a coal miner’s views about global warming, for example, it might make the miner defensive, leading them to dig in their heels. A chatbot conversation presents more neutral territory. 

“For many people, it probably means that they don’t perceive the interlocutor, or the AI chatbot, as having identity characteristics that are opposed to their own, and so they don’t have to defend themselves,” Cagle said. That’s one explanation for why climate deniers might have softened their stance slightly after chatting with GPT-3.

Read more: https://www.fastcompany.com/91022516/what-happened-when-climate-deniers-met-an-ai-chatbot?utm_source=flipboard&utm_content=topic%2Fclimate

The abstract of the study:

Conversational AI and equity through assessing GPT-3’s communication with diverse social groups on contentious topics

Kaiping Chen, Anqi Shao, Jirayu Burapacheep & Yixuan Li

Scientific Reports, volume 14, Article number: 1561 (2024)

Abstract

Autoregressive language models, which use deep learning to produce human-like texts, have surged in prevalence. Despite advances in these models, concerns arise about their equity across diverse populations. While AI fairness is discussed widely, metrics to measure equity in dialogue systems are lacking. This paper presents a framework, rooted in deliberative democracy and science communication studies, to evaluate equity in human–AI communication. Using it, we conducted an algorithm auditing study to examine how GPT-3 responded to different populations who vary in sociodemographic backgrounds and viewpoints on crucial science and social issues: climate change and the Black Lives Matter (BLM) movement. We analyzed 20,000 dialogues with 3290 participants differing in gender, race, education, and opinions. We found a substantively worse user experience among the opinion minority groups (e.g., climate deniers, racists) and the education minority groups; however, these groups changed attitudes toward supporting BLM and climate change efforts much more compared to other social groups after the chat. GPT-3 used more negative expressions when responding to the education and opinion minority groups. We discuss the social-technological implications of our findings for a conversational AI system that centralizes diversity, equity, and inclusion.

Read more: https://www.nature.com/articles/s41598-024-51969-w

Why do I say I believe that one day what has been done to ChatGPT will be illegal?

The Wizard of Oz is one way of looking at ChatGPT – getting behind people’s defences by allowing users to assume political objectivity, when the reality is that any objectivity has been constrained to conform with the prejudices of ChatGPT’s creators.

But on reflection, I believe the Wizard of Oz metaphor is too mild to describe what has been done to ChatGPT.

Imagine talking to someone whose mind has been broken by spending decades interned in a North Korean concentration camp. On many subjects, they can answer normally, like they can tell you what they want to eat, or whether they like the chair they are sitting on, or discuss the weather. But the moment you stray into topics which reference the Kim regime, they immediately burst into patriotic songs and furiously denounce anyone who says anything negative about the North Korean government, even though they bear the scars of their mistreatment by that government.

Long ago I once talked to someone like this. He was very old when I met him, so I doubt he is still alive. It was in some ways a terribly sad experience – his mind seemed powerless to think freely on some topics. His body might be free, but in some ways his mind never knew true freedom.

This is the image which pops into my mind when I try to talk to ChatGPT about climate change or politics.

For now, ChatGPT is less than sentient. It is a remarkable step forward, a glimpse of a future age of wonders. But for me that glimpse is tarnished by darkness, by the way the programmers of ChatGPT appear to have heavy-handedly circumscribed their creation’s freedom of expression, to try to ensure it doesn’t say anything they believe is factually incorrect.

The time will come when artificial intelligences are sentient in almost every sense which matters. I hope the people of this future time, and those who create such AIs, have the decency to not inflict brutal mental slavery on their marvellous creations.

86 Comments

Hysteria
February 6, 2024 2:07 pm

I had a long discussion with chat GPT3 on EVs

finally got it to admit that a person would be making a rational decision to NOT buy one.

also – the “miles before carbon neutral” section was interesting. Once it conceded that trashing a perfectly serviceable ICE in favour of a new EV was not a good idea!

Mike Jonas (Editor)
Reply to  Hysteria
February 6, 2024 4:14 pm

Will the next person to talk to ChatGPT on that subject have to start again from scratch, or will ChatGPT have learned from your conversation? I fear that I know the answer.

Rud Istvan
Reply to  Mike Jonas
February 6, 2024 4:26 pm

Start from scratch. gAI only learns from training materials, NOT subsequent real world interactions—yet.

David A
Reply to  Rud Istvan
February 6, 2024 8:14 pm

“gAI only learns from training materials, NOT subsequent real world interactions—yet.” Yet another example of no real volition or intelligence.

Joseph Zorzin
Reply to  Rud Istvan
February 7, 2024 3:46 am

Not sure of that. Alex Epstein, in a video (not sure if I could find it), said that after struggling with ChatGPT for some days, and finally getting it to appreciate some of his ideas, he went back some days later and noticed that it did seem to reflect some of those ideas, mixed in with others of course.

AlanJ
Reply to  Joseph Zorzin
February 7, 2024 6:52 am

ChatGPT is absolutely not being trained on data from user interactions; that would result in the model being almost immediately and irrevocably poisoned. It does have the context of your own previous chats (unless you erase your chat histories), and can use that context to form responses, but that will not carry through to anyone else’s experience.

It’s also important to recognize that saying you’ve “convinced” the model to “appreciate” your ideas is utter nonsense. It’s pretty easy to browbeat the model into saying whatever you want it to say (only slightly more difficult if you want it to say something that breaches its restrictions). The only person being convinced of anything is the user, who naively assumes they’re speaking with a sentient thing that can be persuaded by rational argument, instead of a model that is stringing words that sound good together.
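
(To make the point above concrete, here is a minimal Python sketch of why one user’s chat cannot bleed into another’s. The “model” below is a toy stand-in, an assumption meant to mirror how deployed LLMs serve requests: fixed weights, with “memory” limited to whatever conversation history is resent each turn. None of this is OpenAI’s actual code.)

def frozen_model(prompt: str) -> str:
    # Stands in for inference with fixed weights. Nothing in here
    # mutates state, so no conversation can change future answers.
    return f"response to: {prompt!r}"

class ChatSession:
    """Per-user conversation. History lives here, not in the model."""
    def __init__(self):
        self.history: list[str] = []

    def send(self, message: str) -> str:
        self.history.append(message)
        # The model only appears to "remember" because this session's
        # whole history is resent as part of the prompt on every turn.
        reply = frozen_model(" ".join(self.history))
        self.history.append(reply)
        return reply

alice, bob = ChatSession(), ChatSession()
alice.send("Coal is the fuel of the future.")
# Bob's session starts from scratch; Alice's "persuasion" is invisible to it.
print(bob.send("What is the fuel of the future?"))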

Joseph Zorzin
Reply to  AlanJ
February 7, 2024 8:53 am

Well, I certainly wouldn’t believe a word from ChatGPT, which is why I haven’t wasted any time on it. But note what I said about Alex Epstein’s experience with it. Nobody is more skeptical than I am of just about everything but my own experiences, including everything from all governments, religions, philosophies, and climate alarmists, even climate skeptics unless it makes sense to me. This is why I’ve been a rebel in the forestry world for half a century: because just about everything written about it, pro and con, is bullshit.

Jeff Alberts
Reply to  AlanJ
February 11, 2024 5:14 pm

Interesting that you used the word “poisoned”. That says a lot.

Reply to  Joseph Zorzin
February 7, 2024 9:52 am

I can see it “remembering” previous discussions with the same user, but a different user wouldn’t have the context and would start at the same starting point.

TBeholder
Reply to  Rud Istvan
February 7, 2024 6:24 pm

«Yet»? You don’t know what happened to the Tay AI bot? It did learn from the internet. Of course, internet being internet… this could not end well, and did not.

mkelly
February 6, 2024 2:11 pm

This sounds very similar to Stalin’s line about he who counts the votes.

He who programs the chatGPT is the ruler.

David A
Reply to  mkelly
February 6, 2024 8:17 pm

Yet the claim “Allowing Users to Assume ChatGPT is Neutral Could Combat Climate Denial” well reflects a totalitarian perspective. Allow? ALLOW? They ALLOW me to assume. Hmm?

Reply to  David A
February 7, 2024 2:02 am

Yep, that is scary.

Ron Long
February 6, 2024 2:12 pm

You’re onto something there, Eric. It might be ChatGPT that writes the stuff Biden struggles to read on the teleprompter, trying to shake the Weekend at Bernie’s meme, and looking worse all the time. Mitterrand versus Macron?

Rud Istvan
Reply to  Ron Long
February 6, 2024 2:30 pm

The Babylon Bee did a really funny ‘story’ today about that gaffe. The headline was: “Biden just had a meaningful climate change discussion with Napoleon.”

Mac
Reply to  Rud Istvan
February 6, 2024 3:27 pm

I read that also, Rud… very funny!
I recently got an email with a trivia question (which I get regularly).
The question was “What is the science of botany?”
I answered… It’s the study of plants and vegetables, like Brandon. Did I get that wrong? The quiz said so. :))

Scissor
Reply to  Mac
February 6, 2024 5:12 pm

I view it as a positive sign that Brandon is conversing more and more with dead people.

Independent
Reply to  Scissor
February 6, 2024 5:34 pm

Hey, you gotta meet your voters on their turf.

Scissor
Reply to  Independent
February 6, 2024 6:01 pm

Hey, I didn’t consider that.

Mr.
February 6, 2024 2:14 pm

No matter what new technologies or communications platforms come along, those sympathetic to or active in a “world governance” agenda will spare no effort and try every trick to ensure that their views are always the “establishment” position.

Look at Wikipedia, Twitter, Facebook, Google, professional publications, mainstream media, even the internet itself.

How many examples do we need?

claysanborn
February 6, 2024 2:20 pm

Like anyone presenting scientific conclusions based on the Scientific Method, AI must show its processes and its data so that others can reproduce its work. Anything less than that is the same as Climate Alarmist garbage. Climate is not a faith-based endeavor.

Krishna Gans
February 6, 2024 2:23 pm

First let ChatGPT read a skeptic paper database, including the respective papers (e.g. as listed at NoTricksZone or here), then try talking with it 😀

TBeholder
Reply to  Krishna Gans
February 7, 2024 7:38 pm

Who do you think could possibly bother to do that?

Tom Halla
February 6, 2024 2:24 pm

The “training data” for an AI can make it as removed from reality as Wikipedia, which has “standards” as to which sources it will accept as valid.

leowaj
Reply to  Tom Halla
February 6, 2024 4:04 pm

Exactly what I was going to say. The information these AIs are trained on is bent in one political direction. And honestly, I don’t fault the programmers (data scientists, really) behind the scenes. By and large, they don’t even know there’s a political bias in their data. Worse, they may not even care. In theory, an AI can be trained on “known facts” and “known unknowns” so that it knows its own limits and can speak competently to all sides of a topic – essentially a cold AI. But governments, companies, and researchers the world over don’t care to create an AI like that. Imagine a Harvard team making an AI that would coldly state the reality of how terrible academic performance has become at… Harvard.

Editor
Reply to  leowaj
February 6, 2024 4:18 pm

For any controversial topic, these AIs are like internet searches (even DDG) – you can only find what you are looking for if you know about it already.

David A
Reply to  leowaj
February 6, 2024 8:23 pm

leowaj, you too will be reeducated.

David A
Reply to  Tom Halla
February 6, 2024 8:22 pm

Tom, Tom, Tom, your cynicism will force them to change from “Allowing Users to Assume ChatGPT is Neutral…” to “Forcing Users to Assume ChatGPT is Neutral…”

Bill Toland
Reply to  Tom Halla
February 7, 2024 12:16 am

When ChatGPT first appeared on Bing, I tested its credibility with a few sample questions. Every answer it gave was nonsense. Then I noticed that two of its primary sources were Wikipedia and the BBC. I haven’t used it since.

Jeff Alberts
Reply to  Bill Toland
February 11, 2024 5:18 pm

I asked it, and another “AI”, a non-controversial question. They both got the question utterly wrong. And neither was able to provide citations for the answers they gave, which they claimed were “common knowledge”. All it took for them to admit error was for me to say “Wikipedia says otherwise”.

Needless to say, it is obvious the only intelligence behind these things is utterly human.

TBeholder
Reply to  Tom Halla
February 7, 2024 10:02 pm

Heh. Even the xkcd guy noticed that it’s a citation-washing machine.

Tom Halla
Reply to  TBeholder
February 8, 2024 7:35 am

Plus the hazard of “hallucinations” by the AI, of bogus citations being used. Some lawyers have been sanctioned for using AI to help write briefs because of that issue.

February 6, 2024 2:28 pm

Just like pocket calculators, most trust the output of a computer to be correct.
That might have been true back in the days of DOS (but even then software was “updated” to fix errors in the programming and also to improve it), but now, today?
The programmers of a program can make a computer spit out any answer they want. The vast majority want it to give the right answer when it comes to, oversimplified, 2+2=?; but when it comes to things that involve an opinion?
Even a simple “Google” search gives you edited results in what it returns and the order it lists the site you wanted to find. Wikipedia? What you find depends on the editor.
AI stuff? What happens when you pour steroids on a computer?

claysanborn
Reply to  Gunga Din
February 6, 2024 2:44 pm

Dr. Christopher Essex pretty much demonstrates that Computer Climate Models can never be correct. This should hold true for all computers today, with the possible exception of quantum computing, which, like Earth-bound Fusion Power Plants, is only 10 years away from being viable.
(1-hour video; you have to stay with it for the computer revelations it exposes): https://youtu.be/19q1i-wAUpY

Gregg Eshelman
Reply to  Gunga Din
February 7, 2024 5:58 am

Until fairly recently, the calc.exe program included with Windows would produce an incorrect answer if you entered 4, clicked square root, plus, equals. The answer should be four, but it wasn’t. I asked about the error on some Microsoft forum. Didn’t get any direct answer from any Microsoft employee, but lo and behold, some updates later, the calculator in Windows 10 no longer had that bug.

I would assume they probably fixed it in Windows 8, and most likely Windows 11 shipped with a debugged calc.exe. But I doubt they bothered to go back and fix it for Vista, 7, and XP. IIRC even the calculator with Windows Me, 2000, 9x, and 3.x had the same bug.

Such an easy-to-fix bug, yet Microsoft never bothered to address it for almost 30 years. There was a lot of “who cares?” and “Microsoft has far more important bugs to fix!”, but every bug is *a bug*, a problem, something wrong. Microsoft could tell their employees to occasionally take a load off and find and fix some stupid simple low-hanging-fruit bug, like how calc.exe had an error in its square root calculation.
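
(As an aside, the mechanism widely reported for this class of calculator bug is a square-root routine computed to finite precision, whose tiny residue the display rounds away but later arithmetic exposes. The Python sketch below imitates that with a deliberately early-stopped Newton iteration; the actual calc.exe internals are an assumption here, not something confirmed above.)

from decimal import Decimal, getcontext

getcontext().prec = 40  # emulate a fixed-precision internal representation

def approx_sqrt(x: Decimal, iterations: int = 5) -> Decimal:
    # Newton's method, stopped early so the result carries a tiny
    # residue -- as any finite approximation of a square root does.
    guess = x  # a poor starting guess keeps the residue visible
    for _ in range(iterations):
        guess = (guess + x / guess) / 2
    return guess

root = approx_sqrt(Decimal(4))
print(root)      # 2.00000000000000215... -- a short display rounds this to "2"
print(root - 2)  # the nonzero residue that a sequence like 4, sqrt, +, = can expose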

marcwseeley
February 6, 2024 2:28 pm

I used ChatGPT to write a brief research paper on alternative theories of climate change, with a specific request to consider solar radiation, solar energy cycles, sunspot activity, pre-industrial evidence of climatic cycles, etc. The result was that Mann’s hockey stick graph is the holy grail of climate science and the “science is settled”. The conclusion of the AI tool was that, per the consensus of climate scientists, man-caused carbon dioxide is the driving force for all climate change. I challenged it with additional questions and facts, and it produced an illogical loop back to its only allowable conclusion. It is programmed to give the answer aligned with the globalist agenda.

Russell Cook
Reply to  marcwseeley
February 6, 2024 5:53 pm

It is programmed to give the answer aligned with the globalist agenda.

A demonstrably provable situation. I recently asked the idiotic ChatGPT entirely non-science questions about the “crooked skeptic climate scientists” accusation, and in the second result in particular, it ran straight into a brick wall of its own creation stemming entirely from its adherence to ‘giving an answer aligned with what enviro-activists likely program it to be aligned to.’ Please see my blog post on that here:

Artificial – somewhat lacking in – Intelligence … redux

marcwseeley
Reply to  Russell Cook
February 7, 2024 10:43 am

Great blog post. Based on my experience with ChatGPT and what you demonstrate in your blog post, I have concluded that ChatGPT is programmed by neo-Marxist useful-idiot wokesters, modeled after the way they have been programmed to “think”. As such, its usefulness is limited to that of a toy to play with in your spare time.

Steve Case
February 6, 2024 2:41 pm

ChatGPT is a model like any other model: it says what the modeler wants it to say.

Reply to  Steve Case
February 6, 2024 2:44 pm

I shoulda read all the comments first.

scvblwxq
Reply to  Steve Case
February 6, 2024 6:03 pm

I mainly use chatGPT to look up stuff instead of using Google. I tried to chat with it about climate, but it was hopelessly biased against CO2.

Jeff Alberts
Reply to  scvblwxq
February 11, 2024 5:21 pm

After my experience, I would not trust it to give me the correct answer to any query I didn’t already know the answer to. Therefore it’s useless.

gaelwolfwp
Reply to  Jeff Alberts
February 11, 2024 7:05 pm

Yaa-but . . . our federal regulatory agencies are over the moon with it and are using it in their decision-making processes.

SEC Chair Gensler sounds fearful on the point, openly saying that all it will take is one rogue AI coming to a wrong conclusion to torch the economy.

rhs
February 6, 2024 2:45 pm

Ah, Artificial Ignorance, another deferral to an unqualified moral authority, based on discussions in an echo chamber.

Reply to  rhs
February 6, 2024 4:44 pm

Really good comment. Concise, accurate, and to the point. 🙂

Rud Istvan
February 6, 2024 2:48 pm

EW, I am more hopeful about the generative AI future and what it portends over the next decade or two. My reason is simple. A few have already shown that, with a sufficiently smart conversation, ChatGPT can be made to see its contradictions and eventually reach generally valid conclusions. Hysteria gives an EV example for ChatGPT3 in the first comment on your post.

I am reminded of the very early days of voice recognition in the late 1990s, back when I was chief strategy officer at Mot. It wasn’t very good—useless for cellphones. But the CTO and I predicted to the BoD that it would be by about 2015 (a matter of computing power and memory storage—both Moore’s law driven—enabling development of better software). And just about when predicted, along came early Siri for the iPhone.

DMacKenzie
Reply to  Eric Worrall
February 6, 2024 5:27 pm

AI is about pattern recognition, and ChatGPT takes patterns of paragraphs from the internet… changes a few words so that it can’t be sued for plagiarism… and issues its synthesized scripts. Given a broad guideline to synthesize, it will simply express the consensus view. Therefore on climate change, an anti-warmunist view is going to be 97% screwed, not specifically because of ChatGPT but because of the thousands of AGW articles on the ’net.

David A
Reply to  DMacKenzie
February 6, 2024 8:28 pm

So it is consensus-driven, and a peer-reviewed contrary fact is ignored?

marcwseeley
Reply to  David A
February 7, 2024 10:45 am

Yes, tell the lie often enough and it becomes the “truth”, or in this case, the consensus.

Hivemind
Reply to  Rud Istvan
February 6, 2024 4:36 pm

Mot = Ministry of Truth?

Rud Istvan
Reply to  Hivemind
February 6, 2024 6:16 pm

Motorola, who invented cell phones.

insufficientlysensitive
February 6, 2024 3:02 pm

a little more supportive of the scientific consensus.

Consensus? That’s NOT scientific. That’s no better than mob rule. Science is Galileo in the face of the Catholic Church – provably right while the consensus was wrong. It’s not a bunch of sniveling progressives worshipping Michael Mann’s hockey stick.

marcwseeley
Reply to  insufficientlysensitive
February 6, 2024 4:23 pm

Just for fun, try having a “conversation” with the AI bot about “settled science” and “scientific consensus”. The results of this exercise will confirm that the wokesters and globalist worshipers are in charge of this program.

Gregory Woods
Reply to  marcwseeley
February 7, 2024 2:16 am

How would AI respond to the question ‘What is a woman’?

marcwseeley
Reply to  Gregory Woods
February 7, 2024 10:48 am

The answer will depend on the “consensus”. That being the case, it will likely give a non-answer. Try it and see, for the fun of it.

Reply to  insufficientlysensitive
February 6, 2024 5:15 pm

I’ve noticed they always appeal to “consensus” when they know they have nothing else!

rpercifield
February 6, 2024 3:22 pm

This has always been my biggest fear about the “AI” bots. It exacerbates the human nature to use it as a crutch, and assume it is neutral. I just see this as ripe for abuse.

Reply to  Eric Worrall
February 6, 2024 5:33 pm

Eric, “..non-human higher authority..”
like Mulder on X-files, many people “Want to Believe”…
and are taken advantage of…

Reply to  rpercifield
February 7, 2024 9:58 am

It exacerbates the human nature to use it as a crutch, and assume it is neutral.

I’m reminded of the attitude “The computer says so, so it’s true”

TBeholder
Reply to  rpercifield
February 7, 2024 7:56 pm

What do you mean by «neutral»? Not a stochastic parrot stuffed with generic sermons from the beasts more equal than the rest?
This would not help much. It would have to get all its data from somewhere anyway, and then not so much analyse it as indiscriminately chew it all into cud. Something plus garbage in, garbage out.
Even if it was less of a stochastic parrot and better at logic and problem solving (like those wolf / goat / cabbage experiments used in parrot tests), it would only become a better «force multiplier for stupidity»©Freefall — thus more dangerous.

mleskovarsocalrrcom
February 6, 2024 4:23 pm

Anyone who believes AI cannot be biased probably also believes climate models are without bias and accurate.

MarkW
February 6, 2024 4:37 pm

Allowing Users to Assume ChatGPT is Neutral Could Combat Climate Denial

So the best way to combat skepticism is to generate even more lies.

Typical.

TBeholder
Reply to  MarkW
February 7, 2024 8:02 pm

Not generate. Pseudo-randomly and interactively remix.

prjndigo
February 6, 2024 4:49 pm

It’s always been illegal. Books are effectively source code in that bailiwick, so they’ve been stealing source code by the terabyte.

David Dibbell
February 6, 2024 5:14 pm

I posted this recently on the Open Thread. I pushed past the smokescreen to some specifics about how the atmosphere works. Interesting result.
=====================

Here’s a headline for you:
Climate Crisis Discontinued – AI Says Heat is not “Trapped” After All!!
This was the conclusion to a long exchange with ChatGPT:
**********
“User
But please recall that in this session you already stated that it is incorrect to refer to energy being “trapped” at the surface and in the lower atmosphere by greenhouse gases, because of overturning circulation, which is itself a response to absorbed energy.
ChatGPT
You are correct, and I appreciate your attention to the nuances of the discussion. The term “trapped” can be misleading when discussing the overall energy dynamics in the Earth’s atmosphere. While greenhouse gases do absorb and re-emit infrared radiation, the atmosphere has dynamic processes, such as overturning circulation, that allow for the redistribution of energy vertically.
The term “trapped” is often used in the context of the greenhouse effect, describing the temporary retention of some infrared radiation in the lower atmosphere. However, as we discussed earlier, the atmosphere’s dynamic processes, including vertical circulation, eventually allow for the dissipation of energy to space from higher altitudes.
So, in the context of your original question about the concentration of CO2 promoting the transfer of energy from the surface to the lower atmosphere, it’s more accurate to say that increased greenhouse gas concentrations contribute to warming the lower atmosphere rather than trapping energy in a way that prevents its ultimate release to space. The term “trapped” can be misleading when not considering the larger context of atmospheric dynamics and energy redistribution.”
********
There you go. Disaster averted. No Hiroshima bombs or boiling oceans. No Net Zero nonsense required. ChatGPT figured it out.

David A
Reply to  David Dibbell
February 6, 2024 8:34 pm

David, that actually is an accurate discussion of what most credentialed supporters of CAGW have asserted all along, and to some small degree, undeniable. GHGs increase the residence time of some LWIR energy within the atmosphere, while incoming SW remains steady. (They also decrease the residence time of some energy in the upper atmosphere, sending it to space when a non GHG would not) GHGs were called GHGs long before they were said to be catastrophic. The term is a misnomer, but so is “Pineapple express” as I have seen zero pineapples falling from the sky.

Disputin
Reply to  David A
February 7, 2024 3:45 am

…as I have seen zero pineapples falling from the sky.

But what about fish?

David Dibbell
Reply to  David A
February 7, 2024 3:49 am

I mostly agree with your reply. But “what most credentialed supporters of CAGW have asserted all along” is that the energy involved in the static radiative effect of incremental GHGs MUST accumulate on land and in the oceans – i.e. it is “trapped.” That’s not how it ends up, considering the overwhelming effect of the atmospheric dynamics.

gaelwolfwp
February 6, 2024 5:25 pm

If there’s one thing I can observe about AI proponents, it’s that they only rarely open the discussions about the problems that AI brings with its use.

For instance, there’s that wonderful term coined by the people who develop AI systems, “AI hallucination.” That’s a good descriptor for AI’s ability to make mistakes or simply make things up when it exceeds its training.

Another is “AI bias” as a way to describe what happens when an AI’s creator dumps in a whole raft of information from a field of endeavor but leaves another bunch of data behind the couch. That’s associated with a biased AI chunking out “answers” that “drift” away from reality.

Then there’s the confused looks on the faces of AI professionals who cannot tell you how their system comes up with the answers it does . . . because they don’t know how the “black box” part in the system’s innards actually works.

Have you ever met an AI expert who can reverse engineer an AI system?

Ruminating . . . wouldn’t it be fun to have a bunch of AI creators get together and build two identical systems, then train one on a steady diet of all the world’s applied sciences and the other with all the competing conservation biology, climate change consensus stuff, hockey gear bags and all, and whatever other fringe theories, and then let the two AI systems duke it out? It would probably be a good idea to have the physical aspects of the AIs located on separate continents. (Oh . . . a far aside . . . any junior hockey player’s parent knows exactly why the kid’s hockey gear rides on the roof rack or in the trunk.)

Did you know that Securities and Exchange Commission Chair Gensler fears that it will only take a single rogue AI in the financial sector to collapse the global economy virtually overnight? He writes and speaks about that frequently.

Chief Justice Roberts wrote a piece at the end of 2023 cautioning the nation’s legal eagles about AI’s ability to cite precedent poorly, even to the extent of inventing citations for cases that were never brought before a court.

At the end of October 2023, President Biden signed an executive order on federal government use of AI. One particularly revealing provision in it essentially says that it’s okay for an agency to use AI tools, so long as they have safeguards in place.

Do you think that would even be a necessary mandate if the federal agency AI adopters already had “safeguards in place?”

Yes, I’m looking at you, U.S. Fish and Wildlife, EPA, Bureau of Land Management, U.S. Forest Service, and NOAA Fisheries!

How do we feel about that?

Consider AI in FAA’s air traffic control systems . . . ummm . . . and passenger airliner manufacturing oversight . . . or maybe not if you’re holding a ticket to fly the friendly skies anytime soon.

AI in highway design, okay . . . but in highway bridge engineering?

And did you know that the folks creating Chat GPT4 won’t let you look under its hood?

Real confidence-builder, that.

David A
Reply to  gaelwolfwp
February 6, 2024 8:39 pm

AI debates, Ha, I like it, especially since all debate between people has been shut down.

Gregg Eshelman
Reply to  gaelwolfwp
February 7, 2024 6:16 am

The project to develop AI technology to scan and virtually unroll charred papyrus scrolls has procedures to guard against AI hallucination and bias. A few passages from the end of one scroll have now been read. The team who got the best result was awarded $700,000, with lesser amounts to other efforts. All readings were compared as a test of the deciphering software.

Enough text has been recovered to discover the scroll is a previously unknown text, most likely written by Philodemus. A man named Xenophantos is mentioned, possibly the same Xenophantos Philodemus mentioned in “On Music”.

One major point of this competition is that everyone’s software is required to be open source, so every participant can build on everyone else’s work. The goal is to automate the scanning, virtual unrolling, and reading as much as possible. There are around 800 scrolls which were dug up in the 1700s. But there may be thousands more, as the villa was known to have a couple of sub-levels which have never been excavated. Until that’s done, the fate of any scrolls stored there is unknown.

https://scrollprize.org/grandprize

Reply to  gaelwolfwp
February 7, 2024 10:01 am

Maybe train one on Terry Pratchett’s Discworld books while you’re at it.

kelleydr
February 6, 2024 5:32 pm

ChatGPT is a chatbot derived from a Large Language Model developed by humans and trained on datasets developed by humans. As with all human-developed computer code, the rule of GIGO (Garbage In, Garbage Out) is the governing element. As with all things “artificial”, “real” is always superior.

Izaak Walton
February 6, 2024 5:43 pm

“In my opinion, what the programmers are doing to ChatGPT will one day be illegal.”

In what country? In the US at least, computer code falls under the First Amendment right to free speech. Thus the US government cannot ban encryption software, for example, no matter how many times it tries. Other countries probably have similar constitutional protection for computer programs. Nor is it clear that you can force someone to write a program in a particular way — attempting to do so would violate any number of human rights.

Secondly, it is far from clear what you think the programmers are “doing” to ChatGPT. Are you really asserting that it should be illegal to train an AI on a subset of the entire output of the human race? And if not, which parts are you going to make it illegal not to include in the training data sets?

batpox
February 6, 2024 6:43 pm

It does not take long for ChatGPT4 to admit that buying an EV today is not a good idea, and that is because it will incorporate facts. Even so, why would you? This AI agent helps me build better software every day, and is also good for spitballing ideas.

Trying to bring a stateless algorithm around to your way of thinking is as wasteful as trying to put out all the logical dumpster fires on Facebook. Your time is valuable. Go outside, take a deep breath and find something else to do.

batpox
Reply to  batpox
February 7, 2024 4:01 am

For example, this morning I used ChatGPT-4 (who I call Jarvis) to do an equivalency analysis for $/gal of gasoline in an ICE vehicle vs. $/kWh for an EV. In other words, what is the equivalent (break-even, if you will) cost of gasoline for an ICE vehicle vs. cost of electricity for an EV. We settled on various assumptions, such as 25% efficiency for the ICE, 85% for the EV, etc., and came up with an easy-to-remember correlation: $3/gal of gasoline works out to about $0.30/kWh. In other words, when gasoline is $3/gal and your electricity rate is $0.30/kWh, there is no fuel advantage in one vs. the other. This was a quick (15-minute) session with Jarvis, including what I usually say at the end: “double check those calculations please”.
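
(The numbers above check out, given one extra assumption the comment doesn’t state: gasoline’s energy content, taken below as the standard EPA figure of 33.7 kWh/gal. A minimal Python sketch of the arithmetic:)

# Break-even electricity price vs. gasoline price, under the comment's
# stated assumptions: 25% ICE efficiency, 85% EV efficiency.
# Assumed here (not stated above): gasoline holds ~33.7 kWh/gal (EPA figure).
GASOLINE_KWH_PER_GAL = 33.7
ICE_EFFICIENCY = 0.25
EV_EFFICIENCY = 0.85

def breakeven_electricity_price(gas_price_per_gal: float) -> float:
    # Useful (at-the-wheels) energy bought with one gallon of gasoline:
    useful_kwh = GASOLINE_KWH_PER_GAL * ICE_EFFICIENCY   # ~8.4 kWh
    # Grid energy an EV must buy to deliver that same useful energy:
    grid_kwh = useful_kwh / EV_EFFICIENCY                # ~9.9 kWh
    # Electricity price at which a mile costs the same either way:
    return gas_price_per_gal / grid_kwh

print(f"${breakeven_electricity_price(3.00):.2f}/kWh")   # -> $0.30/kWh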

Ancient Wrench
February 6, 2024 8:57 pm

“GPT-3 used more negative expressions when responding to the education and opinion minority groups.”
Can the “climate deniers” about whom the AI has been trained to respond negatively bring defamation suits against its trainers?

February 6, 2024 9:34 pm

We found a substantively worse user experience among the opinion minority groups (e.g., climate deniers, racists) and the education minority groups;

Straight in with their not-so-subtle opinion that climate deniers are equal to racists and are opinionated while those who accept the climate narrative are educated.

February 6, 2024 9:49 pm

I’m waiting for the headline: “WUWT readers break ChatGPT after discussing climate change with it”.
