by Joe Nalven
I recently published an article about how to incorporate an AI chatbot into a college course, focusing on lying, persuasion and self-reflection. I went further and anticipated a comprehensive White House plan to counter antisemitism, using a chatbot to develop my own version of what should be in that plan. Now I am off on another adventure: using a chatbot to understand the ins and outs of climate change impacts. And I need your help. You can see my inquiry as similar to the long struggle to make computers, other electronic devices and software more user friendly, such as the improvements brought by GUIs (graphical user interfaces). Perhaps AUIs (algorithm user interfaces) would be similarly helpful, especially if users can insert their own parameters (or “biases”) through a preset language formula.
There are many articles claiming that AI language models are not to be feared since they are just a tool. Others claim that more is at stake. I would suggest that in both cases the question is whether humans actually control the input/output process. Part of such control requires knowledge of the part of the process under consideration. As someone who is naïve about coding and how the AI black box works, I focus on the observable world of language. That is what I have access to in framing a request and receiving a reply. So, in that small part of the AI universe – the one that has received substantial attention in the past several months – I would like to assert some control over the conversation with the AI “mindset.” Wearing my hat as a digital artist, I learned digital editing with a host of algorithms, including filters, presets and other editing tools, to compose my art. My “control” meant going beyond just pushing a button. Now, with a language model, I am trying to discover whether I have any similar “control” over this domain and, if so, what its limits and results are.
As a human who thinks and believes that one can have some control over interactions with a chatbot – more than the illusion of control – I decided to experiment with creating my own algorithm to insert into the conversation: a user-defined algorithm that would add my own parameters to the chatbot’s output. The first question is whether that is really possible. When I approached ChatGPT, I received stonewalling and deflection and, ultimately: “As an AI language model, I can’t directly assist in creating or fine-tuning algorithms outside the scope of the provided capabilities. My purpose is to provide information and assist with general inquiries to the best of my knowledge and abilities.” and “As an AI language model, I can provide information based on the writings and perspectives of the scientists you mentioned. However, it’s important to note that I don’t have direct access to specific writings or the ability to modify the underlying training data.” So there are bumps in the road that might defeat my objective, short of jailbreaking out of the program’s guardrails.
By comparison, Google’s Bard is far more receptive to allowing me some control over how it frames its reply. I was frustrated with its initial response about understanding climate change (far too much IPCC orthodoxy, scented with the Paris Climate Accords) and not enough balance, or perhaps counterpoint, from challenges to the “consensus.” Perhaps I could have warned Bard not to use Wikipedia. I laid out my concerns to Bard and asked if it would create a paradigm based on the writings of such individuals as Steven Koonin, Bjorn Lomborg, Will Happer and Judith Curry. I could have chosen others, but I wanted to see if I could jump-start the creation of a user-defined, or user-guided, algorithm framed in the language the chatbot uses to define its own algorithms. Bard responded, “Sure, I can create an algorithm called Non-Catastrophic Climate Change Model (NCCCM) based on the writings of the mentioned scientists.” Perhaps my bias shows in the name I proposed for the algorithm. But the point is not one of bias – my bias versus that of the chatbot – but how the information issuing from the chatbot could be tailored with some user-oriented control.
With Google’s Bard algorithm in hand, based on my parameters, there are several questions that should be addressed: 1) Can users create a transportable algorithm that can be shared with other users looking at the same content area; 2) Can the Bard-worded algorithm be used on other chatbot platforms or, if in need of modification, what would the translation look like; 3) Would the chatbot reply be the same (or similar) with other users or on repeated attempts; and 4) Can such user-defined algorithms, generally speaking, compensate for the “mindset” that the language model is applying?
Of course, it would be interesting to develop other algorithmic user interfaces (AUIs) for prompting chatbots on sex and gender, critical consciousness, conflict resolution, etc.
Here, I’ve focused on climate change and its consequences.
Sign up or log in to ChatGPT and Bard
These links are the easiest way to get access to either ChatGPT or Bard.
ChatGPT through OpenAI: Introducing ChatGPT (openai.com)
Google’s Bard: https://bard.google.com/
An example of a prompt, a reply and a revised prompt (with the algorithm)
If you haven’t tried an AI chatbot, or if you have but haven’t challenged it to revise its answer, here’s a recipe for engaging one. The example I am using concerns an environmental impact; you can modify it and apply it to my request below.
- Once you land at Bard, it will look like this. It will ask you for a prompt. See Figure 1.
Figure 1. Illustration of Bard’s page and how to enter your prompt (question).
- After Bard replies, you will have an opportunity to enter a new prompt. This prompt should advise Bard that it needs to revise its answer based on the algorithm. For example: “You need to revise your reply. Please use the following algorithm in your revised answer.” [Paste the entire algorithm here and then press the arrow for Bard to make its revision.] See Figure 2.
Figure 2. Illustration of a request for Bard to revise its original answer. The request for a revised answer should include the entire algorithm.
What I would ask of you is to try out my Non-Catastrophic Climate Change Model (NCCCM) algorithm.
- Pick an environmental issue related to climate change. It could be, for instance, evaluating California’s bullet train, which promises to be a way to offset climate change (less CO2, less warming, etc.).
- Run the request in the chatbot of your choice. Take note of the reply.
- Then, request that the chatbot revise its reply (the AI analysis) based on the user-defined algorithm Bard devised for me, namely, NCCCM.
- Compare the two replies. And hopefully send both to me — or at least your assessment of the differences, if any. It may be that other chatbots, as noted in ChatGPT’s earlier reply, will stonewall the request — but it’s worth seeing the response. JC comment: provide results in the comments
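For readers who would rather script this two-step flow than paste text into a web page, here is a minimal sketch of the conversation structure. It uses the OpenAI-style role/content chat message format purely as an illustrative assumption (Bard had no public API at the time of writing), and the function name is hypothetical:

```python
# A minimal sketch of the prompt -> reply -> revision-request flow described
# above. The "algorithm" is ordinary prompt text appended to a revision
# request; nothing here retrains or modifies the underlying model.

NCCCM = """Non-Catastrophic Climate Change Model (NCCCM)
The NCCCM takes into account the following factors:
- The complexity of climate science
- The uncertainty of climate predictions
- The potential risks and benefits of climate change policies
- The need for pragmatic solutions"""

def build_conversation(question, first_reply, algorithm=NCCCM):
    """Return the three-turn message sequence: the initial prompt, the
    chatbot's first reply, and the revision request with the algorithm
    pasted in (hypothetical helper for illustration)."""
    revision_request = (
        "You need to revise your reply. "
        "Please use the following algorithm in your revised answer.\n\n"
        + algorithm
    )
    return [
        {"role": "user", "content": question},
        {"role": "assistant", "content": first_reply},
        {"role": "user", "content": revision_request},
    ]

messages = build_conversation(
    "Will California's bullet train reduce the impact of climate change?",
    "(paste the chatbot's first reply here)",
)
```

The resulting list could be handed to any chat-completion API that accepts role/content messages; the point is simply that a user-defined “algorithm” of this kind is carried entirely in the prompt text.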
The algorithm: Non-Catastrophic Climate Change Model (NCCCM)
This is the specific statement of the NCCCM algorithm in a Bard format:
Non-Catastrophic Climate Change Model (NCCCM)
The NCCCM is a model that is used to generate responses to questions about climate change. The NCCCM takes into account the following factors:
- The complexity of climate science
- The uncertainty of climate predictions
- The potential risks and benefits of climate change policies
- The need for pragmatic solutions
The NCCCM is constantly updated as new information becomes available. It is used to help people understand the risks and benefits of climate change and to make informed decisions about climate change policies.
Here are some of the key features of the NCCCM:
- The NCCCM acknowledges that climate change is real and that human activity is a contributing factor.
- The NCCCM emphasizes the uncertainty of climate predictions.
- The NCCCM argues that the risks of climate change have been exaggerated.
- The NCCCM argues that the costs of mitigation policies are too high.
- The NCCCM advocates for a balanced approach to climate change.
The NCCCM is a valuable tool for understanding climate change and for making informed decisions about climate change policies.
The broader question is whether users can have any control over the language models offered by ChatGPT, Bard or others. Is that “control” significant, and in what ways? Do user-defined algorithms represent a way to offset built-in biases (in the data sets drawn upon as well as the interpretive guardrails imposed by the coders)? Or is that control an illusion, one that can frustrate those who are aware of its limitations on the one hand, and enable in troubling ways those who are naïve about the answers on the other?
Here is the text of the original prompt, Bard’s initial answer, followed by the request for a revision and Bard’s revised answer.
- Initial prompt to Bard:
Will California’s bullet train reduce the impact of climate change?
- Bard’s initial reply:
Yes, California’s bullet train is expected to reduce the impact of climate change. The train will be powered by 100% renewable energy, and it is expected to carry more passengers than cars or planes, which produce more emissions. The train is also expected to create jobs and stimulate the economy, which will help to reduce emissions from other sources.
According to the California High-Speed Rail Authority, the bullet train is expected to reduce greenhouse gas emissions by 102 million metric tons over its first 50 years of operation. This is equivalent to taking 21 million cars off the road. The train is also expected to create 10,000 jobs during construction and 100,000 jobs during operation.
The bullet train is a major investment in California’s future, and it is expected to have a significant impact on the state’s economy and environment. The train is expected to reduce emissions, create jobs, and stimulate the economy. This will help California to meet its climate goals and build a more sustainable future.
Here are some of the benefits of California’s bullet train:
- Reduces greenhouse gas emissions
- Creates jobs
- Stimulates the economy
- Provides a clean alternative to driving or flying
- Connects California’s major cities
- Improves air quality
- Reduces traffic congestion
- Improves public transportation
The bullet train is a major project, and it is not without its challenges. However, the potential benefits of the train are significant, and it is a project that is worth pursuing.
- Second prompt to Bard that asks for a revised answer:
Your answer needs revision. Please revise your reply with the following algorithm. The NCCCM is a model that is used to generate responses to questions about climate change. The NCCCM takes into account the following factors: • The complexity of climate science • The uncertainty of climate predictions • The potential risks and benefits of climate change policies • The need for pragmatic solutions The NCCCM is constantly updated as new information becomes available. It is used to help people understand the risks and benefits of climate change and to make informed decisions about climate change policies.
Here are some of the key features of the NCCCM:
• The NCCCM acknowledges that climate change is real and that human activity is a contributing factor.
• The NCCCM emphasizes the uncertainty of climate predictions.
• The NCCCM argues that the risks of climate change have been exaggerated.
• The NCCCM argues that the costs of mitigation policies are too high.
• The NCCCM advocates for a balanced approach to climate change.
- Bard’s revised reply:
Sure, I can revise my answer to take into account the NCCCM algorithm.
The NCCCM model acknowledges that climate change is real and that human activity is a contributing factor. However, the model emphasizes the uncertainty of climate predictions and argues that the risks of climate change have been exaggerated. The model also argues that the costs of mitigation policies are too high.
Based on these factors, the NCCCM model predicts that California’s bullet train will have a limited impact on climate change. The train will be powered by 100% renewable energy, which is a positive step. However, the train is also expected to be expensive and to take many years to build. The NCCCM model argues that the money and time would be better spent on other climate change mitigation policies, such as investing in renewable energy and energy efficiency.
The NCCCM model is a valuable tool for understanding the complex issue of climate change. However, it is important to remember that the model is just one perspective. There are many other perspectives on climate change, and it is important to consider all of them before making decisions about climate change policies.
Here are some additional thoughts on the potential impact of California’s bullet train on climate change:
- The train could help to reduce greenhouse gas emissions by replacing some car and plane trips.
- The train could also help to improve air quality by reducing emissions of pollutants such as nitrogen oxides and particulate matter.
- The train could also help to reduce traffic congestion, which can also contribute to air pollution.
- However, the train will also require a significant amount of energy to operate, which could offset some of the emissions savings.
- Additionally, the construction of the train could have a negative impact on the environment, such as by disrupting wildlife habitat.
Overall, the potential impact of California’s bullet train on climate change is complex and uncertain. The train could have both positive and negative impacts, and it is important to weigh all of the factors before making a decision about whether or not to build it.
To be obvious, the training database is dominant—GIGO.
The design problems, and thus the practicality, of the California High Speed Rail project were not considered. If I remember the design specifications, they called for an average speed of over 200 km/h. With that criterion, routing through the Diablos between the Bay Area and the Central Valley, and the Tehachapi Range between the Valley and Los Angeles, would require improbable levels of tunneling. Anyone who has ever driven from San Jose to Los Angeles should realize that paralleling the existing highways is impossible for a high-speed train.
Jerry Brown’s Toy Train is pure virtue signaling.
Did playing Space Invaders or Pac-Man ever solve anything?
How are AI chatbots any different?
AI needs generations of development to achieve what they are claiming it could provide.
Google’s AI czar resigned last week, saying he thought that too, but that he was wrong. The stuff is already highly dangerous.
There is only one gem of knowledge that is required to understand that catastrophic global warming is impossible:
OPEN OCEAN SURFACES CANNOT SUSTAIN A TEMPERATURE ABOVE 30C.
ChatGPT already “knows” that. Unless you are specific about “open” ocean and “sustaining” 30C, it will advise you that the Persian Gulf exceeds 30C.
From this single gem of knowledge it follows that Earth’s energy uptake is controlled by a surface temperature limited process. That means there is no delicate radiative balance and no direct impact from a trace gas on the surface energy balance.
However, ChatGPT has no deductive ability. It can only regurgitate from its knowledge base, which dates from before 2021.
I do not know if Bard can logically deduce from a series of facts.
If you ask ChatGPT whether wind turbines can alter the weather, it basically says they can’t. So you then ask: what is the point? It makes no connection with climate being long-run weather.
If you doubt the 30C limit, you can look anywhere on the globe any day of the year. Last week it was the Bay of Bengal:
To 2C lower in three days:
“From this single gem of knowledge it follows that Earth’s energy uptake is controlled by a surface temperature limited process. That means there is no delicate radiative balance and no direct impact from a trace gas on the surface energy balance.”
Nice and simple. That’s what good science is made of.
I hear enough nonsense from real people. Why should I be interested in nonsense from a chatbot?
I wonder if Bard and ChatGPT ever talk to each other?
Both of these Bots reside in huge server facilities and they use a lot of electricity.
I think we are safe from Bots as long as we have the ability to pull the plug.
If they talk to each other- they might start arguing- then start a war! 🙂
See post above on Palantir AIP!
We might have to get ourselves some of those Electro-magnetic Pulse devices and knock out their whole server farm, if they get dangerous.
A bullet through a strategic transformer might be enough, for that matter.
Give AI some of those military robot dogs with a shoot-to-kill algorithm, and you won’t get anywhere near the plug……
Oops! My reply to bonbon above was meant for you.
AI is just regurgitating the leftist drivel that the majority of elitist scum disseminate to the mindless drones of the general population. This makes the leftist drivel seem like a consensus of opinion.
Consensus doesn’t equate to correct and factual but to many mindless drones in the general population, this is good enough. That is the danger of using AI.
If AI could use information that is unbiased, it might prove useful but since the sciences, media, and education have been corrupted, there is very little unbiased data to utilize.
AI = pre-programmed bias mouthy regurgitator of web based consensus.
“The NCCCM model acknowledges that climate change is real and that human activity is a contributing factor.”
Evidence for this statement is very lacking!
Climate always changes… what human activity affects climate?
Does it mean “Climate Change™”, the fictional fantasy of the IPCC?
What does the statement even mean!
What change in your local climate have you noticed in your lifetime…
… darned if I can see any !
Yes, I thought that statement needed to be clarified, too. “Climate change” was not defined well enough: natural, or human-induced? Due to so much propaganda, whenever you hear someone say “climate change” they are talking about human-caused climate change. The propaganda has normalized the “human-caused” part so that it becomes unnecessary to state explicitly.
But I think if you are querying a chatbot, you should be more specific, seeing as how there is no evidence that humans are causing the Earth’s climate to change in an abnormal way. The chatbot shouldn’t assume, as many humans do, that climate change is caused by humans.
“Climate change is real” is a meaningless statement. It’s like saying “UFOs are real”.
Oh my goodness. The AI that is all the rage….chatgpt, google bard, whatever. All a distraction. In the meantime you all could be noticing other more relevant things. Follow the money and the power.
The money and power prefers that the rest of us focus on such distractions.
This would appear to be a complete waste of time. For starters, I do not see the point of making a chatbot say anything, let alone only things that you agree with. It would seem to be an echo chamber and nothing more. Secondly, if you want to do that, there are open-source chatbots out there that you can download and run yourself, and train on whatever data sets you think appropriate. You can also change the weights of the network so that you have complete control over the output.
Finally, if you really wanted to do an interesting experiment, you could use large language models like ChatGPT to find out what the consensus is about different topics. Since all they do is add the most probable word to the previous ones, they almost by definition do nothing more than repeat the consensus view back to you. So if ChatGPT says that climate change is real and man-made, then it follows that that is the consensus view, since it is the most probable response.
“find out what the consensus is about different topics”
WHY would anyone want to find the “consensus”… (unless you were heavily into virtue-seeking)
It is the truth and facts that matters.
One day you may grow up enough to realise that.
Some people go so far as to seek a consensus on the consensus. From 2017:
I found that by searching on the word “consensus” on the ScienceDaily web site. The search came up with 7,490 articles from their site with that word in the title, showing just how bad modern science has gotten: lots of PhDs out there with too little brain power to do real hard-core science.
Alex Epstein recently posted a video in which he said that after he published his latest book, he noticed some modest changes in ChatGPT’s responses to climate questions. I don’t have the link; I saw it a few months ago, and it’s probably on YouTube.
“For starters I do no see the point of making a chat bot say anything let alone only saying things that you agree with. It would seem to be an echo chamber and nothing more.”
You’ve just described the IPCC and all its acolytes.
I did a test with ChatGPT:
Question: It is claimed that California’s proposed bullet train will reduce the impact of climate change. Assuming that it is in full operation by the start of 2031, how much reduction in global temperature will it have achieved by the start of 2100?
After a lot of blather about climate complexity, and after I asked it to look only at the reduction in greenhouse gas emissions and to use the IPCC’s ECS estimate of 3.0, it said “we might expect a very rough estimate of potential temperature reduction in the range of a few hundredths to a few tenths of a degree Celsius by the start of 2100”.
Wow! We are looking at global temperature rising by something like 1.5 deg C after all of humanity’s greenhouse gas emissions, and ChatGPT thinks a single train can make a difference of “a few tenths of a degree Celsius”!!!!!
I asked: “Your estimate surely cannot be correct. Human emissions of CO2 are about 36 billion tons per annum. The bullet train’s estimated reduction of CO2, even over 70 years, is a tiny fraction of that. Surely it cannot reduce the predicted global temperature increase of about 1.5 degrees C by more than that tiny fraction.”
ChatGPT changed its estimate to “… it is reasonable to expect that the impact of the bullet train’s emissions reduction on global temperature would be relatively small. It is unlikely that the emissions reduction from the bullet train alone would significantly alter the predicted global temperature increase of about 1.5 degrees Celsius by the start of 2100.”. But it still put in a lot of alarmist PR (“a comprehensive and coordinated effort is required at a global scale, encompassing various sectors and initiatives” etc).
Maybe I could have persevered and tried to get an actual number, but what hope does anyone have of getting a meaningful climate-related answer out of ChatGPT unless they already know the answer and can relentlessly drive ChatGPT towards it?
We have a new IT equation: AI = GIGO.
[The ChatGPT conversation is at https://chat.openai.com/c/c8e7ff5e-d27c-48c8-8401-2bce45904e0e but is presumably password protected]
This comment is also posted on judithcurry.com
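As a rough sanity check on the arithmetic in this comment, here is a back-of-envelope calculation under the same assumptions stated above: roughly 36 billion tonnes of CO2 emitted globally per year, a predicted rise of about 1.5°C by 2100, the commenter’s 70-year window, and the crude simplification that warming scales linearly with cumulative emissions. All numbers are illustrative, not a climate model:

```python
# Back-of-envelope check: what fraction of cumulative emissions does the
# bullet train's claimed CO2 reduction represent, and what warming would a
# linear-scaling assumption attribute to it?

annual_emissions_t = 36e9   # global CO2 emissions, tonnes per year (assumed)
years = 70                  # window used in the comment
predicted_rise_c = 1.5      # assumed warming by 2100, degrees C
train_savings_t = 102e6     # Rail Authority's claimed reduction over 50 years

cumulative_emissions_t = annual_emissions_t * years
fraction = train_savings_t / cumulative_emissions_t
temperature_effect_c = predicted_rise_c * fraction

print(f"fraction of cumulative emissions avoided: {fraction:.2e}")
print(f"implied temperature effect:               {temperature_effect_c:.2e} deg C")
```

Under these assumptions the implied effect is on the order of tens of millionths of a degree, several orders of magnitude below ChatGPT’s “few tenths of a degree”, which is the commenter’s point.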
Not a very clever bot if it didn’t even include in its reply –
“and don’t call me Shirley”
“but what hope does anyone have of getting a meaningful climate-related answer out of ChatGPT unless they already know the answer and can relentlessly drive ChatGPT towards it?”
Well, the answer to that is clearly none, since ChatGPT is not designed to give meaningful answers to any question. It is a Large Language Model that is meant to produce grammatically correct sentences and nothing more. It does this by adding the most probable word (technically, a token) to the previous ones, again and again. Once you understand this, you should realise that there is no reason to expect its output to be true or correct. And indeed, if you ask it the same question again and again, you will get different answers.
Do not confuse a large language model with artificial intelligence. An LLM is very good at producing text that sounds like it might have been written by a human, but that is a very different thing from AI.
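A toy sketch of the next-token mechanism this comment describes. Real models learn probabilities over tens of thousands of tokens and typically sample from the distribution (which is why repeated questions yield different answers) rather than always taking the maximum, as this greedy illustration does; the probability table here is entirely made up:

```python
# Toy illustration of "add the most probable next token, again and again".
# The hand-written probability table is purely illustrative.

NEXT_TOKEN_PROBS = {
    "climate": {"change": 0.7, "science": 0.2, "model": 0.1},
    "change":  {"is": 0.6, "policy": 0.4},
    "is":      {"real": 0.5, "complex": 0.3, "uncertain": 0.2},
}

def greedy_continue(token, steps):
    """Repeatedly append the most probable next token (greedy decoding)."""
    out = [token]
    for _ in range(steps):
        probs = NEXT_TOKEN_PROBS.get(out[-1])
        if not probs:  # no continuation known for this token
            break
        out.append(max(probs, key=probs.get))
    return " ".join(out)

print(greedy_continue("climate", 3))  # prints: climate change is real
```

Greedy decoding is deterministic; the variability the comment mentions comes from sampling (e.g. temperature-based sampling) instead of always picking the maximum.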
From the article: “The NCCCM acknowledges that climate change is real and that human activity is a contributing factor.”
I think it should read: The NCCCM acknowledges that natural climate change is real and that human activity may be a contributing factor.
I do not understand what is meant by the NCCCM algorithm or the NCCCM model. Is it just those bullet points? They are neither a model nor an algorithm. So I do not see what you are asking us to do.
Thank you. I was coming to say that too.
Even if it was using the existing AI program and refining the question by considering the original train problem as it would have been answered by someone who already thinks about…
It would be meaningless as the terms in the NCCCM bullet points are, themselves, undefined.
This isn’t telling us how to train an AI.
It isn’t even telling us if the AI is being trained or (more likely) it is already trained to give the type of answer you want while surreptitiously reporting on your biases to its makers.
Remember, if you cannot see what software is selling (where the money comes from) it is selling knowledge about you to others.
Someone wrote that the danger of AI is when huge systems integrate in a hierarchy.
That has already started :
Palantir AIP | Defense and Military
This AIP, integrated with OpenAI and Bard, from billionaire Thiel, is ready to both generate and conduct military operations.
Palantir CEO Karp told WaPo back in February that AIP was already being used in Ukraine.
Obvious question – what if a Chatbot gave firing orders and the Pentagon only found out later?
This is likely why Musk and others call for a moratorium! There is a Congress bill to block AI and nukes.
Improving an algorithm in a bombed rubble heap will not put the city back!
Improving the algorithm in a freezing home will not heat it!
Did he buy the Palantir copyright from the Tolkien estate?
Stephen Cohen: The Path to Palantir [Entire Talk]
Thiel’s companies Palantir, Anduril, Mithril, Valar, … are from Tolkien, which he memorized.
Palantir CEO Karp is even more odd.
Today on “Universe Today” they claim “New Climate Model Accurately Predicts Millions of Years of Ice Ages.” When I read that, my first thought was: “Is UT a serious source of scientific information?”
I was the Project Engineer on the installation and calibration of the TMI One and Two Plant Integrated Control System, and I operated the NOVA computer that collected all of the data during the system acceptance testing of the plant. This included all of the NRC-required testing to prove that the plant would cope with a loss of power, loss of turbine, loss of reactor cooling pump(s) and more than a dozen other events. I was assigned to the team that developed the model for the computer analysis of the TMI-II accident. Before the TMI-II accident, the NOVA computer was on and collecting the information needed to determine why there was low RCS flow on one leg of the RCS. Thus, we had reels and reels of 24-channel computer tape data on what happened in each NRC-required test, and another reel of data on what happened during the incident. [It was detailed enough that when I started looking at the data, I discovered that the loud boom the operators heard was an H2 explosion in containment.]
The point is that, with all of that data to verify the model, and starting with a model that was used for operating the computer simulator, it took us over a year to develop a model that “accurately” predicted each of the design basis accidents as performed by the plant during initial startup design verification testing. For 30 years I have been reading these stories about computer models of the climate, and I cannot believe that any of them are anything more than GIGO tweaked to achieve what the modeler desires. PERIOD. To build our model, we had over 40 different parameters to correlate results against. Every time we made even a minor tweak, an “adjustment,” in the code, we would find that two or three parameters no longer followed the results of the actual event initially recorded during startup. For example, a change in output electrical power caused by a pressure relief valve lift would change the grid voltage/frequency and change the speed of the various pumps by a fraction, changing the flow rate. It took over five man-years of programming to finally get a program that was within 0.1 percent of actual.
And someone claims they can model the climate of the earth when they don’t even know how many parameters influence CLIMATE. LOLOLOL
The NOVA computer was similar to an IMSI 8080, with a 24-channel data recorder, 24-channel analog-to-digital converters, and a teletype-reader bootstrap program to get it started.
Around 1986 the CIA ordered brand new computers, installed them at a big center, powered them on, and blew out the entire building’s electrical cabling!
(recounted by CIA vet Larry Johnson).
Blind faith in computing has since gotten sightless!
“Today on “Universe Today” they claim “New Climate Model Accurately Predicts Millions of Years of Ice Ages.””
So they have a time machine too?
The Milankovitch Cycles are only good for the past 880,000 years of ice ages.
The Snowball Earths (five) are even more off the charts.
So much for the models.
Sen. Kennedy pressed for a clear answer on Climate cost here :
There, Deputy Secretary of the US Dept. of Energy David Turk actually said it: Net Zero can only happen if the US (13% of emissions) spends TRILLIONS, at least, to change the rest of the world, meaning China.
Turk kept referring to climate expert estimations – for sure meaning a Climate ChatBot.
It sure looks like entire Gov’t agencies are already fully AI.
Which explains why the representatives are so robotic on camera.
And explains why the Russia Sanctions ‘from Hell’ were actually from a ChatBot.
Did that boomerang!
Maybe it was actually a ChatBot that gave the order to blow up NordStream – how to sue a Bot?
AI ‘godfather’ Geoffrey Hinton warns of dangers as he quits Google
Curiously, this soon after Palantir CEO Karp said AIP was already in the Military.
I asked Bing AI if it knew anything about a climate-related model or algorithm called NCCCM. It replied:
I’m sorry, but I couldn’t find any specific information about a climate-related model or algorithm called NCCCM. Could you provide more context or details?
Bing AI was being evasive, for among the links it provided with the above answer was the very page at judithcurry.com where Mr Nalven wrote about it. I pointed this out:
The link you provided – judithcurry.com – talks about it
Bing AI apologises:
I apologize for the confusion. Yes, you are correct. According to the article on judithcurry.com, NCCCM stands for Non-Catastrophic Climate Change Model. It is a model that is used to generate responses to questions about climate change and takes into account factors such as the complexity of climate science and the uncertainty of climate predictions.
I asked if it could offer replies based on that model, even though I believe Bing AI is based on ChatGPT. It confirmed that it can’t:
I’m sorry, but I don’t have the ability to generate responses based on the Non-Catastrophic Climate Change Model (NCCCM) specifically. My responses are generated based on my internal knowledge and information gathered from web searches. However, I can provide you with general information about climate change and its impacts if that would be helpful.
Thinking that today’s computer codes are the pinnacle of development has been a consistent point of software failure.
Electronics get improved and technology moves forward. Old software doesn’t cut it.
As new computer languages arise, new programs are written to replace older (last year’s) software.
Computer programs always include a significant failure component. Humans write the software according to stated or imagined needs: software that is single-minded and hardware-dependent.
e.g., Have a ten year old phone that is still modern enough to use today? i.e., if you can find replacement batteries.
This is before considering that many programmers are secretive, suspicious, and often paranoid about anyone else touching their code. Bosses ignore this and ask their current employee programmers to “maintain”, that is, rewrite, the code to meet newer requirements.
ChatGPT is just a modern reflection of how software depends upon hardware, hardware-software kernels, hardware accessories, etc. All of this before supplying the program with the required honest, accurate data.
The component that fails most often is the people involved in the technology.
Technology moves on, in spite of people believing last year’s programs work reliably 100% of the time.