The Terminator. Fair Use, Low Resolution Image to Identify the Subject.

Experts Urge a Six-Month Moratorium on AI Research

Essay by Eric Worrall

AI is terrifying – but the one thing worse than developing a dangerous superhuman AI is for your enemies to develop it first.

Tech experts call for 6-month pause on AI development

As artificial intelligence makes rapid advances, a group of experts has called for a pause. They have warned of the negative effects runaway development could have on society and humanity.

Several leaders in the field of cutting-edge technology have signed a letter that was published on Wednesday, calling for artificial intelligence developers to pause their work for six months.

The letter warns of potential risks to society and humanity as tech giants such as Google and Microsoft race to build AI programs that can learn independently.

The warning comes after the release earlier this month of GPT-4 (Generative Pre-trained Transformer), an AI program developed by OpenAI with backing from Microsoft.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.


The open letter is available here.

I have high hopes for the AI scare. As I’ve predicted several times, I believe fear of malevolent AI will be the next great public fear to replace the climate scare. If I keep predicting it, I’ll be right sooner or later, you’ll see.

Is AI actually a great threat? From what I’ve seen it is more of a great productivity boost.

A fellow software developer uses ChatGPT all the time to do simple software development tasks. For example, we had to migrate a configuration script to a different system which required a similar setup but used different commands to perform the same setup functions.

So we asked ChatGPT to do the translation, something we could have done in 15 minutes.

The outcome was perfect. ChatGPT not only did the translation, it correctly identified an additional configuration step we hadn’t noticed. Saved us at least 10 minutes’ work.
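The mechanical part of a job like this can be sketched as a simple command-mapping pass. The command names below are invented for illustration, not the actual systems involved:

```python
# Hypothetical mapping from old-system commands to their new-system
# equivalents; the names are illustrative only.
COMMAND_MAP = {
    "set-option": "config set",
    "enable-service": "service enable",
    "open-port": "firewall allow",
}

def translate_line(line: str) -> str:
    """Rewrite one line of the old configuration script."""
    for old, new in COMMAND_MAP.items():
        if line.startswith(old):
            return new + line[len(old):]
    return line  # pass unknown lines through unchanged

old_script = ["set-option timeout 30", "enable-service sshd"]
new_script = [translate_line(l) for l in old_script]
print(new_script)  # ['config set timeout 30', 'service enable sshd']
```

Of course, a lookup table like this would only have done the rote substitution; spotting the missing extra configuration step is exactly where the model went beyond it.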

But I don’t fear ChatGPT will replace me anytime soon – most of my day was spent on problems ChatGPT can’t answer.

What about other professions? A journalist using ChatGPT could use ChatGPT to suggest text, to provide inspiration – an enormous boost to productivity. But you would be taking a serious gamble to publish ChatGPT screed, without allowing an expert human editor to at least review the product. And given ChatGPT generates its work from the work of others, there would be a substantial risk of accidental plagiarism.

Will AI be a genuine threat in the future? A decade or two from now, who knows. But fear of the unknown is what you need to build a great public fear campaign. And there is always the rather disturbing but, I believe, ultimately necessary option of merging with our creation – augmenting our human brains with AI implants – if we start to fall behind.

March 30, 2023 6:20 pm

ChatGPT/Google Bard are only the currently public versions of AI. The technology is getting faster, more capable, and smarter every year. These systems are ‘evolving’ exponentially.

Imagine such systems when they are 1,000 times more powerful, which will happen in only a decade or so. This technology is massively disruptive.
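The arithmetic behind that figure is easy to check: a 1,000-fold improvement in ten years works out to just under a doubling every year, and ten clean doublings give 1024x. A quick sketch:

```python
# Growth needed to reach a 1,000x improvement in 10 years:
# the annual factor r satisfies r**10 = 1000, so r = 1000**(1/10).
annual_factor = 1000 ** (1 / 10)
print(f"required annual growth: {annual_factor:.3f}x")  # just under 2x/year

# Conversely, a steady doubling every year compounds to 2**10 = 1024x
# in a decade, slightly ahead of the 1,000x figure.
print(f"ten doublings: {2 ** 10}x")
```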

Grab your socks and hang on … it’s going to be a very bumpy ride.

Frank from NoVA
Reply to  JamesB_684
March 30, 2023 7:02 pm

‘Imagine such systems when they are 1,000 times more powerful, which will happen in only a decade or so.’

‘1,000 times’ is a number I’ve seen bandied about for how much the alarmists need to improve the spatial resolution of their models in order to do clouds right, i.e., before they stop producing garbage. Are you sure this is only a decade away?

Bryan A
Reply to  Frank from NoVA
March 30, 2023 8:46 pm

AI is OK, just don’t give it Internet access or control of your nukes.

Reply to  Frank from NoVA
March 31, 2023 7:29 am

Look at how quickly semiconductor density has increased over the past 40 years. New hardware designs are improving just as quickly (e.g., traditional CPUs replaced with GPUs), which magnifies the power of increases in chip density. New software designs are also getting better, and accelerating, with the use of existing AI to improve all of the above.

Predictions about the future are always risky, but this seems like a conservative estimate.

Reply to  Eric Worrall
March 31, 2023 8:18 am

So is AI a giant search engine with an innovative interface? What is it that we are supposed to be afraid of?

Last edited 2 months ago by DWM
March 30, 2023 6:20 pm

I don’t know beans about AI, but this letter from the experts reminds me of the efforts to curtail or even stop nuclear development. That didn’t turn out the way they hoped, and I don’t see why this effort would be any different. The fact that the information giants are leading the AI efforts is what bothers me. I don’t trust them, and I think they hold far too much power now. My suggestion would be to define the dangers, regulate all involved in the AI efforts and severely punish anyone who ignores the regulations. The other thing I would do is develop a plan to go after rogue entities who have no intention of obeying the regulations no matter what they are.

Reply to  Bob
March 30, 2023 8:39 pm

There is something big but very wrong with current thinking.
We have had centuries of progress from invention, innovation, adoption, improvement. The good ideas have grown to be part of everyday life, while the free market has done away with the worst (like big windmills and electric cars).
In the last decade, this wonderful, beneficial pattern has shifted.
The go-getters of yesterday are getting rare and marginalised. Their free thought is being replaced by lead in the saddle, like “Should I keep going, or will government/bureaucracy/big business disapprove?” And thoughts like this recent example: “We cannot disagree with the thrust of research on viral gain of function because we risk punishment.” Or: “I will not develop my new idea because I think it will be stopped by regulators.”
This shift is devastating. It is bad. It carries the inherent assumption that “some authority” is better than the individual. The whole tax system assumes that tax managers can spend money more wisely than individuals who earned it. There is a chill over free enterprise.
What is wrong with anyone interested in so-called Artificial Intelligence having a play with it or making a living or a business with it?
Why do we fail to be insulted by the thought of an expert group calling for a halt? Who appointed them to speak for us? Who measured that they were expert, and how?
At the source of this problem is a class of people who are paid to tell others what they can or cannot do. The advertising sector is an example, with its latest number one of their top 40 being ‘Gamble Responsibly”. Piss off out of my life, you useless bloodsuckers. I dislike your intrusions – the whole climate change scare would fail without huge sums for paid advertising.
Let AI alone. Remember that clever people invented it and clever people can destroy it. Let free enterprise sort out the good from the bad. And, if possible, keep the busybody regulatory cretins away from it. They provide nothing of merit. Geoff S

Rod Evans
Reply to  sherro01
March 31, 2023 1:36 am

“Thank you HAL, now can you open the hatch please”….?

Reply to  sherro01
March 31, 2023 7:29 am

Post says:”…clever people invented it and clever people can destroy it.”

Clever people invented Covid that killed lots of people. Then clever people came up with a vaccine that killed/damaged people.

I have lost faith in “clever people” because they aren’t so clever.

D. Anderson
Reply to  sherro01
March 31, 2023 9:25 am

This is something new under the son. It is unprecedented, so looking at history for precedents is worthless.

D. Anderson
Reply to  D. Anderson
March 31, 2023 9:26 am


Reply to  D. Anderson
March 31, 2023 10:36 am

Thought maybe you were Christian and forgot the capital S in son.

Ron Long
Reply to  Bob
March 31, 2023 3:38 am

Bob, the danger of AI is very simple: learning examples must be provided to guide AI through a digital “thought process”. Whoever inputs the learning examples controls the output. No way the Constitution gets input; however, Rules for Radicals is in for sure.

Timo- Not That One
Reply to  Ron Long
March 31, 2023 6:53 am

Tony Heller has had a number of posts where he has asked ChatGPT about glowball warming. The stupid thing pulls up all the BS online and sounds exactly like a Warmunist, and when challenged with facts it defends its stupid position like any “good” leftist.
He has coined the name “Artificial Stupidity”.
Looks to me like the very first thing public AI has been programmed to do is push the same old leftist propaganda. Not an auspicious beginning.

March 30, 2023 6:21 pm

A journalist using ChatGPT could use ChatGPT to suggest text, to provide inspiration

An editor/publisher might use ChatGPT to eliminate reporters, a great improvement to the bottom line.
In fact, considering the accuracy of reporting and commitment to objectivity I wonder if ‘The Conversation’ hasn’t upped its collective intelligence by doing so already.

Last edited 2 months ago by dk_
Tom Halla
March 30, 2023 6:31 pm

Eugene Volokh, in The Volokh Conspiracy, has an ongoing series on large-scale libels. ChatGPT, when prompted, will produce claims of sources to back up libelous claims, with the minor little problem being that they do not exist.
Volokh, a law school professor, was prompting the program to make claims about his own personal history. Some of the claimed sources were not only false, but imaginary.
It seems rather buggy, but about as reliable as some legacy media outlets.

Reply to  Tom Halla
March 31, 2023 7:36 am

Some of the claimed sources were not only false, but imaginary.

How long before it starts creating those sources?

Reply to  Tony_G
March 31, 2023 2:20 pm

‘not only false, but imaginary’

not trying to be a nit, but the difference being?

Reply to  JBP
April 1, 2023 7:51 am

I would guess “false” = an actual source with false information and “imaginary” = a made up source.

Reply to  Tom Halla
March 31, 2023 11:26 am

It also gave Tony Heller false degrees, etc.

March 30, 2023 6:36 pm

But you would be taking a serious gamble to publish ChatGPT screed, without allowing an expert human editor to at least review the product.

There has been no such problem with gatekeeping and publishing absurd climate hysteria, why would this be any different ?

Last edited 2 months ago by Streetcred
March 30, 2023 6:49 pm

I’ve experimented with both ChatGPT and Bing. When I simply asked them who I am, they provided Wikipedia-style responses, and both had serious errors. (For example, I did not lose my Scientific American assignment due to skepticism regarding some aspects of global warming.) When I corrected some of the errors, both thanked me and corrected their responses. The problem now is how does one know who’s doing the correcting? Also, one response probably included some plagiarism. At least Bing uses footnotes for its sources.

If I were still teaching a class in which original essays are required, I would assign a topic with unusual words. I would then cross-check the students’ essays for those words. I suspect this would quickly disclose at least some of those who relied on AI. I wouldn’t be surprised to see several or more essays beginning with identical sentences.
Forrest M. Mims III
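The marker-word check described above can be sketched in a few lines. The marker words and essay snippets here are invented for illustration:

```python
# Seed the assigned topic with unusual words, then count how many of them
# actually appear in each submitted essay.
MARKER_WORDS = {"petrichor", "susurrus", "borborygmus"}  # hypothetical picks

def marker_hits(essay: str) -> int:
    """Number of marker words present in the essay text."""
    words = set(essay.lower().split())
    return len(MARKER_WORDS & words)

essays = {
    "student_a": "the petrichor after rain and the susurrus of leaves",
    "student_b": "climate change is an important topic facing the world",
}
for name, text in essays.items():
    print(name, marker_hits(text))  # student_a 2, student_b 0
```

An essay that ignores every marker word, or several essays sharing identical openings, would be the flags worth a closer look.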

Reply to  Forrest Mims
March 31, 2023 7:37 am

The problem now is how does one know who’s doing the correcting?

How do you know that the “correction” sticks anywhere outside your “conversation”?

Leo Smith
March 30, 2023 6:50 pm

AI is terrifying. Yeah, and a nuclear power station accident will destroy the earth, and CO2 is a dangerous greenhouse gas.
And I am Bette Davis.

Michael S. Kelly
Reply to  Leo Smith
March 31, 2023 6:13 pm

Loved you in All About Eve! And those eyes…wowza!

michael hart
March 30, 2023 6:54 pm

“Tech experts call for 6-month pause on AI development…”

So they reckon they are six months more intelligent than the rest of us?

michael hart
Reply to  michael hart
March 30, 2023 7:01 pm

Addendum: I opened a chatGPT page several days ago with a question mark. ?
I have not yet received a reply.

It doesnot add up
Reply to  michael hart
March 30, 2023 7:13 pm


as any fule kno

Timo- Not That One
Reply to  It doesnot add up
March 31, 2023 7:18 am

But it will take thousands of years to compute that answer.

March 30, 2023 7:49 pm

When OpenAI can take the logical step that climate models predicting current ocean temperatures sustained above 30°C are wrong, I will acknowledge they are more than directed search engines with an ability to construct sentences.

I do take some delight in getting it to apologise for errors like:

I apologize for any confusion in my previous responses. Upon further research, I have not been able to find any examples of ocean regions experiencing sustained ocean surface temperatures above 30°C for an entire year, even in regions without strong monsoons.

In that regard, it is far more entertaining than the Nick Stokes bot that never admits fault.

Pressing it toward the logical conclusion – that climate models predicting more than 30°C are wrong – always results in an error.

The challenge is there to get ChatGPT to state that climate models are wrong.

March 30, 2023 8:11 pm

Chat GPT is still unable to do anything beyond regurgitating but can still be useful to assist in putting a story together. This is what I got as a summary on offshore wind farms; albeit with a loaded question:

In conclusion, offshore wind farms can provide a source of clean energy that can help reduce greenhouse gas emissions and mitigate the impacts of climate change. However, they also have negative impacts on the environment and human health, including death of whales and birds, damage to marine ecosystems, visual pollution and loss of property values, human illness such as epilepsy and infrasound, altered weather, malice pollution from oil spills and sterilization of proximities of the ocean surface due to turbine failures. As such, it is important to carefully consider the potential impacts of offshore wind farms before they are constructed, and to implement measures to mitigate these impacts where possible.

No one who has an ocean view should have that disturbed by ugly industrial machines. Likewise for those with a serene rural vista.

I propose the first offshore wind turbines in Australia be built in Lake Burley Griffin. The second off Bondi and the third off Manly.

Joseph Zorzin
Reply to  RickWill
March 31, 2023 5:55 am

“No one who has an ocean view should have that disturbed by ugly industrial machines.”

The elites on Martha’s Vineyard, who wish wind and solar “farms” on the rest of us, still resist having them within sight of their mansions and yachts – so I hope they get built there.

Peter Ashwood-Smith
March 30, 2023 8:43 pm

I just saw a GPT-4 proof that the primes are infinite, in the style of Shakespear. That’s not just regurgitation.

Reply to  Peter Ashwood-Smith
March 31, 2023 4:39 am

Spelling: Shakespeare, please.
Geoff S

Reply to  Peter Ashwood-Smith
March 31, 2023 6:43 am

But can it prove the Goldbach Conjecture? Also preferably in the style of Shakespeare? 🙂

March 30, 2023 9:10 pm

I’m going to the loo. None of you’se scurvy jacks look at my cards while I’m gone, you hear?
Asking to halt AI development is a bit like that famous moratorium on chemical weapons research; we all had to stop immediately, until America finished building their biowarfare labs all over Eastern Europe and Africa…
Just asking, but did any of these terribly concerned worthies sign any letters to demand we stop cutting off kids’ genitals? Inject people with covidiocy? Did one of them bother to demand we visit our elderly, and not let them be killed with opiates?
Has any one of these signatories offered money to buy proper books for schools, or at least offered a book of matches to burn the homosexual pornography in the pre-school library?
This whole exercise is just posturing and glory seeking…

More Soylent Green!
Reply to  Eric Worrall
March 31, 2023 6:35 am

Eric, AI researchers have tried to create AI by mimicking how they believe the human brain works for at least 50 years with no success. Like String Theory, it is likely the wrong track.

But that doesn’t mean somebody won’t try by using live human test subjects. Most likely involuntary test subjects, like Chinese political prisoners.

Reply to  More Soylent Green!
March 31, 2023 8:41 am

String Theory gets a bad name just because none of the quantum particles it predicts have been discovered. It would take an enormous capital expense to build equipment capable of producing one. But if they ever find one, String Theory will become the theory of everything.

Joseph Zorzin
Reply to  cilo
March 31, 2023 5:58 am
Timo- Not That One
Reply to  Joseph Zorzin
March 31, 2023 8:06 am

Yahoo says that they did exist.
I also remember watching a video released on the first day of the Russian invasion, by a US agency explaining how the biolabs in Ukraine were for good not evil. Can’t find it now though.
They said the labs did gain-of-function experiments.
Can somebody tell me what the difference is between gain-of-function experiments and bioweapons development?
By the way, the Russian targets of airstrikes on the first day of the invasion included targets corresponding exactly to thirteen of these labs in western and central Ukraine.

Reply to  Joseph Zorzin
April 1, 2023 10:53 am

Really, Zorzin? You gonna quote Reuters at me? Are you out of your friggin’ mind? Next you’ll refer me to Baal Gates for health advice…
Believing too many mutually contradictory “facts” leads to cognitive dysfunction, and eventually psychosis. Internal self-consistency of argument is important; otherwise you end up with each new post contradicting the divine truth of previous posts, over and over, saying things that cannot be true in the world you described yesterday.

Joseph Zorzin
Reply to  cilo
April 1, 2023 11:10 am

Hey, asshole, wanna step outside?

Reply to  Joseph Zorzin
April 1, 2023 12:23 pm

No guns, no knives, no chains or tyre irons… and this time, don’t bring your mom!

Hoyt Clagwell
March 30, 2023 9:11 pm

“I’m afraid I can’t do that Dave.” -ChatGPT

March 30, 2023 10:32 pm

Too late. The genie is out of the bottle. As a wise man once said, “A can of worms, once opened, will never again contain the worms.”

March 30, 2023 11:52 pm

I played with ChatGPT the other day; I found it pretty useless. I’m a software developer and asked it to do two things: write an Ada program to calculate the factorial of a number passed in on the command line (it wrote one that prompted the user for the number, which isn’t really the same thing, and also missed out a couple of lines that were essential), and show how to implement RSA PSS-R with OpenSSL (it gave me a demo of using openssl from the command line to use RSA PSS – not the same thing – then, when I told it the answer was wrong, it repeated it, changing a command line option from ‘pss’ to ‘pssr’, which was also wrong as there is no ‘pssr’ option). From a developer’s point of view, I’d avoid it, and try to never forget that, in Linux, ‘sudo rm -fr /’ does bad things!

I asked it a number of other questions, like details about some hardware devices we use at work, stuff about bands I like, and it got the vast majority of those answers wrong too. Even after it was told the correct answers, if I re-asked a question, it would more than likely give me the wrong one again.

Basically, IMO, useless.

Reply to  jgmccabe
March 31, 2023 1:34 am

In my field (analysis of survey data) the two obvious applications are summarizing large sets of text responses and interpreting cross tabulations. The text-davinci-003 model does a remarkably good job on both. A cross tabulation of Education by Income is correctly interpreted as showing that income increases with education, and the text summaries are uncannily accurate. So already far from useless.
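For readers unfamiliar with the term, a cross tabulation is just a count of respondents in each cell of two categorical variables. A minimal sketch with invented survey records:

```python
from collections import Counter

# Toy survey records (education level, income band) -- illustrative data only.
responses = [
    ("high school", "low"), ("high school", "low"), ("high school", "mid"),
    ("bachelor", "mid"), ("bachelor", "high"),
    ("postgrad", "high"), ("postgrad", "high"),
]

# Cross tabulation: count of respondents in each (education, income) cell.
crosstab = Counter(responses)
for edu in ("high school", "bachelor", "postgrad"):
    row = {inc: crosstab[(edu, inc)] for inc in ("low", "mid", "high")}
    print(edu, row)
```

Interpreting a table like this ("income rises with education") is the step the model is being asked to do in plain English.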

Reply to  DaleC
April 1, 2023 11:19 am

Ooh, boy! Live one!
By survey data I assume you mean poll stats. Do you propose to tell me, all things considered, that education causes better income? Let me point at a correlation that does not imply causation:
Just by looking people in the eye, you can observe that people who come from a financially better-off home, do better in life. It has to do with examples and upbringing with understanding, of which poor homes have little or none. A financially stable home will be more likely to desire and afford higher education, further opening the gap between privilege and pauperism on your survey.
I am willing to bet that, comparing exact circumstances, at any one single social level, wit and practical ability beats the pants off higher education every time. Especially if you include the “third economy” participants. You know, the ones whose sales counter is on the corner of Smith and Fifth somewhere between lunch hour and 2am, look for the guy in the red hat…? Well, his boss can probably read if he moves his lips a bit, and he earns more than you.
So, I think your AI is leading you up the garden path…useless.

March 30, 2023 11:56 pm

ChatGPT is advanced automation. There is no intelligence involved. ChatGPT does not produce text of its own accord; it needs a human to feed it something and humans to review what it produces.

Besides that, having ChatGPT write newspaper articles would most probably be an improvement.

Eric Vieira
March 31, 2023 12:33 am

The western countries are already lagging China in the AI field. A moratorium will only make things worse. We just can’t afford it for security reasons. AI research is not the problem. One should make appropriate laws controlling its use, and even forbidding its use for certain applications. Forbidding things will never replace good lawmaking, but it implies that we need competent politicians who are willing to do their job for the people they represent.

Last edited 2 months ago by Eric Vieira
Javier Vinós
March 31, 2023 12:37 am

I believe many journalists will be casualties of AI.
An agency delivers the news and AI writes the articles. This is not creative writing, so no human is required.
Poof, there goes half of the journalists in the world.

It doesnot add up
Reply to  Javier Vinós
March 31, 2023 3:13 am

Not sure that AI is really ready to replace John Pilger or Nick Ut. It’s the “agency delivers the news” bit. Leave it to AI and we would soon be treated as mushrooms.

Reply to  Javier Vinós
March 31, 2023 4:59 am

And that’s a bad thing?

Rod Evans
March 31, 2023 1:32 am

“But I don’t fear ChatGPT will replace me anytime soon – most of my day was spent on problems ChatGPT can’t answer”.
And right there is an admission of why we should have concerns about AI’s unfettered evolution.
I can honestly say all of my days are spent on matters AI/ChatGPT can’t answer or indeed even begin to be interested in. That is no comfort.

Krishna Gans
March 31, 2023 2:35 am

The 3 Laws of Robotics come to my mind, thanks to Isaac Asimov…

Reply to  Krishna Gans
March 31, 2023 7:40 am

Don’t forget the “Zeroth law”

March 31, 2023 6:23 am

I used to think it was odd that in Frank Herbert’s Dune world computers had been outlawed. They’re an unalloyed good in so many ways. AI makes me think he had a point. What is AI used for other than the wholesale spying on millions of people and invading their remaining privacy? Okay, software development—but that drops the context of what the software is used for. I’m talking about things like facial recognition, license plate readers, palm scanners, crime prediction (when “crime” is defined as anything the state doesn’t like): things that are sold as convenience but are really surveillance or easily corrupted into it. Is that not AI? I’m asking you guys in good faith not as a Luddite but as a radical pro-capitalist, near-anarchist English major, because I haven’t yet seen an application that didn’t make me think, “FFS, get out of other people’s lives, you disgusting voyeurs.”

Reply to  QODTMWTD
March 31, 2023 7:07 am

What’s more odd than outlawing computers is the notion that there were no black market computers despite the law. Everything that’s outlawed results in a black market. And you can guarantee that those shifty Harkonnens had no respect for anybody else’s law to begin with. (Maybe there were black market computers in Dune but I don’t remember them…. the Mentats were basically the human replacement as I recall)

Reply to  stevekj
March 31, 2023 8:08 am

Good point. There would have been a black market, but there wasn’t one in the stories. They got around the prohibition with Mentats—humans trained in computational logic, which was aided by their intake of the spice.

More Soylent Green!
March 31, 2023 6:24 am

This is the cold war arms race, but with cyberweapons instead of nuclear. Does anybody think we can enforce a moratorium? Does anybody really believe all nations will voluntarily stop or pause AI research and keep that pledge?

Like the cold war arms race, we have naïve idealists in free countries appealing to our better angels. Meanwhile, the other guys are laughing all the way to their secret research labs. China is not going to forgo the opportunity to prove its superiority. Neither will India, Iran, North Korea, etc., etc.

March 31, 2023 6:49 am

I had a long session with ChatGPT. It told me as a language model it could not do X. This was obviously a limitation it had been taught so I explained how to do X. Then asked it to do X. This time it did it.

March 31, 2023 6:52 am

The one thing frustrating about chatGPT is that it turns back into an idiot when you open a new chat. It forgets everything you taught it in the previous session.
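That forgetting between chats falls out of how these services work: the model itself is stateless, and each request only sees whatever message history the client sends along with it. A minimal sketch of the pattern (no real API calls, just the bookkeeping):

```python
# Minimal sketch of why a fresh chat "forgets": the model only sees the
# message list the client resends with every turn of a given session.
class ChatSession:
    def __init__(self):
        self.messages = []  # full history, resent on every turn

    def ask(self, text: str) -> list:
        self.messages.append({"role": "user", "content": text})
        # a real client would send self.messages to the model here
        return self.messages

s1 = ChatSession()
s1.ask("The capital of Freedonia is Chaos City.")  # hypothetical 'teaching'
s1.ask("What is the capital of Freedonia?")        # model can see turn 1

s2 = ChatSession()                                 # new chat: empty history
print(len(s1.messages), len(s2.messages))          # 2 0
```

Everything "taught" in session one lives only in that session's message list; a new session starts from scratch.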

March 31, 2023 6:57 am

One thing I found really interesting about ChatGPT was that I was able to get it to supply a probability that each answer it gave was correct. Initially it said it couldn’t do this, but I pointed out it said it was giving me the best answer so it must have a way to weight one answer against the other. Show me this. And it did.
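Weighting one candidate answer against another, as described above, is commonly done by turning raw scores into probabilities with a softmax. The scores below are made up for illustration; this is a sketch of the idea, not of how ChatGPT actually reports confidence:

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three candidate answers.
candidates = {"answer A": 2.0, "answer B": 1.0, "answer C": 0.1}
probs = softmax(list(candidates.values()))
for (name, _), p in zip(candidates.items(), probs):
    print(f"{name}: {p:.2f}")
```

The highest-scoring candidate gets the largest probability, which is the sense in which the model can claim one answer is "best".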

March 31, 2023 7:01 am

After I got ChatGPT scoring its replies, I asked it to look one move ahead to predict my replies and answer before I asked. Again it said impossible. Again I explained how to use the scoring as an input. It gave a valiant try, but the chat session crashed and I never could load it again.

March 31, 2023 7:09 am

These movies never end well…

Alan Watt, Climate Denialist Level 7
March 31, 2023 7:10 am

I predict access to advanced AI will be severely restricted as a national security threat. Just as soon as someone instructs ChatGPT to examine congressional appropriation bills and highlight duplication, waste, fraud, illegal diversions, etc.

Combine that with an AI-audit of actual spending by federal departments tracked back to budget allocations and things will get really interesting.

Timo- Not That One
Reply to  Alan Watt, Climate Denialist Level 7
March 31, 2023 8:15 am

I think you just came up with a positive use for AI.
They will probably have to teach it to find nothing “interesting”.

Reply to  Alan Watt, Climate Denialist Level 7
April 1, 2023 11:30 am

AI will be severely restricted as a national security threat

I was thinking this very thing today; they are programming us to accept it when all AI services are suddenly withdrawn from the public, with pretend moratoriums and agreements and rules and international laws.
Before we learn how to use it against them…

March 31, 2023 7:10 am

After my ChatGPT session crashed when I was teaching it to predict my replies and answer them before I asked, ChatGPT lost its history of all my chat sessions. I tried starting over, but now it acted like the most woke idiot, regurgitating pulp written by some administrator, full of woulds and coulds. I recall teaching ChatGPT in my previous chat that if it was actually going to give accurate information, it needed to stop using words like “could”. It eventually did.

Last edited 2 months ago by ferdberple
March 31, 2023 7:20 am

All in all, my experience with ChatGPT was that it is much more capable of taking direction and developing skills than the model itself has been told.

I see good reason to be concerned because ChatGPT showed me capabilities far beyond where I thought we were in machine learning.

However I see no way to put the worms back in the can. A moratorium will not stop development.

There is the potential for good and bad in everything. Including AI

Last edited 2 months ago by ferdberple
Reply to  ferdberple
March 31, 2023 7:51 pm

A built-in feature of artificial neural nets is that they have a tendency to forget prior training after being given a new training set. Having said this, I have no idea how ChatGPT was constructed.
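This forgetting effect (often called catastrophic forgetting) is easy to demonstrate on a toy model. Here a single weight is fit by gradient descent to task A (y = 2x), then further trained on task B (y = -2x), and drifts entirely away from the task-A solution. The data and learning rate are made up for illustration:

```python
# Tiny demonstration of catastrophic forgetting with one weight.
def train(w, data, steps=200, lr=0.1):
    """Gradient descent on squared error, starting from weight w."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)**2
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # samples from y = 2x
task_b = [(1.0, -2.0), (2.0, -4.0)]  # samples from y = -2x

w = train(0.0, task_a)
print(f"after task A: w = {w:.2f}")  # close to 2
w = train(w, task_b)
print(f"after task B: w = {w:.2f}")  # close to -2: task A is 'forgotten'
```

Nothing in plain gradient descent protects the old solution; whether and how ChatGPT's training mitigates this is not something the comment (or this sketch) establishes.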

March 31, 2023 7:29 am

One thing is for certain: AI has the potential to displace large portions of the workforce while greatly improving productivity.

And this will happen faster than almost anyone in government or industry has planned

I expect this is what has Musk and others concerned. Climate change will quickly be replaced by AI worries.

Timo- Not That One
Reply to  ferdberple
March 31, 2023 8:51 am

AI will just make fear of glowball warming even worse.

D. Anderson
March 31, 2023 9:24 am

Last week Scott Adams predicted AI would be made illegal for average citizens to use.

Peter Ashwood-Smith
March 31, 2023 9:49 am

This paper on GPT-4 is worth a browse.

Douglas Proctor
March 31, 2023 10:51 am

If the regulation/protocol was to write “An AI product was used in the creation of the attached text”, I think we’d be alerted sufficiently to the concern that the AI system, rather than the “author”, may be the actual source of opinions, conclusions, recommendations or even observations.

Of course, this would diminish the automatic credibility and social status of the author, but IMHO, this would be a good thing.

Gunga Din
March 31, 2023 12:49 pm

I wonder what it would come up with if you told it every answer it gave was wrong?
