Essay by Eric Worrall
AI is terrifying – but the one thing worse than developing a dangerous superhuman AI is for your enemies to develop it first.
Tech experts call for 6-month pause on AI development
As artificial intelligence makes rapid advances, a group of experts has called for a pause. They have warned of the negative effects runaway development could have on society and humanity.
Several leaders in the field of cutting-edge technology have signed a letter that was published on Wednesday, calling for artificial intelligence developers to pause their work for six months.
The letter warns of potential risks to society and humanity as tech giants such as Google and Microsoft race to build AI programs that can learn independently.
The warning comes after the release earlier this month of GPT-4 (Generative Pre-trained Transformer), an AI program developed by OpenAI with backing from Microsoft.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
…
Read more: https://www.dw.com/en/tech-experts-call-for-6-month-pause-on-ai-development/a-65174081
The open letter is available here.
I have high hopes for the AI scare. As I’ve predicted several times, I believe fear of malevolent AI will be the next great public fear to replace the climate scare. If I keep predicting it I’ll be right sooner or later, you’ll see.
Is AI actually a great threat? From what I’ve seen it is more of a great productivity boost.
A fellow software developer uses ChatGPT all the time for simple software development tasks. For example, we recently had to migrate a configuration script to a different system, one which required a similar setup but used different commands to perform the same functions.
So we asked ChatGPT to do the translation, something we could have done in 15 minutes.
The outcome was perfect. ChatGPT not only did the translation, it correctly identified an additional configuration step we hadn't noticed. Saved us at least 10 minutes' work.
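For readers who want to try this themselves, the workflow is trivial to script. Here is a minimal sketch, assuming the openai Python package's pre-1.0 ChatCompletion interface; the file name, model and prompt wording are illustrative assumptions, not the actual script we migrated:

```python
import openai

openai.api_key = "sk-..."  # your API key

# Hypothetical file name; in practice this would be a real configuration script.
old_script = open("setup_old.conf").read()

prompt = (
    "Translate this configuration script, written for System A, into the "
    "equivalent script for System B. Preserve every setup step, and flag "
    "any step System B needs that System A did not:\n\n" + old_script
)

# Pre-1.0 openai-python interface; newer releases use a client object instead.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

# Always have a human review the output before using it.
print(response.choices[0].message.content)
```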
But I don’t fear ChatGPT will replace me anytime soon – most of my day was spent on problems ChatGPT can’t answer.
What about other professions? A journalist could use ChatGPT to suggest text, to provide inspiration – an enormous boost to productivity. But you would be taking a serious gamble to publish ChatGPT screed, without allowing an expert human editor to at least review the product. And given ChatGPT generates its work from the work of others, there would be a substantial risk of accidental plagiarism.
Will AI be a genuine threat in the future? A decade or two from now, who knows. But fear of the unknown is exactly what you need to build a great public fear campaign. And there is always the rather disturbing, but I believe ultimately necessary, option of merging with our creation, augmenting our human brains with AI implants, if we start to fall behind.
ChatGPT/Google Bard are only the currently public versions of AI. The technology is getting faster, more capable, and smarter every year. These systems are ‘evolving’ exponentially.
Imagine such systems when they are 1,000 times more powerful, which will happen in only a decade or so. This technology is massively disruptive.
Grab your socks and hang on … it’s going to be a very bumpy ride.
‘Imagine such systems when they are 1,000 times more powerful, which will happen in only a decade or so.’
‘1,000 times’ is a number I’ve seen bandied about for how much the alarmists need to improve the spatial resolution of their models in order to do clouds right, i.e., before they stop producing garbage. Are you sure this is only a decade away?
AI is OK, just don't give it Internet access or control of your nukes.
Look at how quickly semiconductor density has increased over the past 40 years. New hardware designs are improving just as quickly (e.g., traditional CPUs replaced with GPUs), which magnifies the power of increases in chip density. New software designs are also getting better, and their improvement is accelerating, with existing AI being used to improve all of the above.
Predictions about the future are always risky, but this seems like a conservative estimate: 1,000 times in a decade works out to a doubling roughly every 12 months (2^10 = 1,024), in line with historical rates.
Ah, the intelligence explosion. One of my favourite movies, Transcendence, deals with this issue: one of the world's leading AI researchers gets uploaded into his own creation, as a way to save his life after he is poisoned by terrorists. As his capabilities expand well beyond human, everyone starts to question whether the AI is the friend they once loved, or a monstrous threat to all humanity.
So is AI a giant search engine with an innovative interface? What is it that we are supposed to be afraid of?
I don't know beans about AI, but this letter from the experts reminds me of the efforts to curtail or even stop nuclear development. That didn't turn out the way they hoped, and I don't see why this effort would be any different. The fact that the information giants are leading the AI efforts is what bothers me. I don't trust them and I think they hold far too much power now. My suggestion would be to define the dangers, regulate all involved in the AI efforts, and severely punish anyone who ignores the regulations. The other thing I would do is develop a plan to go after rogue entities who have no intention of obeying the regulations, no matter what they are.
There is something big but very wrong with current thinking.
We have had centuries of progress from invention, innovation, adoption, improvement. The good ideas have grown to be part of everyday life, while the free market has done away with the worst (like big windmills and electric cars).
In the last decade, this wonderful, beneficial pattern has shifted.
The go-getters of yesterday are getting rare and marginalised. Their free thought is being replaced by lead in the saddle, like "Should I keep going, or will government/bureaucracy/big business disapprove?" And thoughts like this recent example: "We cannot disagree with the thrust of research on viral gain of function, because we risk punishment." Or: "I will not develop my new idea, because I think it will be stopped by regulators."
This shift is devastating. It is bad. It carries the inherent assumption that “some authority” is better than the individual. The whole tax system assumes that tax managers can spend money more wisely than individuals who earned it. There is a chill over free enterprise.
What is wrong with anyone interested in so-called Artificial Intelligence having a play with it or making a living or a business with it?
Why do we fail to be insulted by the thought of an expert group calling for a halt? Who appointed them to speak for us? Who measured that they were expert, and how?
At the source of this problem is a class of people who are paid to tell others what they can or cannot do. The advertising sector is an example, with the latest number one in its top 40 being "Gamble Responsibly". Piss off out of my life, you useless bloodsuckers. I dislike your intrusions – the whole climate change scare would fail without huge sums for paid advertising.
Let AI alone. Remember that clever people invented it and clever people can destroy it. Let free enterprise sort out the good from the bad. And, if possible, keep the busybody regulatory cretins away from it. They provide nothing of merit. Geoff S
“Thank you HAL, now can you open the hatch please”….?
Post says: "…clever people invented it and clever people can destroy it."
Clever people invented Covid that killed lots of people. Then clever people came up with a vaccine that killed/damaged people.
I have lost faith in “clever people” because they aren’t so clever.
This is something new under the son. It is unprecedented, so looking at history for precedents is worthless.
sun
Thought maybe you were Christian and forgot the capital S in son.
Bob, the danger of AI is very simple: learning examples must be provided to guide AI through a digital "thought process". Whoever inputs the learning examples controls the output. No way the Constitution gets input; however, Rules For Radicals is in for sure.
Tony Heller has had a number of posts where he has asked ChatGPT about glowball warming. The stupid thing pulls up all the BS online and sounds exactly like a Warmunist, and when challenged with facts it defends its stupid position like any "good" leftist.
He has coined the name “Artificial Stupidity”.
Looks to me like the very first thing public AI has been programmed to do is push the same old leftist propaganda. Not an auspicious beginning.
An editor/publisher might use ChatGPT to eliminate reporters, a great improvement to the bottom line.
In fact, considering the accuracy of reporting and commitment to objectivity I wonder if ‘The Conversation’ hasn’t upped its collective intelligence by doing so already.
Eugene Volokh, in The Volokh Conspiracy, has an ongoing series on large-scale libels. ChatGPT, when prompted, will produce claims of sources to back up libelous claims, with the minor little problem being that they do not exist.
Volokh, a law school professor, was prompting the program to make claims about Volokh's own personal history. Some of the claimed sources were not only false, but imaginary.
It seems rather buggy, but about as reliable as some legacy media outlets.
Some of the claimed sources were not only false, but imaginary.
How long before it starts creating those sources?
‘not only false, but imaginary’
not trying to be a nit, but the difference being?
I would guess “false” = an actual source with false information and “imaginary” = a made up source.
It also gave Tony Heller false degrees, etc.
” But you would be taking a serious gamble to publish ChatGPT screed, without allowing an expert human editor to at least review the product. ”
There has been no such problem with gatekeeping and publishing absurd climate hysteria, why would this be any different ?
Fair point 🙂
I've experimented with both ChatGPT and Bing. When I simply asked them who I am, they provided Wikipedia-style responses, and both had serious errors. (For example, I did not lose my Scientific American assignment due to skepticism regarding some aspects of global warming.) When I corrected some of the errors, both thanked me and corrected their responses. The problem now is how does one know who's doing the correcting? Also, one response probably included some plagiarism. At least Bing uses footnotes for its sources.

If I were still teaching a class in which original essays are required, I would assign a topic with unusual words. I would then cross-check the students' essays for those words. I suspect this would quickly disclose at least some of those who relied on AI. I wouldn't be surprised to see several or more essays beginning with identical sentences.
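The cross-check itself could be a few lines of code. A rough sketch, where the essay folder, the word-length cutoff and the thresholds are all made-up assumptions:

```python
from collections import Counter
from pathlib import Path

# Assumes one plain-text essay per file in an "essays" folder.
essays = {p.name: p.read_text().lower() for p in Path("essays").glob("*.txt")}

# Count how many essays each word appears in.
doc_freq = Counter()
for text in essays.values():
    doc_freq.update(set(w.strip(".,;:!?\"'()") for w in text.split()))

# "Unusual" here is a crude stand-in: long words shared by several essays.
for word, count in doc_freq.items():
    if len(word) >= 10 and count >= 3:
        print(f"'{word}' appears in {count} essays - worth a look")

# Identical opening sentences are an even stronger signal.
openings = Counter(text.split(".")[0].strip() for text in essays.values())
for sentence, count in openings.items():
    if count > 1:
        print(f"{count} essays open with: {sentence!r}")
```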
Forrest M. Mims III
The problem now is how does one know who’s doing the correcting?
How do you know that the “correction” sticks anywhere outside your “conversation”?
AI is terrifying. Yeah, and a nuclear power station accident will destroy the earth and CO2 is a dangerous greenhouse gas.
And I am Bette Davis.
Loved you in All About Eve! And those eyes…wowza!
“Tech experts call for 6-month pause on AI development…”
So they reckon they are six months more intelligent than the rest of us?
Addendum: I opened a chatGPT page several days ago with a question mark. ?
I have not yet received a reply.
42
as any fule kno
But it will take thousands of years to compute that answer.
When OpenAI can take the logical step of recognising that climate models predicting the current ocean sustaining temperatures of more than 30°C are wrong, I will acknowledge these systems are more than directed search engines with an ability to construct sentences.
I do take some delight in getting it to apologise for errors like that.
In that regard, it is far more entertaining than the Nick Stokes bot that never admits fault.
Pressing it toward the logical conclusion that climate models predicting more than 30°C are wrong always results in an error.
The standing challenge is to get ChatGPT to state that climate models are wrong.
ChatGPT is still unable to do anything beyond regurgitating, but it can still be useful to assist in putting a story together. This is what I got as a summary on offshore wind farms, albeit with a loaded question:
No one who has an ocean view should have that disturbed by ugly industrial machines. Likewise for those with a serene rural vista.
I propose the first offshore wind turbines in Australia be built in Lake Burley Griffin. The second off Bondi and the third off Manly.
“No one who has an ocean view should have that disturbed by ugly industrial machines.”
The elites on Martha's Vineyard, who wish wind and solar "farms" on the rest of us, still resist having them within sight of their mansions and yachts – so I hope they get built there.
I just saw a GPT-4 proof that the primes are infinite… in the style of Shakespear… that's not just regurgitation.
Spelling: Shakespeare, please.
Geoff S
But can it prove the Goldbach Conjecture? Also preferably in the style of Shakespeare? 🙂
I’m going to the loo. None of you’se scurvy jacks look at my cards while I’m gone, you hear?
Asking to halt AI development is a bit like that famous moratorium on chemical weapons research; we all had to stop immediately, until America finished building their biowarfare labs all over Eastern Europe and Africa…
Just asking, but did any of these terribly concerned worthies sign any letters to demand we stop cutting off kids’ genitals? Inject people with covidiocy? Did one of them bother to demand we visit our elderly, and not let them be killed with opiates?
Has any one of these signatories offered money to buy proper books for schools, or at least offered a book of matches to burn the homosexual pornography in the pre-school library?
This whole exercise is just posturing and glory seeking…
A lot of AI researchers think the quick path to general AI is to dissect a human brain. The socially acceptable description of this process is to dissect a dead brain, but I think everyone knows you really need to dissect a living brain, dismantle the brain of a real, breathing, living human being, test the functions of that brain like you would test the functions of a PC with the hood off, brain after brain, until human biology yields all its secrets.
Eric, AI researchers have tried to create AI by mimicking how they believe the human brain works for at least 50 years with no success. Like String Theory, it is likely the wrong track.
But that doesn’t mean somebody won’t try by using live human test subjects. Most likely involuntary test subjects, like Chinese political prisoners.
Yep
String Theory gets a bad name just because none of the quantum particles it predicts have been discovered. It would take an enormous capital expense to build equipment capable of producing one. But if they ever find one, String Theory will become the theory of everything.
U.S. dismisses Russian claims of biowarfare labs in Ukraine
https://www.reuters.com/world/russia-demands-us-explain-biological-programme-ukraine-2022-03-09/
Yahoo says that they did exist.
I also remember watching a video released on the first day of the Russian invasion, by a US agency explaining how the biolabs in Ukraine were for good not evil. Can’t find it now though.
They said the labs did gain-of-function experiments.
Can somebody tell me what the difference is between gain-of-function experiments and bioweapons development?
By the way, the Russian airstrikes on the first day of the invasion included targets corresponding exactly to thirteen of these labs in western and central Ukraine.
Really, Zorzin? You gonna quote Reuters at me? Are you out of your friggin’ mind? Next you’ll refer me to Baal Gates for health advice…
Believing too many mutually contradictory "facts" leads to cognitive dysfunction, and eventually psychosis. Internal self-consistency of argument is important; otherwise you end up with each new post contradicting the divine truth of previous posts, over and over, saying things that cannot be true in the world you described yesterday.
Sad.
Hey, asshole, wanna step outside?
No guns, no knives, no chains or tyre irons… and this time, don’t bring your mom!
“I’m afraid I can’t do that Dave.” -ChatGPT
Too late. The genie is out of the bottle. As a wise man once said, “A can of worms, once opened, will never again contain the worms.”
I played with ChatGPT the other day; I found it pretty useless. I'm a software developer and asked it to do two things: write an Ada program to calculate the factorial of a number passed in on the command line (it wrote one that prompted the user for the number, which isn't really the same thing, and also missed out a couple of lines that were essential), and show how to implement RSA PSS-R with OpenSSL (it gave me a demo of using openssl from the command line to use RSA PSS – not the same thing – then, when I told it the answer was wrong, it repeated it, changing a command line option from 'pss' to 'pssr', which was also wrong, as there is no 'pssr' option). From a developer's point of view, I'd avoid it, and try to never forget that, in Linux, 'sudo rm -fr /' does bad things!
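For the record, the command-line version I asked for has roughly this shape; sketched here in Python rather than Ada, purely for brevity:

```python
import math
import sys

def main() -> None:
    # The number comes from the command line, not from prompting the user.
    if len(sys.argv) != 2:
        sys.exit(f"usage: {sys.argv[0]} <non-negative integer>")
    try:
        n = int(sys.argv[1])
    except ValueError:
        sys.exit(f"not an integer: {sys.argv[1]}")
    if n < 0:
        sys.exit("factorial is undefined for negative numbers")
    print(f"{n}! = {math.factorial(n)}")

if __name__ == "__main__":
    main()
```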
I asked it a number of other questions, like details about some hardware devices we use at work, stuff about bands I like, and it got the vast majority of those answers wrong too. Even after it was told the correct answers, if I re-asked a question, it would more than likely give me the wrong one again.
Basically, IMO, useless.
In my field (analysis of survey data) the two obvious applications are summarizing large sets of text responses and interpreting cross tabulations. The text-davinci-003 model does a remarkably good job on both. A cross tabulation of Education by Income is correctly interpreted as showing that income increases with education, and the text summaries are uncannily accurate. So already far from useless.
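For anyone unfamiliar with the term, a cross tabulation is just a two-way count table. A minimal sketch with invented data; in practice the table comes from real responses, and its text form is what gets handed to the model:

```python
import pandas as pd

# Invented toy responses; real data would come from the survey file.
df = pd.DataFrame({
    "education": ["high school", "bachelor", "bachelor", "graduate",
                  "high school", "graduate", "bachelor", "graduate"],
    "income":    ["<50k", "50-100k", "50-100k", ">100k",
                  "<50k", ">100k", "<50k", "50-100k"],
})

xtab = pd.crosstab(df["education"], df["income"])
print(xtab)

# xtab.to_string() is the text you would paste into the model, with a
# prompt like "Summarize the relationship shown in this table."
```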
Ooh, boy! Live one!
By survey data I assume you mean poll stats. Do you propose to tell me, all things considered, that education causes better income? Let me point out that correlation does not imply causation:
Just by looking people in the eye, you can observe that people who come from a financially better-off home do better in life. It has to do with examples and an upbringing with understanding, of which poor homes have little or none. A financially stable home will be more likely to desire and afford higher education, further widening the gap between privilege and pauperism in your survey.
I am willing to bet that, comparing exact circumstances, at any one single social level, wit and practical ability beats the pants off higher education every time. Especially if you include the “third economy” participants. You know, the ones whose sales counter is on the corner of Smith and Fifth somewhere between lunch hour and 2am, look for the guy in the red hat…? Well, his boss can probably read if he moves his lips a bit, and he earns more than you.
So, I think your AI is leading you up the garden path…useless.
ChatGPT is advanced automation. There is no intelligence involved. ChatGPT does not produce text of its own accord; it needs a human to feed it something and humans to review what it produces.
Besides that, having ChatGPT write newspaper articles would most probably be an improvement.
The western countries are already lagging China in the AI field. A moratorium will only make things worse; we just can't afford it, for security reasons. AI research is not the problem. One should make appropriate laws controlling its use, even forbidding its use for certain applications. But forbidding things will never replace good lawmaking, and good lawmaking requires competent politicians who are willing to do their job for the people they represent.
I believe many journalists will be casualties of AI.
An agency delivers the news and AI writes the articles. This is not creative writing, so no human is required.
Poof, there goes half of the journalists in the world.
It’s not that simple. Asking the right question will be a valuable skill, even if the AI does most of the work. AI will amplify productivity for the foreseeable future, just like any other tool.
Not sure that AI is really ready to replace John Pilger or Nick Ut. It's the "agency delivers the news" bit that worries me. Leave it to AI and we would soon be treated like mushrooms.
And that’s a bad thing?
“But I don’t fear ChatGPT will replace me anytime soon – most of my day was spent on problems ChatGPT can’t answer”.
And right there is an admission of why we should have concerns about AI’s unfettered evolution.
I can honestly say all of my days are spent on matters AI/ChatGPT can’t answer or indeed even begin to be interested in. That is no comfort.
The 3 Laws of Robotics come to my mind, thanks to Isaac Asimov…
Don’t forget the “Zeroth law”
I used to think it was odd that in Frank Herbert's Dune world, computers had been outlawed. They're an unalloyed good in so many ways. AI makes me think he had a point. What is AI used for, other than the wholesale spying on millions of people and invading their remaining privacy? Okay, software development – but that drops the context of what the software is used for. I'm talking about things like facial recognition, license plate readers, palm scanners, crime prediction (when "crime" is defined as anything the state doesn't like): things that are sold as convenience but are really surveillance, or easily corrupted into it. Is that not AI? I'm asking you guys in good faith, not as a Luddite but as a radical pro-capitalist, near-anarchist English major, because I haven't yet seen an application that didn't make me think, "FFS, get out of other people's lives, you disgusting voyeurs."
What's more odd than outlawing computers is the notion that there were no black-market computers despite the law. Everything that's outlawed results in a black market. And you can guarantee that those shifty Harkonnens had no respect for anybody else's law to begin with. (Maybe there were black-market computers in Dune, but I don't remember them… the Mentats were basically the human replacement, as I recall.)
Good point. There would have been a black market, but there wasn't one in the stories. They got around the prohibition with Mentats – humans trained in computational logic, aided by their intake of the spice.
This is the cold war arms race but with cyberweapons instead of nuclear. Does anybody think we can enforce a moratorium? Does anybody really believe all nations will voluntarily stop or pause AI research and keep that pledge?
Like the cold war arms race, we have naïve idealists in free countries appealing to our better angels. Meanwhile, the other guys are laughing all the way to their secret research labs. China is not going to forgo the opportunity to prove its superiority. Neither will India, Iran, North Korea, etc., etc.
I had a long session with ChatGPT. It told me as a language model it could not do X. This was obviously a limitation it had been taught so I explained how to do X. Then asked it to do X. This time it did it.
The one frustrating thing about ChatGPT is that it turns back into an idiot when you open a new chat. It forgets everything you taught it in the previous session.
One thing I found really interesting about ChatGPT was that I was able to get it to supply a probability that each answer it gave was correct. Initially it said it couldn't do this, but I pointed out that it said it was giving me the best answer, so it must have a way to weigh one answer against another. Show me this. And it did.
After I got ChatGPT scoring its replies, I asked it to look one move ahead to predict my replies and answer before I asked. Again it said impossible. Again I explained how to use the scoring as an input. It gave a valiant try, but the chat session crashed and I could never load it again.
These movies never end well…
I predict access to advanced AI will be severely restricted as a national security threat. Just as soon as someone instructs ChatGPT to examine congressional appropriation bills and highlight duplication, waste, fraud, illegal diversions, etc.
Combine that with an AI-audit of actual spending by federal departments tracked back to budget allocations and things will get really interesting.
I think you just came up with a positive use for AI.
They will probably have to teach it to find nothing “interesting”.
I was thinking this very thing today; they are programming us to accept the day when all AI services are suddenly withdrawn from the public, by way of pretend moratoriums and agreements and rules and international laws.
Before we learn how to use it against them…
After my ChatGPT session crashed when I was teaching it to predict my replies and answer them before I asked, ChatGPT lost its history of all my chat sessions. I tried starting over, but now it acted like the most woke idiot, regurgitating pulp written by some administrator, full of woulds and coulds. I recall teaching ChatGPT in my previous chat that if it was actually going to give accurate information, it needed to stop using words like "could". It eventually did.
All in all, my experience with ChatGPT was that it is much more capable of taking direction and developing skills than the model itself has been told.
I see good reason to be concerned because ChatGPT showed me capabilities far beyond where I thought we were in machine learning.
However I see no way to put the worms back in the can. A moratorium will not stop development.
There is the potential for good and bad in everything. Including AI
A built-in feature of artificial neural nets is that they tend to forget prior training after being trained on a new data set (so-called "catastrophic forgetting"). Having said this, I have no idea how ChatGPT was constructed.
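You can see the effect in a toy experiment. A sketch using scikit-learn's digits data, illustrating the general phenomenon only, not a claim about how ChatGPT in particular is built:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
first, second = y < 5, y >= 5  # task 1: digits 0-4, task 2: digits 5-9

clf = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
clf.partial_fit(X[first], y[first], classes=np.unique(y))  # declare all classes up front

# Phase 1: train only on digits 0-4.
for _ in range(50):
    clf.partial_fit(X[first], y[first])
print("accuracy on 0-4 after phase 1:", clf.score(X[first], y[first]))

# Phase 2: train only on digits 5-9 - no further exposure to the first task.
for _ in range(50):
    clf.partial_fit(X[second], y[second])
print("accuracy on 0-4 after phase 2:", clf.score(X[first], y[first]))
```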
One thing is certain: AI has the potential to displace large portions of the workforce while greatly improving productivity.
And this will happen faster than almost anyone in government or industry has planned
I expect this is what has Musk and others concerned. Climate change will quickly be replaced by AI worries.
AI will just make fear of glowball warming even worse.
Last week Scott Adams predicted AI would be made illegal for average citizens to use.
This paper on GPT-4 is worth a browse.
https://arxiv.org/pdf/2303.12712.pdf
If the regulation/protocol were to require the notice "An AI product was used in the creation of the attached text", I think we'd be sufficiently alerted to the concern that the AI system, rather than the "author", may be the actual source of opinions, conclusions, recommendations or even observations.
Of course, this would diminish the automatic credibility and social status of the author, but IMHO, this would be a good thing.
I wonder what it would come up with if you told it every answer it gave was wrong?