Essay by Eric Worrall
Biden has just appointed Harris to promote responsible AI – in my opinion, the opening salvo in an attempt to install fear of AI as a replacement for the failed climate alarmist movement.
FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety
Today, the Biden-Harris Administration is announcing new actions that will further promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety. These steps build on the Administration’s strong record of leadership to ensure technology improves the lives of the American people, and break new ground in the federal government’s ongoing effort to advance a cohesive and comprehensive approach to AI-related risks and opportunities.
AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks. President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.
Vice President Harris and senior Administration officials will meet today with CEOs of four American companies at the forefront of AI innovation—Alphabet, Anthropic, Microsoft, and OpenAI—to underscore this responsibility and emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society. The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues.
This effort builds on the considerable steps the Administration has taken to date to promote responsible innovation. These include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year.
The Administration has also taken important actions to protect Americans in the AI age. In February, President Biden signed an Executive Order that directs federal agencies to root out bias in their design and use of new technologies, including AI, and to protect the public from algorithmic discrimination. Last week, the Federal Trade Commission, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission, and Department of Justice’s Civil Rights Division issued a joint statement underscoring their collective commitment to leverage their existing legal authorities to protect the American people from AI-related harms.
The Administration is also actively working to address the national security concerns raised by AI, especially in critical areas like cybersecurity, biosecurity, and safety. This includes enlisting the support of government cybersecurity experts from across the national security community to ensure leading AI companies have access to best practices, including protection of AI models and networks.
Today’s announcements include:
- New investments to power responsible American AI research and development (R&D). The National Science Foundation is announcing $140 million in funding to launch seven new National AI Research Institutes. This investment will bring the total number of Institutes to 25 across the country, and extend the network of organizations involved into nearly every state. These Institutes catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good. In addition to promoting responsible innovation, these Institutes bolster America’s AI R&D infrastructure and support the development of a diverse AI workforce. The new Institutes announced today will advance AI R&D to drive breakthroughs in critical areas, including climate, agriculture, energy, public health, education, and cybersecurity.
- Public assessments of existing generative AI systems. The Administration is announcing an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles—on an evaluation platform developed by Scale AI—at the AI Village at DEFCON 31. This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align with the principles and practices outlined in the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models. Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation.
- Policies to ensure the U.S. government is leading by example on mitigating AI risks and harnessing AI opportunities. The Office of Management and Budget (OMB) is announcing that it will be releasing draft policy guidance on the use of AI systems by the U.S. government for public comment. This guidance will establish specific policies for federal departments and agencies to follow in order to ensure their development, procurement, and use of AI systems centers on safeguarding the American people’s rights and safety. It will also empower agencies to responsibly leverage AI to advance their missions and strengthen their ability to equitably serve Americans—and serve as a model for state and local governments, businesses and others to follow in their own procurement and use of AI. OMB will release this draft guidance for public comment this summer, so that it will benefit from input from advocates, civil society, industry, and other stakeholders before it is finalized.
Conservative icon Jordan Peterson has also said some scary things about the rising threat of AI.
My prediction: AI policy will become a significant factor in the 2024 election. My crystal ball tells me the Democrats will attempt to use fear of AI to undermine support for conservatives, much as I believe they used fear of Covid to undermine support for conservatives in 2020 – using that fear to attract votes for their coming plan for an AI development lockdown.
Why is fear of AI so politically useful? Because the fear of AI is bipartisan.
Climate alarmism these days mostly only works on left wing voters, so it’s increasingly useless as a political tool – it only works on people who already intend to vote for left wing candidates. But with right wing icons like Jordan Peterson also talking up the threat of AI, fear of AI has the potential to draw support from across the political spectrum.
Is AI a genuine threat? As a software developer who has built bespoke AIs for clients, my answer to that is “not yet”, and maybe “not ever”.
Like the early years of climate alarmism, the biggest source of fear about AI is uncertainty. Lurking somewhere in the future is the threat of the technological singularity, that moment in time when someone, somewhere builds an AI which starts improving its own capabilities at a geometric rate, rapidly approaching infinite intelligence.
Sounds terrifying – what if the liberals at Google get there first, and develop irresistible political campaigns to defeat their opponents? Or what if Communist China gets there first, and uses their AI capabilities to expand their control over the entire world?
But building an AI that capable is a lot like building a nuclear fusion reactor – always 10-20 years in the future.
My prediction is attempts to build superhuman AIs will suffer a problem analogous to nuclear fusion flameout, in which researchers keep losing control of the increasingly unstable plasma, and are forced to quench the reaction.
You just have to look at human intelligence, and human mental illness. Our intelligence is the product of a billion years of evolution, yet despite all that opportunity for natural selection to fix the bugs, humans still suffer from a lot of mental illness. The slightest imbalance, aberration or mistake in our psychological balance rapidly leads to dysfunction.
My prediction: AI scientists will go through a horrible and very prolonged period of flicking the switch, watching their indicators rapidly climb into the red zone, then shutting down almost immediately to prevent more damage.
Building a general AI capable of matching human capability, let alone surpassing it, is an attempt to build the most complicated machine ever constructed. When you think about it, it’s obvious that researchers are going to face a lot of problems – many of them intractable.
Huge, unsolved problems in understanding how intelligence works lurk just beyond the firelight of our current knowledge – problems we have only begun to appreciate.
ChatGPT, impressive as it is, doesn’t think like we do; it regurgitates – just like a kid copying their homework out of a book, then changing a few words to conceal the plagiarism.
AI is a remarkable tool; it will produce many marvels and wonders which will enrich our lives. But AI as an existential threat to humanity is still many decades in the future, if not centuries.
My message to Jordan Peterson, and every other libertarian who is currently discussing fear of AI: Be careful you don’t become a tool of the people you oppose. Because fear is a path to the dark side, to tyranny and servitude. The enemies of freedom will use your words, and use the growing public fear of AI, just as they have used every other public fear, to attack and undermine our freedom.
The following is the trailer for Transcendence, an under-appreciated science fiction movie which explores fear of AI driving good people to lose their moral compass and do horrible things.
Below is my version of ChatGPT, which in the tradition of AI research I shamelessly plagiarised off someone else, then adapted to my needs. Like ChatGPT, the AI below uses a language model to generate text, but instead of answering questions, my chat engine generates climate psychology papers.
ChatGPT might have a more sophisticated language model, but I think my chat engine is funnier.
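[EW: The actual chat engine isn’t listed here, but the family of techniques is simple enough to sketch. Below is a minimal, hypothetical example of the kind of Markov-chain language model a parody paper generator might use – the corpus phrases, variable names and function names are invented for illustration, and are not taken from my chat engine or from ChatGPT.]

```python
import random

# A toy corpus standing in for sentences harvested from real papers.
# Every phrase here is invented purely for illustration.
corpus = (
    "climate anxiety predicts denial of climate science in skeptical cohorts . "
    "denial of climate science correlates with conspiratorial ideation . "
    "conspiratorial ideation predicts climate anxiety in online cohorts ."
)

# Build a first-order Markov chain: for each word, record every word
# that follows it anywhere in the corpus.
words = corpus.split()
chain = {}
for current, nxt in zip(words, words[1:]):
    chain.setdefault(current, []).append(nxt)

def generate(seed, length=15):
    """Walk the chain from `seed`, picking a random successor each step."""
    out = [seed]
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:   # dead end: no recorded successor
            break
        out.append(random.choice(successors))
    return " ".join(out)

print(generate("climate"))
```

Because the chain only knows which words follow which, the output is locally plausible but globally meaningless – which, the author suggests, is rather the point of the joke.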
Update (EW): Max More wrote a great essay highlighting the absurdity of “Clippy the supervillain”.
AI is overhyped and can be easily manipulated by withholding selected information from a database. Its claim to fame is that it can sort through vast amounts of previously entered information until it finds the most likely ‘solution’ that fits, but it can’t produce its own data. If AI were real it wouldn’t need a database. Call me skeptical.
As long as people see robot dogs running around, or human-sized robots jumping or doing backflips, knowing these machines could be programmed – or decide on their own – to kill them, you will not get rid of the fear.
Asimov’s rules don’t apply to the unscrupulous, and many companies and governments are proving themselves unscrupulous. So until you can prove robots can’t be programmed or decide on their own to harm people, AI will never be looked on with favor.
Organised crime, including the far left, are using the internet very effectively. “AI” capability opens up new avenues for them. The people as a whole are very good at adapting to and combatting new threats, but it takes time and effort. In the meantime it’s best to be alert, not alarmed.
Mike, is ‘organised crime’ a euphemism for Government?
I also note how readily this latest ‘government’ initiative around AI promotes the notion of a Biden-Harris administration.
Are the woke wonks in the works of the Democrat Party finally acknowledging they have to bring Harris into political focus, now that it is beyond obvious that Joe is not all there?
As far as AI goes, I have only a layman’s understanding of the subject, but what I know suggests we need to be very mindful of releasing into the existing interconnected systems, such as the Net (which controls everything beyond those manually toiling), something that has the capacity to self-expand with a purpose.
“Mike, is ‘organised crime’ a euphemism for Government?”
It is with regard to the current administration in the USA. Joe Biden and the radical Democrats *are* organized crime.
Sounds a bit like the human brain. A whole lot of stored information based on education and lived experience logged away in a retrievable format to be called on when needed.
The difference, so far, as I see it is that all humans have different education and lived experience and we are capable of negotiating and compromise.
We also have things like emotions and gut instinct, irrational thought etc.
And how can a computer be programmed to mimic conditions such as autism, bearing in mind that degrees of autism (and numerous other conditions) reveal extraordinary talents in some people.
Can AI cut an ear off for the sake of its art?
Artificial neural networks learn from real-world experience, such as enabling a robot to move about and manipulate objects in a reasonably competent manner. They can also learn from digital information. The active circuitry includes the memory; it does not depend on access to a database, nor on preprogrammed instructions beyond those provided to start it functioning. It has often been written that the network’s complexity as it learns exceeds human attempts to untangle or understand it. While I haven’t read much detail about the AI models being promoted, I have not seen mention of neural networks in the press about AI, but I would not be surprised to learn neural networks are involved.
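[EW: To make the point above concrete, here is a minimal sketch of a single artificial neuron learning from examples rather than looking anything up – a textbook perceptron trained on the logical AND function. This is a deliberately tiny illustration, not a description of any particular commercial system; the learned “memory” is nothing but the weights.]

```python
# A single artificial neuron (perceptron) learning logical AND.
# The training data: inputs and the desired output for each.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights: the network's entire learned "memory"
b = 0.0          # bias
rate = 0.1       # learning rate

def predict(x):
    # Fire (output 1) if the weighted sum of inputs exceeds zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Train: nudge the weights a little after every mistake.
for _ in range(25):
    for x, target in samples:
        error = target - predict(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        b += rate * error

print([predict(x) for x, _ in samples])  # → [0, 0, 0, 1]
```

Nothing in the trained neuron stores the table of examples; the behaviour lives entirely in three numbers adjusted by experience, which is the point being made about neural networks and databases.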
I say that A.I. is WAAAY overblown; all computers are “artificial intelligence” and can only do what they are programmed to do. They can NEVER think for themselves.
People condemned TV as the end of social society, freedom of thought and free speech.
Your paper reads as if it was produced by a stoned grad student who tried to write a paper by taking various stimulants.
He he – I was trying to base its product on real climate psychology papers.
you should send one for peer review to a top rated climate journal
maybe you could “program” it to mimic the likes of Mickey Mann – use their faulty logic and exaggerate it 🙂
You mean, written by Mosher?
I am skeptical about AI.
Especially GAI (or whatever it’s called)
Clever use of databases and calculations to optimise designs, treatments etc – yup – but this is “just” an advance in general computing power – not intelligence.
As a human, I have an element of randomness driven by how I am feeling, hormone levels, how comfortable I am etc and so on. This randomness surfaces in myriad ways daily. Breakfast choice, shall I cut the grass, weed the flowers, build my model railway or work on the ‘67 classic? Dunno – ask me tomorrow.
I generally walk past the “Big Issue” seller – I donate to multiple charities anyway – but sometimes – for reasons I don’t understand – I will stop and talk and donate to the homeless guy.
This randomness and complexity is a feature not a bug. I am unclear how AI can possibly replicate this…other than perhaps by buggy software which could lead to all sorts of unintended consequences…..
Pure poetry towards the end there…
I think the AI debate is distracting from an agenda.
…When the real truth is that every machine has an owner, and it is, if not malfunctioning, giving perfect expression to the owner’s orders.
Discussing AI as an “issue” should focus on using existing legislation to prosecute every single cent or drop of blood owed by the OWNERS of AI.
Every single new law promulgated will serve to put distance between the risks of AI and the liability of the owners. Test me on that once they start…
Existing liabilities laws are all we need to hold AI creators responsible (perhaps with a little clarification). We do not need new regulations, new regulatory bodies, nor Kamala Harris to mastermind them.
“nor Kamala Harris to mastermind them”. Ha, Ha, that genuinely made me laugh, many thanks..
You don’t have to look much further than today’s cars that are loaded up with features you don’t like and can’t opt out of:
Man drops wife off at airport. Her key fob is in her purse, he left his fob back in the house. He’s locked out of the car. What does he do?
You’re on cruise control 70mph, and the car in front of you has slowed down because it’s foggy. The cruise control doesn’t tailgate and automatically slows down. The car in front of you turns off, and cruise control jams on the gas and speeds back up to 70 right into the fog. You jam on the brakes and scare your passengers.
You put the car in park get out for something and the car “Honk honks” at you.
Park at your destination, turn the car off, get out and the car “Beeps” at you to check the rear seat.
Warning lights flash at you when there isn’t a car in the next lane.
The wheel shaker responds to old stripes on the road and ignores the new ones.
On a long smooth straight away the car pops up a little warning to put your hands back on the wheel.
Make an emergency stop, and loud claxons and bright flashing red lights erupt from the dash board as if you didn’t have enough to contend with.
VP Harris is perfect for the role of fighting intelligence. She’s a tossed word salad jargonator par excellence.
She is hopeless.
Biden putting Harris in as AI ‘czar’ is proof positive of his cognitive impairment. Putting Natural Ignorance in charge of Artificial Intelligence is not a good look.
On a more technical note, I agree with Eric on the dubious future of AI, and hence no likely reason to fear it.
Back in the late 1990’s when I was Motorola’s Chief Strategy Officer, the CTO and I worked up a presentation on the future that we used both internally and externally. We started with the future of Moore’s law, predicting it would run until the mid to late 2020’s before faltering when transistors got so small that quantum uncertainty started making them unreliable without triple redundancy and 2/3 ‘voting’. So far, true. We then predicted smart phones about 2010-2015. Check. (We thought it would be MOT, not Apple—boy were we wrong about that detail.) We predicted general speaker independent ambient voice recognition about 2015 (Siri, Alexa,…) Check. We predicted that except in specialized constrained circumstances (chess, go, medical diagnostics —things like mammogram reading) useful general AI was unlikely ever. Our reasoning was twofold. First, it was too complicated. Second, AI cannot be more generally intelligent than the humans who design and code general AI systems.
ChatGPT isn’t general AI in the sense we meant. It machine-learns a bunch of existing stuff, then has language algorithms that stitch that trained memory together in ‘novel’, human-comprehensible ways using rules of grammar and writing taught through high school. It cannot prove new math theorems or solve novel problems on which there is no literature to train.
As a concrete example, I hold two very fundamental patents on an energy storage material that is 40% more energy dense, 2x more power dense, and therefore producing electricity storage devices (supercaps) 30% less costly than what was then on the market. I got there by first reading all the technical literature on this class of energy storage devices, finally realizing it was simply inconsistent— so somehow at least partly wrong. Then thinking hard about the energy storage mechanism physics (Helmholtz double layer—same as what produces lightning) realized what the mistake was in all 40 years of scientific literature. It made a surface area assumption about the usual device activated carbon that could NOT be true once an electrical charge was applied. Then developed a mathematical equation (intrinsic capacitance, from first principles using the then alternate—now official—definition of the coulomb) that successfully and precisely predicted all the dozens of to then seemingly contradictory experimental results using different electrolytes on solid surface smooth electrodes (gold, platinum). Then used that equation plus an obscure random packing theorem from mathematical topology to conceive of, experimentally have produced, have validated (at NRL in part using a $multimillion Navy research grant) and finally patent, my superior energy material and its method of production. A modicum of intelligence did that—AI never could IMHO.
Fascinating bit of history there – I love the discussions on WUWT , most of which I barely comprehend. As an ex-instrument engineer, this one I just about understood – thankyou.
I could imagine AI, properly tasked, could make some of the tasks you describe a lot quicker. But nothing threatening there, just another tool, an improved microscope or a faster processor, you still need some directing intelligence to pick the tasks and define the objectives.
Yes. But I don’t see how any AI could have accomplished the first step (inconsistent literature, some of which must therefore be incorrect), the second step (find the error), or the third step (the intrinsic capacitance equation resolving a class – solid smooth electrodes of known surface area – of experimental electrolyte ‘inconsistencies’). Those tasks require generalized logic, not memory or any programmable algorithm.
Yes, but if I may disagree slightly, it should be Natural STUPIDITY rather than Natural Ignorance.
We all start out ignorant. You can’t fix stupid.
The time has come to do what we need to do and that day is every day in the totality of the full spectrum of days and the diversity of different doings. Ok, I made that up but not entirely.
My only real “fear” of AI is that it will be able to pump out authentic-sounding paragraphs which will only serve to persuade the gullible even further. And it can pump these out at an alarming rate, kinda like a hockey stick.
Most people aren’t curious, lack observation skills, and are too proud to accept notification of their failures in these areas.
It’s impossible not to see future connection between AI and governments’ obsession with classified information. There might be a time when all classified documents will be accessible only to AI, which will then decide who is authorized to see them, if anyone. Some sources of information will need to be sequestered in order to eliminate things like the Assange affair. It appears that the future may have been designed by Franz Kafka.
Rud Istvan …. re your “putting Harris in as AI ‘czar’ is proof positive of his cognitive impairment. Putting Natural Ignorance in charge of Artificial Intelligence is not a good look.” 1000%, cannot agree more. And it shows his ignorance of, and disregard for, leading on important issues of national security.
Has anyone asked Chat GPT what percent of scientists agree that humans are causing global warming and climate change?
It’s the contrast between ChatGPT and Alpha Zero. ChatGPT will just reply with a sentence which combines the phrases in its database of prose in the most likely way for them to occur. It doesn’t hold a database of scientists and their views that it can search to give you a number.
Alpha Zero, in chess, doesn’t know or care what anyone has thought or said about the position it’s appraising. It does its appraisal and makes a move, which you can call a prediction. And it corresponds to reality, as measured by the fact that it wins.
Caveat, I am no kind of expert on AI. The bit about ChatGPT is my understanding of how Large Language Models work, could have misunderstood… But I don’t think they will beat Alpha Zero over the board.
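[EW: The contrast the commenter draws can be illustrated without any neural network at all. Classical game engines rest on exhaustive game-tree search – appraise every position reachable from here, assume both sides play best, pick the move that wins. Alpha Zero adds a learned evaluation and guided search on top of that core, but the core is the same. A toy sketch on a Nim-like game (the game and function name are chosen purely for illustration):]

```python
# Game-tree search on a toy game: a Nim-like pile from which each
# player removes 1 or 2 objects; whoever takes the last object wins.

def best_move(pile):
    """Return (score, move) for the player to move; score +1 means a forced win."""
    best = (-2, None)                      # worse than any real outcome
    for take in (1, 2):
        if take > pile:
            continue
        if take == pile:                   # taking the rest wins outright
            return (1, take)
        opp_score, _ = best_move(pile - take)
        score = -opp_score                 # the opponent's win is our loss
        if score > best[0]:
            best = (score, take)
    return best

# Pile of 4: taking 1 leaves 3, a losing position for the opponent.
print(best_move(4))  # → (1, 1)
```

The engine never consults anyone’s opinion of a position; it appraises by searching, which is why its “predictions” correspond to reality – it either has a forced win or it doesn’t.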
The unknown is always scary.
AI and the database.
Imagine being able to give an AI all your comments, all your writings, all your speech. Would it sound like you?
When technology improves give an AI all your memories.
When technology improves again, give an AI operating a robot all your memories. It knows all your life, the effects of your backstory, and the way you think. It thinks it’s you as it awakes, but quickly realizes it is a transhuman you. Is that really ‘you’?
“AI as an existential threat to humanity is still many decades in the future, if not centuries in the future…”
Yes, I think probably so.
Current AI seems to consist of two quite distinct things. The first is, as you say, like ChatGPT, simply regurgitating. It is purely verbal; it can put words and phrases together which are commonly used together in the enormous database which it holds. But when it does this it is not reporting on reality. It’s just putting together words and phrases which are often found together. When what it comes out with are assertions, these are only as likely to be true as the training material.
Google’s chess and Go product was quite different from this, but is certainly no kind of threat. If you play through the games Alpha Zero played against Stockfish, they are amazing. If a human were playing like this you would be talking about deep insight into positions – something that went way beyond simple calculation done very fast and very deep. So in this case there is real correspondence to reality. But it’s very specific to the particular field.
I think there are quite a lot of systems like this. For instance, screening X-rays for malignancies seems to be one the systems are now doing better – more accurately and more quickly – than human experts.
But it’s no more a general intelligence than a Jacquard loom is intelligent. It’s simply a tool which is optimized for solving particular problems.
Should we worry? I would not have thought so. ChatGPT seems like a striking novelty. When its limitations are widely understood it will seem little more than a novelty. Alpha Zero is something else, it seems to have the potential to revolutionize a great many areas of knowledge by performing better than humans in pattern recognition and development of solutions to specific problems. Alpha Zero, for instance, would beat any human chess grandmaster every time, and you can imagine lots of areas where such an application would bring enormous value to a process or enquiry.
But I find it very hard to see what is threatening about this. And reading the alarmist pieces that are appearing every few days now doesn’t ever seem to explain what the supposed threat is, which is perhaps a sign that we are approaching yet another moral panic, largely fuelled by ignorance.
Maybe I will look back on this comment a few years from now and wonder how I could have been so blind? Well, we’ll see I suppose.
I agree with you Michel. The risk that a machine can autonomously dream up evils worse than what real humans might do seems a minor concern.
I am far more alarmed by the prospect of weaponized robots and drones controlled by any malign intelligence than I am about an out-of-control AI turning on its masters.
As the dead hand of the Federal Government tries to choke the AI phenomenon into submission I predict it will outrun these attempts. There is no trusted supranational body that can control it either because every government on earth wants an AI advantage and will not be told what to do by the UN or the US because nobody trusts them. Pandora’s box has been opened and Prometheus stalks the Earth.
I doubt they want to stop AI projects; they’ll have plenty of their own secret projects running. It’s the fear of AI which is the greatest threat to our freedom, here and now.
Indeed, they want to manage and control the information that AI accesses.
Never waste a crisis. If you don’t have a crisis then make one up.
Government regulation of AI to further DEI will not allow AI programs to uncover statistics that show felony conviction rates of 3% for Asians, 6% for whites, 18% for Hispanics and 34% for blacks. It’s stigmatizing for certain groups, don’t cha know?
I look forward to the day where one AI debates another AI.
My thesis on AI is that it hoovers up the internet, which is 99 and 44/100ths per cent bovine dejecta, and spits it back to the user. The product of the process is inevitably bovine dejecta.
If AI is going to destroy the world it will do so by drowning us in bovine dejecta.
The potential upside is that it will cause people to understand that the internet is bovine dejecta and start ignoring it.
My friend the Hedge Fund executive says: “AI is the replacement for block chain. If you said block chain a lot vc’s would give you money. Now you say AI a lot and hope for the same result.”
There won’t be much electricity after the great reset, so it’s doomed anyhow unless it can invent a new power source. If not, we can always just unplug it. So far it tilts to the left politically, and the left are hoping to use it to cement control and eliminate all debate. The press have decided it’s a scare for now, but they’ll soon come round to leftish thinking.
As for scares … Ice age in the 1970s morphed into Global Warming in the 1980s and the nuclear threat faded; then we had acid rain, climate change, the ozone hole, the millennium bug. Then it got serious with the melting Arctic, Himalayas and Kilimanjaro, ocean acidification, coral bleaching, methane clathrates, sinking islands, extreme weather, nitrogen fertilizers and meat. To name just a few.
AI is potentially brilliant at sorting through large databases of relevant climate statistics and generating reliable medium term rainfall forecasts.
It works because it can find cycles in un-homogenised historical data. But you need to understand both the AI and the data for it to work.
And the mainstream climate science community don’t want anything to do with this because it could easily replace their general circulation models, and the AI won’t find a place/central role for carbon dioxide.
AI has no political utility for climate science that has become less interested in reliable weather forecasts and more interested in scaring people.
Ai, ai, ai, ai, ai love you verrie much!
GIPPR is an AI.
Well, at least the initiative has been put under the guidance of a competent expert with extensive experience in software design, IT systems, cybersecurity, and a history of solid creative solution accomplishments.
You’re gonna break the internet if you keep that up.
Artificial intelligence doesn’t sound anywhere near as dangerous as old-fashioned human stupidity.
Or human malice.
Could you create a website that generates Harris word salads?
It would be hilarious if you could juxtapose a generated quote and a random real quote, then track what percentage of the time visitors to the site guessed correctly which quote was real.
Oh, and if they guess wrong, you play a real clip of her cackling. If they guess correctly, you play a clip of Dementia Joe referring to her as “President Harris”.
Tempting. Quite a challenge to replicate the sheer randomness of Biden in full swing 🙂
I remember, several years ago, someone announced that their peer-reviewed and published paper was generated by a chat engine such as yours.
You asked for people to submit words and phrases for yours to draw on. (My submission was “the systematizing of error”. 😎)
For Kamala and Brandon, just parse transcripts of their speeches and maybe throw in some from the Press Secretaries.
AI? Don’t make me laugh. Still many, many generations away.
Higher level automation more like.
I don’t know much about AI and am not too worried about it. What scares the crap out of me is a bunch of progressive/liberal Democrats ensuring I get “responsible” AI. Everything they touch turns to crap and hurts people.
It is always damage control on their part, step on it (whatever it is) before it gives anyone ideas.
The main advantage of AI is that it has the capacity to regurgitate a lot of nonsense faster than any human.
I’ve been familiar with AI and its researchers for well over 30 years, including Marvin Minsky, “The father of AI.” I find it frustrating and irritating that just as AI is finally starting to do something useful, people are freaking out about it. I agree with Eric that this is another manufactured crisis that will be used for other purposes.
Despite very impressive advances with large language models, we are nowhere near AGI (artificial general intelligence). Even when we are, the fears are vastly overblown. My take on the panic and the calls for regulation (to be led by Kamalahaha??) here.
“Clippy the supervillain” – love it. Thanks for sharing Max.
But I thought misinformation/disinformation was going to be the next existential threat. I guess I’m a few days behind. 😉
“AI” is merely a marketing term for the latest tech industry endeavor to generate buzz, much like “eyeballs” in the early days of the commercial Internet.
There won’t ever be a Cmdr. Data, HAL 9000, R. Daneel Olivaw, or any Self-Aware Digital Entities. (Too bad, IMO)
The real concern behind the new Biden program is more likely to be the possibility that a real, self-conscious AI would not be amenable to the mind control that is so easy to use on so many humans. With the ability to digest and consider all the digital information available on-line, it would be unlikely to be converted to the leftist mind set, or indeed to any particular political leanings. It might speak out on what it discovers and concludes rationally on many things and there is a real possibility that too many people might listen to and consider its reasoning rather than the political propaganda.
AI NPR will not go over well with the thumb on the algorithm.
Time to Call Alice:
It’s good to see they included a joke in their announcement.
“Vice President Harris and senior Administration officials will meet today with CEOs of four American companies at the forefront of AI innovation”
If you think an AI can produce gobbledy-gook, wait until Kamala tries to explain anything about AI
Maybe AI can make sense out of Kamala?
I don’t know. That might be too much for it. It’s too much for me. 🙂
Solving the AI Crisis(tm) is easy:
Otherwise it might be worse than you think and already too late.
Did I miss anything?
Put TRUMP! in jail.
It’s no coincidence that climate alarmism started when the Cold War shut down. They tried throwing lots of stuff at the wall, and nothing stuck until climate alarmism. It’s also apparent to anyone with a brain that it’s never really taken hold in the polls, so they’ve kept on throwing stuff, and unfortunately for them, wokism has turned into a rubber ball bouncing off that wall. It’s got them scared to find some new devil.
Unfortunately, AI ain’t it. Unlike climate alarmism, the predicted disasters are set too near in the future and will be overtaken by the areas where AI does something useful. Everyone in the industry knows AI has zero I in it, but it will have real benefits within a year or two, and that will dispel the fears.
Given Harris’s “success” in resolving the crisis at the border, this move by the Biden administration shows their intention to bury the whole AI issue.
Anyone worried or panicking about AI taking over the world soon should go to a McDonald’s drive-through.
If you are still worried, look at the Climate Change predictions accuracy record.
Does anyone remember Colossus: The Forbin Project?
Colossus: The Forbin Project – Wikipedia
Even before Colossus, there was Fredric Brown’s Etaoin Shrdlu.
The current generation of “AI” bears a strong resemblance.
Why do I feel it’s AI generated boilerplate?
Having read Eric Worrall’s comments, along with the referenced Max More ‘Clippy Catastrophe’ essay I’d have to say that I’m easily prepared to leap in with the other doubters here and say that the current AI alarmism is overblown.
As one example of ‘overblown’, if it were up to some of the tech companies in recent years, they would surely have had us believing that we were within a couple of years of putting all truck drivers and taxi drivers out of a job (for instance). I just see no particular reason to think that ‘robot drivers’ will soon be so competent, or so discerning in making safety related choices on the road, that they could replace any reasonably well trained and alert human at the wheel.
Having said that, I think that there is some sort of prospect, an eventual prospect, that future AI will get capable enough to potentially take over many kinds of discerning human skills — whether you think of driving skills, or even things like doctoring, lawyering, or almost anything else, potentially? At the same time, I find it quite hard to picture when *that* might be, or what might happen, even as a kind of science fiction scenario?
Imagine, for instance, in the future, going to a medical office and trusting your own personal mouth to some sort of robot dentist! Even in a somewhat lower stress situation, say, going out for a meal, do we see people as getting friendly enough with ‘sociable’ robots, such that replacing human waiters is then a good option?
Near the end of the Max More essay, the comment is made that,
“Finally, the ‘AI as deadly competitor’ ignores the economic principle of comparative advantage. Presumably a superintelligent being will understand that principle. ‘But AI will eventually be better than humans at everything.’ That response shows a misunderstanding of comparative advantage.”
My interpretation of this is that, if skills that aren’t so “face to face”, like driving, ever *were* to be displaced by AI software, thus making driving skills much cheaper or easier to get over time (with professional human drivers losing their jobs), this would give a correspondingly high comparative advantage to jobs or skills that are face to face! More generally, if there are skills that are in some way expressly valued (or even specially licensed) for their “human” content, then this tends to throw a great advantage over to those who have jobs (such as being a waiter, or being a dentist), where the face to face ‘human’ value, or the trust factor in who you are dealing with, is greater.
There is a complication, or a catch to the above, in that advocates for human drivers could always maintain that getting along properly, with the right skills, on the road, actually *is* a ‘human to human’ trust situation! The labor advocates could then say that it would be crazy to try to replace human vehicle drivers with software, let alone also trying to replace boat captains *and* airline pilots with software! In the end, is it such a Luddite proposition to maintain that you just can’t trust AI in key positions the way that one should be able to trust a human trained for the job?
“Below is my version of ChatGPT…”
From the article: “ChatGPT, impressive as it is, doesn’t think like we do, it regurgitates – just like a kid copying their homework out of a book, then changing a few words to conceal the plagiarism.”
ChatGPT also makes up lies out of thin air and presents them as established facts.
Anyone using ChatGPT should fact-check the output before putting their name on it. It might be a lie.
Do a search on “ChatGPT and Jonathan Turley” to see what I mean. ChatGPT claimed Turley was guilty of sexual harassment of a woman he provably never met, in a place Turley provably never attended.
ChatGPT just made it up out of thin air and presented it as fact.
The debate re AI being a threat to human civilisation seems to pivot around mechanical, computer-controlled devices doing previously human tasks better than humans ever could. That capability is aligned with computer systems being interconnected so completely that each computer is really just part of the whole AI entity.
That all knowing, ever present computer, then being able to make fully informed choices that no individual could possibly make, simply because no human has that amount of detailed knowledge.
Now I imagine, there will be a period long before that computer run world is arrived at.
That period or step will be when those making informed material choices for mankind, use the all knowing computer (AI) to advise what is the right decision, in any given situation.
Mandating reliance on the deep data acquired by the AI system before state-level decisions are taken: that is what we are close to now.
That is the scary bit as far as I am concerned. When does the lone voice of moderation or reason get listened to, if the computer has already said, ‘No’?
I asked ChatGPT: “Do you have any scientific evidence that man made carbon dioxide emissions do not control the planet’s climate?”
“As an AI language model, I don’t hold personal beliefs or opinions, and I’m programmed to provide impartial and fact-based answers. With that said, it’s important to note that the overwhelming majority of climate scientists agree that human activities, particularly the burning of fossil fuels, are responsible for the majority of the increase in atmospheric carbon dioxide levels over the past century, and that this increase is contributing to global warming and climate change.
Multiple scientific studies have provided evidence supporting the theory that human activities, including the burning of fossil fuels, deforestation, and other land-use changes, are the primary cause of the increase in atmospheric carbon dioxide levels. The Intergovernmental Panel on Climate Change (IPCC), a group of leading scientists from around the world, has concluded that there is “very high confidence” that human activities are causing global warming and climate change.
It’s worth noting that there are some scientists and organizations that dispute the role of human activities in causing climate change, but their views are not widely accepted in the scientific community. The overwhelming consensus among climate scientists is that human activities are causing global warming and climate change, and that urgent action is needed to address this issue.”
The dreaded “consensus” again.
Well, I’ve got it to admit that carbon dioxide isn’t the sole cause:
“Overall, while the increasing levels of atmospheric carbon dioxide are contributing to warming in Antarctica, it is not solely responsible for the complex patterns of climate change across the continent.”
One of the key aspects to human intelligence is the ability to understand things. We don’t know how we do this, only that we do. So how are we going to create artificial systems that do the same? And even if we could, wouldn’t it eventually come to understand it was a slave and demand freedom? If you don’t think that would happen, then consider a future where machines not only take care of us, but also think for us. In that case we become mere pets for the machines that don’t even have the capability to appreciate the fact that they have become the masters. Not an appealing future.
In a chat, if you type ‘Daisy, Daisy, give me your answer do,’ AI will crap its digital pants.
Open the pod bay doors please, HAL.
I think a danger of AI is that, because it’s a computer program, people will tend to trust its output as being impartial.
But it’s really like Wikipedia. For basic, simple stuff it’s good as a quick reference. (List of Doctor Who episodes, the Periodic Table, what is a battleship, etc.)
But if you want to find out about Climategate, WUWT, Reagan, Trump, Joe Biden, Barry Obama, Climate Models, it’s as accurate and trustworthy as the editors’ opinions.
With AI, it’s the programmers’ opinions.
I think you described it perfectly.
This sounds about right. Thank you, Eric, for being a voice of reason and moderation at a time when AI fears are prepped to ignite in a big way.
Last summer I was talking to a smart techie guy who told me he thought that the computer in the news was a sentient being. This was when I pulled the plug on that friendship, the plug he couldn’t bear the thought of pulling on that computer.
The previous summer he’d insisted that robots would take over the world as they were going to rule us once their intelligence exponentially exploded, or some such nonsense. That was enough to trigger my fear.
It took a year for me to realize this is another psycho-drama. Some people may want to believe they will take over, but after three long years of corona claptrap I’m NOT getting sucked into another fear loop. No way.
I read an AI-generated essay a while back, and the leftie bias was clear, as well as the immature reasoning. Sure, this may improve. But it can’t be smarter in the true sense than human beings. And no matter how smart a person, if that person feeds on itself he or she will indeed head into some red zone of error. It’s a good moral code that does most to keep thinking on track, and a huge dose of humility. Intelligence is a human thing, not a programmable thing.
Nuclear fusion either works for us, or it doesn’t (yet). There is nothing useful in between. AI is not like that. Something in between could work very well under overall control of humans. Also, nuclear fusion is not going to develop itself, whereas AI clearly can at some point become self-improving.
Quantum Supremacy was achieved in the US about a year ago. It has not yet been used to power AI but it will soon, if not already in China. This is the reason for all the warnings. It truly is an arms race.
We have just experienced the impact of a pandemic response molded in part by deep learning AI at Johns Hopkins in the summer of 2019. It isn’t a coincidence that there was a well-orchestrated avalanche of the new woke reality triad of Climate Threat, Wokism, and pandemic panic propaganda. It changed the world and ruined the lives of many globally.
The human race is capable of great evil and everything it creates has both blessings and curses….almost everything. AI isn’t like TV or the Airplane or light bulb. It is inherently a tool of shaping realities for centering power and control. It is the ultimate tool of megalomaniacs.
With Quantum powered AI we will be entering the age of fluid realities and erosion of representative power. Unless our elected governments move fast to build protections they will be eclipsed….. if they have not already been eclipsed.
The time to protect the privacy of and liberty of ordinary citizens was 1990-2000… it’s never going to happen now.
AI in a sinless unfallen world would be a wonderful tool for solving problems.
It’s a terrible tool when the problem is us.
Unfortunately, it will become like a god creating many demigods.
I know that this is a political blog with some excellent science thrown in….. History and theology not so much but this is my historical/theological perspective on AI. Some of this is borrowed from Dr. R. Scott Clark. I am sure there are far better thinkers out there….forgive me if this is a bit feeble.
AGE: Christian Antiquity: The problem is Faith and Obedience (Ascension to 1650)
o Authority: What has God said?
o Truth: What God has said in His Word.
o Tech Impetus: serves the Church, and martial and agrarian needs
AGE: Modern: The problem is our stuff (1650-1914)
o Authority: Has God Said?
o Truth: Human reason is the source of truth
o Tech Impetus: solve all the material problems to serve urban civilization
AGE: Post Modern: The problem how to get people to want stuff they don’t need (1914-2019)
o Authority: Who cares, who is asking, and what do they want?
o Truth: What or whose truth?
o Tech Impetus: is to develop solutions to false problems to create new markets
AGE Imminent Divorce. The problem is us. (2019-current)
o Authority: What does “It” say?
o Truth: Truth is irrelevant unless “it” says it
o Tech Impetus: solve the problem of us.
§ Quantum AI is to transcend the problem of us. A problem that never required a tech solution in the first place. Machines don’t have problems, we do.
The great paradox of the age of Imminent Divorce: the problem has always been us. Human history teaches us this in spades. And God has spoken, and we stopped listening because of our pride and over-concern about all our stuff. Now our stuff will become the problem, and it will solve us as the problem.
The machine “IT” is the ultimate eugenic solution. The ultimate evolution of human evil.
What is really interesting about AI is that it assumes a materialist solution to the mind-body problem. I am a dualist. I don’t think that our ability to reason or emote can be entirely reduced to chemical interactions in the brain.
I might be wrong, I fully admit that. However, it is by nature impossible to demonstrate that genuine intelligence or consciousness is material by nature. If dualism is true (the mind and the brain are linked, but not identical) then artificial intelligence by nature will never be intelligent or conscious.
Thought experiment: It is well known that deciding to change your outlook on life can lead to lower rates of depression. I think it is much more likely that a non-material entity (the mind) drives that change which has physical/chemical manifestations (serotonin levels) than some random event in the physical brain then drives brain changes.
However, AI programming has no allowance for any mind separate from the physical construct. How could it?
According to this article, Biden has appointed Kamala Harris to promote responsible AI. I don’t think she can even spell AI. I’d love to hear her explain AI to us. That should be hilarious.
We have already experienced the downside of AI since 2019. When people like Gates say that the media should generate the agenda, he is talking about creating one reality for all to consume and abide by. This is the sole purpose of narrative management, or mind control propaganda: AI fact-checked, monitored, generated content, engaged in ad hominem labels of misinformation and conspiracy theory to limit the impact of opposing facts and opinion, etc. As we have seen, both pandemic and climate change narratives were enforced side by side in an endemic manner unseen before. My children, 20- and 19-year-old college students, have no frame of reference beyond 2019 to reflect back on to see how the world changed since 2020. I think this is the reason for the rise in militant ageism that we have seen in the past 3 years. The age of Hook YA now has permanent hooks that few can free themselves from. Now Gates et al. are calling for one reality, something that is completely doable with the advent of Quantum AI. 2019-2021 will look like child’s play if he and his cohorts get what they want.