The Terminator. Fair Use, Low Resolution Image to Identify the Subject.

Goodbye Climate Alarmism: The Age of AI Alarmism Has Begun

Essay by Eric Worrall

Biden has just appointed Harris to promote responsible AI – in my opinion the opening salvo in an attempt to install fear of AI as a replacement for the failed climate alarmist movement.

FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety


Today, the Biden-Harris Administration is announcing new actions that will further promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety. These steps build on the Administration’s strong record of leadership to ensure technology improves the lives of the American people, and break new ground in the federal government’s ongoing effort to advance a cohesive and comprehensive approach to AI-related risks and opportunities.

AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks. President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.

Vice President Harris and senior Administration officials will meet today with CEOs of four American companies at the forefront of AI innovation—Alphabet, Anthropic, Microsoft, and OpenAI—to underscore this responsibility and emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society. The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues.

This effort builds on the considerable steps the Administration has taken to date to promote responsible innovation. These include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year.

The Administration has also taken important actions to protect Americans in the AI age. In February, President Biden signed an Executive Order that directs federal agencies to root out bias in their design and use of new technologies, including AI, and to protect the public from algorithmic discrimination. Last week, the Federal Trade Commission, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission, and Department of Justice’s Civil Rights Division issued a joint statement underscoring their collective commitment to leverage their existing legal authorities to protect the American people from AI-related harms.

The Administration is also actively working to address the national security concerns raised by AI, especially in critical areas like cybersecurity, biosecurity, and safety. This includes enlisting the support of government cybersecurity experts from across the national security community to ensure leading AI companies have access to best practices, including protection of AI models and networks.

Today’s announcements include:

  • New investments to power responsible American AI research and development (R&D). The National Science Foundation is announcing $140 million in funding to launch seven new National AI Research Institutes. This investment will bring the total number of Institutes to 25 across the country, and extend the network of organizations involved into nearly every state. These Institutes catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good. In addition to promoting responsible innovation, these Institutes bolster America’s AI R&D infrastructure and support the development of a diverse AI workforce. The new Institutes announced today will advance AI R&D to drive breakthroughs in critical areas, including climate, agriculture, energy, public health, education, and cybersecurity.
     
  • Public assessments of existing generative AI systems. The Administration is announcing an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles—on an evaluation platform developed by Scale AI—at the AI Village at DEFCON 31. This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align with the principles and practices outlined in the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models. Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation.
  • Policies to ensure the U.S. government is leading by example on mitigating AI risks and harnessing AI opportunities. The Office of Management and Budget (OMB) is announcing that it will be releasing draft policy guidance on the use of AI systems by the U.S. government for public comment. This guidance will establish specific policies for federal departments and agencies to follow in order to ensure their development, procurement, and use of AI systems centers on safeguarding the American people’s rights and safety. It will also empower agencies to responsibly leverage AI to advance their missions and strengthen their ability to equitably serve Americans—and serve as a model for state and local governments, businesses and others to follow in their own procurement and use of AI. OMB will release this draft guidance for public comment this summer, so that it will benefit from input from advocates, civil society, industry, and other stakeholders before it is finalized.
Source: https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/

Conservative icon Jordan Peterson has also said some scary things about the rising threat of AI.

My prediction: AI policy will become a significant factor in the 2024 election. My crystal ball tells me the Democrats will attempt to use fear of AI to undermine support for Conservatives, much as I believe they used fear of Covid in 2020 – using that fear to attract votes for their coming plan for an AI development lockdown.

Why is fear of AI so politically useful? Because the fear of AI is bipartisan.

Climate alarmism these days mostly only works on left wing voters, so it’s increasingly useless as a political tool – it only works on people who already intend to vote for left wing candidates. But with right wing icons like Jordan Peterson also talking up the threat of AI, fear of AI has the potential to draw support from across the political spectrum.

Is AI a genuine threat? As a software developer who has built bespoke AIs for clients, my answer to that is “not yet”, and maybe “not ever”.

Like the early years of climate alarmism, the biggest source of fear about AI is uncertainty. Lurking somewhere in the future is the threat of the technological singularity, that moment in time when someone, somewhere builds an AI which starts improving its own capabilities at a geometric rate, rapidly approaching infinite intelligence.

Sounds terrifying – what if the liberals at Google get there first, and develop irresistible political campaigns to defeat their opponents? Or what if Communist China gets there first, and uses their AI capabilities to expand their control over the entire world?

But building an AI that capable is a lot like building a nuclear fusion reactor – always 10-20 years in the future.

My prediction is that attempts to build superhuman AIs will suffer a problem analogous to nuclear fusion flameout, in which researchers keep losing control of the increasingly unstable plasma and are forced to quench the reaction.

You only have to look at human intelligence and human mental illness. Our intelligence is the product of a billion years of evolution, yet despite all that opportunity for natural selection to fix the bugs, humans still suffer from a lot of mental illness. The slightest imbalance, aberration or mistake in our psychological balance rapidly leads to dysfunction.

My prediction: AI scientists will go through a horrible and very prolonged period of flicking the switch, watching their indicators rapidly climb into the red zone, then shutting down almost immediately to prevent more damage.

Building a general AI capable of matching human capability, let alone surpassing it, is an attempt to build the most complicated machine ever constructed. When you think about it, it’s obvious that researchers are going to face a lot of problems – many of them intractable.

Huge, unsolved problems with understanding how intelligence works lurk just beyond the firelight of our current knowledge – problems we have only begun to appreciate.

ChatGPT, impressive as it is, doesn’t think like we do; it regurgitates – just like a kid copying their homework out of a book, then changing a few words to conceal the plagiarism.

AI is a remarkable tool; it will produce many marvels and wonders which will enrich our lives. But AI as an existential threat to humanity is still many decades, if not centuries, in the future.

My message to Jordan Peterson, and every other libertarian who is currently discussing fear of AI: Be careful you don’t become a tool of the people you oppose. Because fear is a path to the dark side, to tyranny and servitude. The enemies of freedom will use your words, and use the growing public fear of AI, just as they have used every other public fear, to attack and undermine our freedom.


The following is the trailer for Transcendence, an under-appreciated science fiction movie which explores fear of AI driving good people to lose their moral compass and do horrible things.


Below is my version of ChatGPT, which in the tradition of AI research I shamelessly plagiarised off someone else, then adapted to my needs. Like ChatGPT, the AI below uses a language model to generate text, but instead of answering questions, my chat engine generates climate psychology papers.

ChatGPT might have a more sophisticated language model, but I think my chat engine is funnier.
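For readers curious how such a toy engine works, here is a minimal sketch of the template-filling approach these generators typically use. It is illustrative only, not the actual code of the engine above: the templates, phrase lists, and function names are invented for the example.

```python
import random

# Hypothetical sketch of a template-based text generator, in the spirit
# of tools like SCIgen: pick a sentence template, then fill each named
# slot with a randomly chosen phrase.

TEMPLATES = [
    "{adj} {noun} as a driver of {topic}: a {method} study",
    "Towards a {adj} understanding of {topic} through {noun}",
]
PHRASES = {
    "adj": ["motivated", "existential", "narrative"],
    "noun": ["denial", "anxiety", "framing"],
    "topic": ["climate belief", "risk perception"],
    "method": ["qualitative", "longitudinal"],
}

def generate_title(rng=random):
    """Choose a template and substitute a random phrase into every slot."""
    template = rng.choice(TEMPLATES)
    return template.format(**{k: rng.choice(v) for k, v in PHRASES.items()})

print(generate_title())
```

The "language model" here is nothing more than a grammar plus a word list, which is why the output reads as plausible but means nothing.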


Update (EW): Max More wrote a great essay highlighting the absurdity of “clippy the supervillain”.



93 Comments
mleskovarsocalrrcom
May 7, 2023 10:13 am

AI is over-hyped and can be easily manipulated by withholding selected information from a database. Its claim to fame is that it can sort through vast amounts of previously entered information until it finds a most likely ‘solution’ that fits, but it can’t produce its own data. If AI were real it wouldn’t need a database. Call me skeptical.

Reply to  mleskovarsocalrrcom
May 7, 2023 2:06 pm

As long as people see the robot dogs running around, or the human-sized robots jumping or doing back flips, knowing they could be programmed or decide on their own to kill them, you will not get rid of fear.

Asimov’s rules don’t apply to the unscrupulous, and many companies and governments are proving themselves unscrupulous. So until you can prove robots can’t be programmed or decide on their own to harm people, AI will never be looked on with favor.

Editor
Reply to  mkelly
May 7, 2023 3:54 pm

Organised crime, including the far left, are using the internet very effectively. “AI” capability opens up new avenues for them. The people as a whole are very good at adapting to and combatting new threats, but it takes time and effort. In the meantime it’s best to be alert, not alarmed.

Rod Evans
Reply to  Mike Jonas
May 8, 2023 1:04 am

Mike, is ‘organised crime’ a euphemism for Government?
I am also aware of how readily this latest ‘government’ initiative around AI promotes the notion of it being a Biden-Harris administration.
Are the woke wonks in the works of the Democrat Party finally acknowledging they have to bring Harris into political focus, now that it is beyond obvious Joe is not all there?
As far as AI goes, I have only a layman’s understanding of the subject, but what I know suggests we need to be very mindful of releasing into the existing interconnected systems, such as the Net (which controls everything beyond those manually toiling), something that has the capacity to self-expand with a purpose.

Reply to  Rod Evans
May 8, 2023 4:42 am

“Mike, is ‘organised crime’ a euphemism for Government?”

It is with regard to the current administration in the USA. Joe Biden and the radical Democrats *are* organized crime.

Reply to  mleskovarsocalrrcom
May 7, 2023 2:15 pm

Sounds a bit like the human brain. A whole lot of stored information based on education and lived experience logged away in a retrievable format to be called on when needed.

The difference, so far, as I see it is that all humans have different education and lived experience and we are capable of negotiating and compromise.

We also have things like emotions and gut instinct, irrational thought etc.

And how can a computer be programmed to mimic conditions such as autism, bearing in mind that degrees of autism (and numerous other conditions) reveal extraordinary talents in some people?

Can AI cut an ear off for the sake of its art?

Reply to  mleskovarsocalrrcom
May 7, 2023 2:40 pm

Artificial neural networks learn from real world experience, such as enabling a robot to move about and manipulate objects in a reasonably competent manner. They can also learn from digital information. The active circuitry includes the memory; it does not depend on access to a database nor on preprogrammed instructions beyond those provided to start it functioning. It has often been written that the developed complexity, as it learns, exceeds human attempts to untangle or understand it. While I haven’t read much detail about the AI models being promoted, I have not seen mention of neural networks in the press about AI, but I would not be surprised to learn neural networks are involved.
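A minimal sketch of the learning-from-examples idea: a single perceptron trained on the AND function. This is illustrative only, and vastly simpler than the networks described above; the learning rate and epoch count are arbitrary choices for the example.

```python
# Training data for the AND function: inputs and target outputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, adjusted as the perceptron learns
b = 0.0         # bias term
lr = 0.1        # learning rate

def predict(x):
    """Fire (output 1) if the weighted sum of inputs exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Repeated passes over the examples: nudge the weights whenever the
# prediction is wrong, in the direction that reduces the error.
for _ in range(50):
    for x, target in data:
        err = target - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

assert all(predict(x) == t for x, t in data)
```

Because AND is linearly separable, the training loop is guaranteed to settle on correct weights; the "knowledge" ends up in the weights, not in any stored database of answers.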

J Boles
May 7, 2023 10:36 am

I say that A.I. is WAAAY overblown; all computers are “artificial intelligence” and can only do what they are programmed to do. They can NEVER think for themselves.

May 7, 2023 10:38 am

People condemned TV as the end of social society, freedom of thought and free speech.

Ridiculous.

Oh wait!

Tom Halla
May 7, 2023 11:02 am

Your paper reads as if it was produced by a stoned grad student who tried to write a paper by taking various stimulants.

Reply to  Eric Worrall
May 8, 2023 4:06 am

you should send one for peer review to a top rated climate journal

maybe you could “program” it to mimic the likes of Mickey Mann – use their faulty logic and exaggerate it 🙂

Jeff Alberts
Reply to  Tom Halla
May 7, 2023 4:44 pm

You mean, written by Mosher?

May 7, 2023 11:04 am

I am skeptical about AI.

Especially GAI (or whatever it’s called)

Clever use of databases and calculations to optimise designs, treatments etc – yup – but this is “just” an advance in general computing power – not intelligence.

As a human, I have an element of randomness driven by how I am feeling, hormone levels, how comfortable I am etc and so on. This randomness surfaces in myriad ways daily. Breakfast choice, shall I cut the grass, weed the flowers, build my model railway or work on the ‘67 classic? Dunno – ask me tomorrow.

I generally walk past the “Big Issue” seller – I donate to multiple charities anyway – but sometimes – for reasons I don’t understand – I will stop and talk and donate to the homeless guy.

This randomness and complexity is a feature not a bug. I am unclear how AI can possibly replicate this…other than perhaps by buggy software which could lead to all sorts of unintended consequences…..

May 7, 2023 11:13 am

Pure poetry towards the end there…
I think the AI debate is distracting from an agenda.
…When the real truth is that every machine has an owner, and it is, if not malfunctioning, giving perfect expression of the owner’s orders.
Discussing AI as an “issue” should focus on using existing legislation to prosecute every single cent or drop of blood owed by the OWNERS of AI.
Every single new law promulgated will serve to put distance between the risks of AI and the liability of the owners. Test me on that once they start…

Max More
Reply to  cilo
May 7, 2023 3:38 pm

Existing liabilities laws are all we need to hold AI creators responsible (perhaps with a little clarification). We do not need new regulations, new regulatory bodies, nor Kamala Harris to mastermind them.

Rod Evans
Reply to  Max More
May 8, 2023 1:13 am

 “nor Kamala Harris to mastermind them”. Ha, Ha, that genuinely made me laugh, many thanks..

May 7, 2023 11:33 am

You don’t have to look much further than today’s cars that are loaded up with features you don’t like and can’t opt out of:

Man drops wife off at airport. Her key fob is in her purse, he left his fob back in the house. He’s locked out of the car. What does he do?

You’re on cruise control 70mph, and the car in front of you has slowed down because it’s foggy. The cruise control doesn’t tailgate and automatically slows down. The car in front of you turns off, and cruise control jams on the gas and speeds back up to 70 right into the fog. You jam on the brakes and scare your passengers.

You put the car in park get out for something and the car “Honk honks” at you.

Park at your destination, turn the car off, get out and the car “Beeps” at you to check the rear seat.

Warning lights flash at you when there isn’t a car in the next lane.

The wheel shaker responds to old stripes on the road and ignores the new ones.

On a long smooth straight away the car pops up a little warning to put your hands back on the wheel.

Make an emergency stop, and loud claxons and bright flashing red lights erupt from the dash board as if you didn’t have enough to contend with.

JamesB_684
May 7, 2023 11:34 am

VP Harris is perfect for the role of fighting intelligence. She’s a tossed word salad jargonator par excellence.

Reply to  JamesB_684
May 8, 2023 4:49 am

She is hopeless.

Rud Istvan
May 7, 2023 11:36 am

Biden putting Harris in as AI ‘czar’ is proof positive of his cognitive impairment. Putting Natural Ignorance in charge of Artificial Intelligence is not a good look.

On a more technical note, I agree with Eric on the dubious future of AI, and hence no likely reason to fear it.
Back in the late 1990’s when I was Motorola’s Chief Strategy Officer, the CTO and I worked up a presentation on the future that we used both internally and externally. We started with the future of Moore’s law, predicting it would run until the mid to late 2020’s before faltering when transistors got so small that quantum uncertainty started making them unreliable without triple redundancy and 2/3 ‘voting’. So far, true. We then predicted smart phones about 2010-2015. Check. (We thought it would be MOT, not Apple—boy were we wrong about that detail.) We predicted general speaker independent ambient voice recognition about 2015 (Siri, Alexa,…) Check. We predicted that except in specialized constrained circumstances (chess, go, medical diagnostics —things like mammogram reading) useful general AI was unlikely ever. Our reasoning was twofold. First, it was too complicated. Second, AI cannot be more generally intelligent than the humans who design and code general AI systems.

ChatGPT isn’t general AI in the sense we meant. It machine-learns a bunch of existing stuff, then has language algorithms that stitch that trained memory together in ‘novel’ human-comprehensible ways using rules of grammar and writing taught through high school. It cannot prove new math theorems or solve novel problems on which there is no literature to train.

As a concrete example, I hold two very fundamental patents on an energy storage material that is 40% more energy dense, 2x more power dense, and therefore producing electricity storage devices (supercaps) 30% less costly than what was then on the market. I got there by first reading all the technical literature on this class of energy storage devices, finally realizing it was simply inconsistent— so somehow at least partly wrong. Then thinking hard about the energy storage mechanism physics (Helmholtz double layer—same as what produces lightning) realized what the mistake was in all 40 years of scientific literature. It made a surface area assumption about the usual device activated carbon that could NOT be true once an electrical charge was applied. Then developed a mathematical equation (intrinsic capacitance, from first principles using the then alternate—now official—definition of the coulomb) that successfully and precisely predicted all the dozens of to then seemingly contradictory experimental results using different electrolytes on solid surface smooth electrodes (gold, platinum). Then used that equation plus an obscure random packing theorem from mathematical topology to conceive of, experimentally have produced, have validated (at NRL in part using a $multimillion Navy research grant) and finally patent, my superior energy material and its method of production. A modicum of intelligence did that—AI never could IMHO.

Reply to  Rud Istvan
May 7, 2023 11:42 am

Fascinating bit of history there – I love the discussions on WUWT , most of which I barely comprehend. As an ex-instrument engineer, this one I just about understood – thankyou.

Reply to  Rud Istvan
May 7, 2023 12:19 pm

I could imagine AI, properly tasked, could make some of the tasks you describe a lot quicker. But nothing threatening there, just another tool, an improved microscope or a faster processor, you still need some directing intelligence to pick the tasks and define the objectives.

Rud Istvan
Reply to  michel
May 7, 2023 1:02 pm

Yes. But I don’t see how any AI could have accomplished the first step (inconsistent literature some of which must therefore be incorrect), the second step (find the error), or the third step (intrinsic capacitance equation resolving a class (solid smooth electrodes of known surface area) of experimental electrolyte ‘inconsistencies’. Those tasks require generalized logic, not memory or any programmable algorithm.

Rich Davis
Reply to  Rud Istvan
May 7, 2023 2:57 pm

Biden putting Harris in as AI ‘czar’ is proof positive of his cognitive impairment. Putting Natural Ignorance in charge of Artificial Intelligence is not a good look.

Yes, but if I may disagree slightly, it should be Natural STUPIDITY rather than Natural Ignorance.

We all start out ignorant. You can’t fix stupid.

The time has come to do what we need to do and that day is every day in the totality of the full spectrum of days and the diversity of different doings. Ok, I made that up but not entirely.

Jeff Alberts
Reply to  Rud Istvan
May 7, 2023 4:47 pm

My only real “fear” of AI is that it will be able to pump out authentic-sounding paragraphs which will only serve to persuade the gullible even further. And it can pump these out at an alarming rate, kinda like a hockey stick.

Reply to  Jeff Alberts
May 7, 2023 11:57 pm

Most people aren’t curious, lack observation skills, and are too proud to accept notification of their failures in these areas.

May 7, 2023 11:45 am

It’s impossible not to see future connection between AI and governments’ obsession with classified information. There might be a time when all classified documents will be accessible only to AI, which will then decide who is authorized to see them, if anyone. Some sources of information will need to be sequestered in order to eliminate things like the Assange affair. It appears that the future may have been designed by Franz Kafka.

May 7, 2023 11:47 am

Rud Istvan …. re your “putting Harris in as AI ‘czar’ is proof positive of his cognitive impairment. Putting Natural Ignorance in charge of Artificial Intelligence is not a good look.” 1000%, cannot agree more. And it shows his ignorance and disregard on leading on important issues of national security.

Tom.1
May 7, 2023 11:47 am

Has anyone asked Chat GPT what percent of scientists agree that humans are causing global warming and climate change?

Reply to  Tom.1
May 7, 2023 12:26 pm

It’s the contrast between ChatGPT and Alpha Zero. ChatGPT will just give you in reply a sentence which combines the phrases in its database of prose in the most likely way for them to occur. It doesn’t hold a database of scientists and their views that it can search to give you a number.

Alpha Zero, in chess, doesn’t know or care what anyone has thought or said about the position it’s appraising. It does its appraisal and makes a move, which you can call a prediction. And it corresponds to reality, as measured by the fact that it wins.

Caveat, I am no kind of expert on AI. The bit about ChatGPT is my understanding of how Large Language Models work, could have misunderstood… But I don’t think they will beat Alpha Zero over the board.
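The “most likely next phrase” idea can be sketched as a toy bigram model. This is illustrative only: real large language models use neural networks over subword tokens, not raw word counts, and the corpus here is a made-up example.

```python
import random
from collections import defaultdict

# A tiny corpus of stock phrases, standing in for a huge training set.
corpus = ("the climate is changing the climate is warming "
          "the science is settled the debate is over").split()

# Count which word follows which, pair by pair.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def continue_text(word, length=5, rng=random):
    """Extend a starting word by repeatedly sampling an observed successor."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no observed continuation: stop
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(continue_text("the"))
```

The output is always locally plausible (every word pair occurred somewhere in the corpus) yet the model has no idea whether any of it is true, which is the point being made above.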

Old.George
May 7, 2023 12:05 pm

The unknown is always scary.
AI and the database.
Imagine being able to give an AI all your comments, all your writings, all your speech. Would it sound like you?
When technology improves give an AI all your memories.
When technology improves again, give an AI operating a robot all your memories. It knows all your life, and the effects of your backstory, and the way you think. It thinks it’s you as it awakes, but quickly realizes it is a transhuman you. Is that really ‘you’?

May 7, 2023 12:14 pm

“AI as an existential threat to humanity is still many decades in the future, if not centuries in the future…”

Yes, I think probably so.

Current AI seems to consist of two quite distinct things. The first is, as you say, like ChatGPT: simply regurgitating. It is purely verbal; it can put words and phrases together which are commonly used together in the enormous database it holds. But when it does this it is not reporting on reality. It’s just putting together words and phrases which are often found together. When what it comes out with are assertions, these are only as likely to be true as the training material.

Google’s chess and Go product was quite different from this, but is certainly no kind of threat. If you play through the games Alpha Zero played against Stockfish, they are amazing. If a human were playing like this you would be talking about deep insight into positions. Something that went way beyond simple calculation done very fast and very deep. So in this case there is real correspondence to reality. But its very specific to the particular field.

I think there are quite a lot of systems like this. For instance, screening X-rays for malignancies seems to be one that the systems are now doing better (more accurately and more quickly) than human experts.

But it’s no more general intelligence than a Jacquard loom is intelligent. It’s simply a tool which is optimized for solving particular problems.

Should we worry? I would not have thought so. ChatGPT seems like a striking novelty. When its limitations are widely understood it will seem little more than a novelty. Alpha Zero is something else, it seems to have the potential to revolutionize a great many areas of knowledge by performing better than humans in pattern recognition and development of solutions to specific problems. Alpha Zero, for instance, would beat any human chess grandmaster every time, and you can imagine lots of areas where such an application would bring enormous value to a process or enquiry.

But I find it very hard to see what is threatening about this. And reading the alarmist pieces that are appearing every few days now doesn’t ever seem to explain what the supposed threat is, which is perhaps a sign that we are approaching yet another moral panic, largely fuelled by ignorance.

Maybe I will look back on this comment a few years from now and wonder how I could have been so blind? Well, we’ll see I suppose.

Rich Davis
Reply to  michel
May 7, 2023 2:37 pm

I agree with you Michel. The risk that a machine can autonomously dream up evils worse than what real humans might do seems a minor concern.

I am far more alarmed by the prospect of weaponized robots and drones controlled by any malign intelligence than I am about an out-of-control AI turning on its masters.

Keitho
Editor
May 7, 2023 12:38 pm

As the dead hand of the Federal Government tries to choke the AI phenomenon into submission I predict it will outrun these attempts. There is no trusted supranational body that can control it either because every government on earth wants an AI advantage and will not be told what to do by the UN or the US because nobody trusts them. Pandora’s box has been opened and Prometheus stalks the Earth.

Reply to  Eric Worrall
May 7, 2023 5:59 pm

Indeed, they want to manage and control the information that AI accesses.

Jeff
May 7, 2023 12:39 pm

Never waste a crisis. If you don’t have a crisis then make one up.

Dave Fair
May 7, 2023 12:41 pm

Government regulation of AI to further DEI will not allow AI programs to uncover statistics that show felony conviction rates of 3% for Asians, 6% for whites, 18% for Hispanics and 34% for blacks. It’s stigmatizing for certain groups, don’t cha know?

ScienceABC123
May 7, 2023 12:47 pm

I look forward to the day where one AI debates another AI.

Walter Sobchak
May 7, 2023 12:50 pm

My thesis on AI is that it hoovers up the internet, which is 99 and 44 100ths per cent bovine dejecta, and spits it back to the user. The product of the process is inevitably bovine dejecta.

If AI is going to destroy the world it will do so by drowning us in bovine dejecta.

The potential upside is that it will cause people to understand that the internet is bovine dejecta and start ignoring it.

My friend the Hedge Fund executive says: “AI is the replacement for block chain. If you said block chain a lot vc’s would give you money. Now you say AI a lot and hope for the same result.”

Rud Istvan
Reply to  Walter Sobchak
May 7, 2023 1:11 pm

Exactly.

Tim Spence
May 7, 2023 12:52 pm

There won’t be much electricity after the great reset, so it’s doomed anyhow unless it can invent a new power source. If not, we can always just unplug it. So far it tilts to the left politically, and the left are hoping to use it to cement control and eliminate all debate. The press have decided it’s a scare for now, but they’ll soon come round to leftish thinking.

As for scares … the ice age of the 1970s morphed into Global Warming in the 1980s and the nuclear threat faded, then we had acid rain, climate change, the ozone hole, the millennium bug. Then it got serious with the melting Arctic, the Himalayas and Kilimanjaro, ocean acidification, coral bleaching, methane clathrates, sinking islands, extreme weather, nitrogen fertilizers and meat. To name just a few.

May 7, 2023 1:00 pm

AI is potentially brilliant at sorting through large databases of relevant climate statistics and generating reliable medium term rainfall forecasts.

It works because it can find cycles in un-homogenised historical data. But you need to understand both the AI and the data for it to work.

And the mainstream climate science community don’t want anything to do with this because it could easily replace their general circulation models, and the AI won’t find a place/central role for carbon dioxide.

AI has no political utility for climate science that has become less interested in reliable weather forecasts and more interested in scaring people.

Ed Zuiderwijk
May 7, 2023 1:22 pm

Ai, ai, ai, ai, ai love you verrie much!

Margarita Pracatan

May 7, 2023 2:01 pm

GIPPR is an AI.

May 7, 2023 2:19 pm

Well, at least the initiative has been put under the guidance of a competent expert with extensive experience in software design, IT systems, cybersecurity, and a history of solid creative solution accomplishments.

Jeff Alberts
Reply to  AndyHce
May 7, 2023 4:55 pm

You’re gonna break the internet if you keep that up.

Reply to  AndyHce
May 8, 2023 1:28 am

LOL !

May 7, 2023 2:34 pm

Artificial intelligence doesn’t sound anywhere near as dangerous as old-fashioned human stupidity.

Jeff Alberts
Reply to  tom_gelsthorpe
May 7, 2023 4:55 pm

Or human malice.
