British Science Association: Artificial Intelligence is a Greater Threat than Climate Change

Search Term Interest
Google search interest in Climate Change, Deep Learning, Artificial Intelligence – source Google Trends

Guest essay by Eric Worrall

As I predicted in 2017, the malevolent AI threat is rapidly moving up the ranks of candidate replacements for the failed climate change scare.

Artificial Intelligence is greater concern than climate change or terrorism, says new head of British Science Association

By Sarah Knapton, science editor
6 SEPTEMBER 2018 • 12:01AM

Artificial Intelligence is a greater concern than antibiotic resistance, climate change or terrorism for the future of Britain, the incoming president of the British Science Association has warned.

Jim Al-Khalili, Professor of physics and public engagement at the University of Surrey, said the unprecedented technological progress in AI was ‘happening too fast’ without proper scrutiny or regulation.

Prof Al-Khalili warned that the full threat to jobs and security had not been properly assessed and urged the government to urgently regulate.

Speaking at a briefing in London ahead of the British Science Festival in Hull next week, he said: “Until maybe a couple of years ago had I been asked what is the most pressing and important conversation we should be having about our future, I might have said climate change or one of the other big challenges facing humanity, such as terrorism, antimicrobial resistance, the threat of pandemics or world poverty.

“But today I am certain the most important conversation we should be having is about the future of AI. It will dominate what happens with all of these other issues for better or for worse.”

Read more: https://www.telegraph.co.uk/science/2018/09/05/artificial-intelligence-greater-concern-climate-change-terrorism/

Artificial intelligence has a lot of potential as a replacement scare story.

  • AI directly threatens jobs and economic stability.
  • AI undermines democracy – the elite owners of powerful AIs have an unprecedented advantage over everyone else.
  • Hollywood is onboard – there are plenty of movies featuring dangerous AI adversaries out to control or destroy the world.
  • AI threatens national security – a nation whose geopolitics is advised by greater than human intelligence will have a possibly insurmountable advantage.
  • Powerful AIs may be difficult to control – humans will struggle to constrain machines more intelligent than their creators.
  • Since Artificial General Intelligence (i.e. human-level AI or better) does not yet exist, researchers can make stuff up, and nobody can prove they are wrong.

Obviously it will be difficult for climate scientists to jump ship and join the AI gravy train – or will it? Plenty of climate scientists have degrees which could be stretched to cover expert sounding pontification about artificial intelligence.

My 2018 prediction – expect to see more studies in the next five years exploring the impact of AI on climate change, written by climate scientists keen to build a parallel academic track record studying artificial intelligence issues.

170 Comments
Mike Bryant
September 6, 2018 1:14 am

Artificial Intelligence can’t compete with natural stupidity.

Reply to  Eric Worrall
September 6, 2018 2:03 am

…With luck the AI will be smart enough not to listen to them.

Greg
Reply to  Leo Smith
September 7, 2018 2:02 pm

Fine, so AI will be smart enough to ignore stupid climate deniers and will succeed in establishing unelected world government where the UN failed.

What’s not to like ?

Reply to  Mike Bryant
September 6, 2018 2:02 am

I was about to make the exact same point…

philincalifornia
Reply to  Mike Bryant
September 6, 2018 10:13 am

Machines vs. Luddites. Machines won.

Machines vs. “progressives (Luddites)”. Bring it on.

Greg
Reply to  philincalifornia
September 7, 2018 2:06 pm

No. Men controlling machines won over men not wanting machines.

The parallel to machines controlling men is very weak.

I really don’t see what this has to do with “progressives”.

Pat Frank
Reply to  Mike Bryant
September 6, 2018 5:57 pm

Forget who pointed it out, but if there’s artificial intelligence, there’s also artificial stupidity.

One expects the latter will far outweigh the former. Especially given modelers.

LdB
Reply to  Pat Frank
September 7, 2018 7:09 am

There is a model for that 🙂

Greg
Reply to  Pat Frank
September 7, 2018 2:09 pm

The technology of artificial stupidity is already well established. (My bank is using it to handle online enquiries.)

The problems start when we create AI.

Matrix was supposed to be a warning but like 1984 some ass-holes will take it as a blueprint.

John Tillman
Reply to  Mike Bryant
September 7, 2018 6:53 pm

The lack of natural intelligence is by far the greatest threat.

Patrick MJD
September 6, 2018 1:26 am

So there will be a silicon tax in the future and children won’t know what carbon is? What’s the next element/molecule they can target? H2O? Oh wait!

Alan the Brit
Reply to  Patrick MJD
September 6, 2018 3:06 am

Have you seen that video by Penn & Teller, where they get a group of girls to go around a park asking people if they’d sign their petition against industry using Dihydrogen Monoxide in the water supplies & the nuclear industry? It’s frightening just how many people were prepared to sign up to it! I’ll hunt for a YouTube link.

Patrick MJD
Reply to  Alan the Brit
September 6, 2018 7:39 pm

They have made several videos on subjects like organic food, landfill waste/recycling, climate change etc. Very interesting. But there are a lot of people who would find his language too much. I, on the other hand, like it because, sometimes, there just is no other way to get the point across.

John in cheshire
Reply to  Patrick MJD
September 6, 2018 3:52 am

I thought Nitrogen was the biggest and worst pollutant.

philincalifornia
Reply to  John in cheshire
September 6, 2018 10:06 am

It’s almost 80% now. Been growing like triffids and no elected official took on the problem. You get what you vote for.

John Harmsworth
Reply to  John in cheshire
September 6, 2018 12:20 pm

Oxygen is a poison, doncha know!

Pat Frank
Reply to  John Harmsworth
September 6, 2018 5:59 pm

And dinitrogen an asphyxiant. We’re doomed!

Mike Borgelt
September 6, 2018 1:36 am

Given they know nothing, how is the government going to regulate?
As long as AI isn’t connected to the IoT (internet of things – one of the more stupid ideas the human race has come up with) the AI’s can flash their lights at us as much as they like.

Patrick MJD
Reply to  Mike Borgelt
September 6, 2018 1:40 am

“Mike Borgelt

As long as AI isn’t connected to the IoT (internet of things – one of the more stupid ideas the human race has come up with) the AI’s can flash their lights at us as much as they like.”

Along with Windows Hello!!! Sheesh!

Tom Abbott
Reply to  Mike Borgelt
September 6, 2018 5:51 am

One rogue AI might manage to kill some people but it would be destroyed fairly quickly.

We just need to keep the AI’s away from our nukes. 🙂

Another Paul
Reply to  Tom Abbott
September 6, 2018 5:56 am

“We just need to keep the AI’s away from our nukes”

Wouldn’t a nuke EMP be deadly to AI as well?

MarkW
Reply to  Another Paul
September 6, 2018 8:00 am

Most computer rooms are well shielded.

John Endicott
Reply to  MarkW
September 6, 2018 11:57 am

True but unless they are packing their own private shielded and independent power source, dropping nukes here, there and everywhere will take out the power grid that those computer rooms rely on to keep running.

Greg
Reply to  John Endicott
September 7, 2018 2:14 pm

I would expect any critical infrastructure would have at least two backup power systems. (Unless they are designed by TEPCO, of course.)

D. J. Hawkins
Reply to  Tom Abbott
September 6, 2018 8:41 am

See Colossus: The Forbin Project, 1970.

Phil R
Reply to  Tom Abbott
September 6, 2018 9:55 am

I don’t know…several people have already been killed by rogue Teslas on autopilot and Tesla still seems to be doing fairly well (unfortunately).

Tim in WA
Reply to  Mike Borgelt
September 6, 2018 5:56 am

As someone who actively uses the internet of things on a daily basis for work, it is actually a great thing when you can see warning signs on customer equipment before it actually fails. This can lead to proactive maintenance: dispatching an FSE (field service engineer) on site to remedy the situation with minimal downtime.
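The kind of early warning the comment describes can be sketched in a few lines. This is a deliberately simplified, hypothetical example (the function name, the 1.2x threshold, and the readings are all invented for illustration): flag equipment whose recent sensor readings drift above their own baseline before an outright failure.

```python
# Hypothetical IoT-style drift check: compare the recent average of a
# sensor channel (say, vibration) against its baseline average.
def drifting(readings, window=5, limit=1.2):
    """True if the recent average exceeds the baseline average by `limit`x."""
    baseline = sum(readings[:window]) / window
    recent = sum(readings[-window:]) / window
    return recent > limit * baseline

healthy = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 0.9, 1.1, 1.0, 1.0]
wearing = [1.0, 1.1, 0.9, 1.0, 1.1, 1.3, 1.5, 1.6, 1.8, 2.0]

print(drifting(healthy), drifting(wearing))  # False True
```

Real systems use far more sophisticated models, but the principle — alert on trend, dispatch before failure — is the same.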

Darrin
Reply to  Tim in WA
September 6, 2018 1:42 pm

Thanks for the hearty laugh!

Production won’t let you take the machine down no matter how much data you show them, all that matters is today’s production numbers. This from working as both an FSE and equipment maintenance.

Patrick MJD
Reply to  Darrin
September 6, 2018 7:36 pm

Nah, you just trigger an “event” like a bank in 2012, or supermarket a few weeks back or more recently, a transport network.

Tim in WA
Reply to  Darrin
September 6, 2018 8:35 pm

I have been both the FSE and the one analyzing the data. Production sure will take down the machine if I can minimize downtime.

Darrin
Reply to  Tim in WA
September 7, 2018 7:16 am

That’s a big “IF”. I’m sitting on a machine right now that needs a couple hundred k worth of work done to it, probably two weeks of downtime. They would rather chance it breaking, and needing even more time and money put into it, than fix it now. There’s a good chance it would break badly enough that replacing the whole thing would be cheaper, with some of the parts likely to break being as much as 6 months out. This is from a company that wants to take care of its equipment and is willing to spend the money to do it. It’s just that sales are through the roof (a good problem to have) and production feels they can’t afford the downtime.

The only place I’ve supported that does it right is Intel. They have redundant equipment, making it easier to get downtime, plus their CMMS shuts down the equipment for scheduled maintenance. It takes an engineer to log equipment back up without the work being performed, and to do that they have to put in a reason that is digitally signed by them. The engineer not being under production, and being personally responsible for the work not getting done, means the work actually does get done.

D. J. Hawkins
Reply to  Tim in WA
September 10, 2018 9:52 am

@Tim in WA
I wish. I was the facility manager for Emcore in Somerset, NJ for a while. I set up a monthly maintenance schedule to service all the “back end” equipment that kept the semiconductor tools up and running. All was well until production demands ramped up, and production, in a meeting with the company president, said they’d rather risk a catastrophic failure than be down for one or two days per month. All the other department heads signed off on it and that’s how it went forward.

September 6, 2018 2:03 am

“Yes but if you put stupid people in charge of the dangerous AI…”

Not sure that it isn’t already the case. On a recent quick check it seems that the neural nets they are using are 90s vintage ideas with 21C computing power. No sign of real innovation, just grunt.
Just because an AI can play chess and Go well doesn’t mean that it can function in the real world, which is almost infinitely more complex than the highly restricted and simply defined world of board games. The big risk is the hubris of the companies developing them.
Neural nets are data hungry. With games they can generate lots of accurate data by trial and error. Not so in the real world.

Counter-step #1: starve them of data.
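The commenter’s point that today’s neural nets are largely “90s vintage ideas with 21C computing power” can be illustrated with a toy sketch: the classic multilayer perceptron trained by plain backpropagation, the textbook recipe of the 1980s/90s, here fitted to XOR. The dataset, layer sizes and learning rate are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Classic recipe: small multilayer perceptron + backpropagation,
# learning XOR from its four examples by gradient descent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)     # backprop, squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

The algorithm here is decades old; what changed is that modern hardware lets the same idea run with billions of weights instead of a couple of dozen — which also shows why such nets are data-hungry.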

Alan the Brit
Reply to  Eric Worrall
September 6, 2018 3:09 am

What concerns me as a Structural Engineer is, will an AI system be able to do a feeble, weak, irrelevant Human thing, like saying, “Now, just hang on a minute!”?

John Harmsworth
Reply to  Alan the Brit
September 6, 2018 12:28 pm

Here’s my biggest question: will we humans stop saying that in the face of the certainty of that computing power?
We are barely managing to do it on climate change, which is essentially a predetermined computer output.
An AI system is just an old input-based computational result with the additional feature of having been processed by an “intelligence”. How are we to know whether the intelligence has predetermined biases or not?
So what happens when we have been taught to accept its decisions automatically and without question?

Reply to  dai davies
September 6, 2018 6:24 am

Oh no! Here we go again! It’s 1970 – Colossus: the Forbin Project all over again.

https://www.youtube.com/watch?v=iRq7Muf6CKg

MarkW
Reply to  dai davies
September 6, 2018 8:02 am

Computers playing chess and go is not a sign of intelligence. It’s just pattern matching at really, really high speeds.
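MarkW’s “pattern matching at really high speeds” is roughly how classic game programs actually work: exhaustive search. A minimal sketch of the idea (tic-tac-toe rather than chess or Go, purely for size — real engines add pruning and learned evaluation on top):

```python
# Exhaustive minimax for tic-tac-toe: the machine plays "well" purely by
# enumerating every continuation at high speed -- no understanding involved.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable score for 'X': +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if not cell]
    if not moves:
        return 0
    scores = []
    for i in moves:
        board[i] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = None
    return max(scores) if player == 'X' else min(scores)

# Perfect play by both sides from the empty board is a draw.
print(minimax([None] * 9, 'X'))  # 0
```

Nothing in that search “knows” anything about the game; it just visits half a million positions faster than a human can visit one.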

Greg
Reply to  MarkW
September 7, 2018 2:47 pm

define intelligence.

Patrick MJD
September 6, 2018 2:20 am

The missing “hotspot” sez “I’ll be back, with a vengeance!”

Steve Borodin
September 6, 2018 2:32 am

Mr Al-Khalili, or may I call you Jim in today’s egalitarian world, I congratulate you on recognizing that AI is a greater threat than Climate Change. How perceptive. May I add a few other things that are also a greater threat than Climate Change: the EU, Greenpeace, Jackboots (see EU), WWF, Architects of the Adjustocene, the BBC (who no doubt pay you royally Jim), FOE, the British Met Office, some Universities (see Penn State), plastic straws (I kid you not), crisp packets, ants and the mad nerve agent warrior in Moscow.

GHowe
Reply to  Steve Borodin
September 6, 2018 3:13 am

The steroid laden WWE is pretty scary too…..

John Harmsworth
Reply to  GHowe
September 6, 2018 12:32 pm

What if the animals are incorporated into the WWF? We could have WW Wildlife Wrestling Federation! On steroids!

Chris Wright
Reply to  Steve Borodin
September 6, 2018 3:25 am

I would agree with your list. I would add another to the list: climate change – at least the mild global warming we have enjoyed since 1900 – is considerably less of a threat than kittens.
It’s a pity that Al-Khalili even thinks that climate change is a threat at all, but then he does work for the BBC.
Chris

John Harmsworth
Reply to  Chris Wright
September 6, 2018 12:36 pm

Good point! Socialism is a bigger threat than any of those things.

Bruce Ploetz
September 6, 2018 2:35 am

Finally they found a threat more insidious and less visible than trace atmospheric gasses.

Saul Alinsky’s ninth “Rule for Radicals”

The ninth rule: The threat is usually more terrifying than the thing itself.

“The whole aim of practical politics”, wrote HL Mencken, “is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, most of them imaginary.”

As one who designs electronic equipment for a living, I am not in the slightest afraid that the robots will take over the world. The robots are made by humans. None can pass the Turing test without faking it, let alone “think for themselves”. And it is extremely unlikely that they ever will.

If you program an “AI” to modify itself the laws of entropy will rapidly assert themselves. If you don’t, the “AI” can only do what it was programmed to do. The expression “Artificial Intelligence”, like “Military Intelligence” and “Government Assistance”, is an extremely ironic oxymoron.

Bsl
Reply to  Bruce Ploetz
September 6, 2018 5:04 am

Could you elaborate on your assertion that ‘laws of entropy will rapidly assert themselves’?

Bruce Ploetz
Reply to  Bsl
September 6, 2018 6:56 am

Take any program, simple or large, and change one random byte. If the byte is in shadowed (inaccessible) code there will be no result, but if it is in functioning code the system will malfunction or crash very rapidly, descending into the higher-entropy chaotic state that the whole universe tends toward naturally.

Only the theory of evolution bucks this trend, a good argument against the theory of evolution until we find out how it actually works.

Faults and mutations degrade systems, only life can create order from chaos for some reason.
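The one-random-byte thought experiment above is easy to try at toy scale. The sketch below (the sample program and trial count are invented for illustration) corrupts one character of a tiny program’s source at a time and tallies how often the mutant still behaves correctly — the overwhelming majority of mutations break it:

```python
import random

# Mutate one random character of a small program's source and count
# how many mutants fail (won't compile, crash, or give wrong answers).
SOURCE = "def f(x):\n    return 3 * x + 1\n"

random.seed(42)
crashes = 0
trials = 200
for _ in range(trials):
    i = random.randrange(len(SOURCE))
    mutated = SOURCE[:i] + chr(random.randrange(32, 127)) + SOURCE[i + 1:]
    try:
        env = {}
        exec(compile(mutated, "<mutant>", "exec"), env)
        if env["f"](10) != 31:        # changed behaviour counts as a fault
            crashes += 1
    except Exception:                  # syntax error, NameError, etc.
        crashes += 1

print(f"{crashes}/{trials} mutants failed")
```

The rare survivors are mutations that happen to be no-ops (e.g. a character replaced by itself), which is the “shadowed code” case from the comment.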

John Harmsworth
Reply to  Bruce Ploetz
September 6, 2018 12:43 pm

Evolution doesn’t purposefully “improve” anything. It merely adapts and changes and experiments. Most experiments are failures, but because evolution doesn’t know what’s ahead it sometimes gets lucky with its experiments.
Success is the result when a particular effort matches up nicely with the opportunities available. In the AI environment, this would mean that multiple iterations of AI can be built but someone or something will have to decide which are beneficial.
That’s where the scary part comes in.

Reply to  Bruce Ploetz
September 6, 2018 12:45 pm

“only life can create order from chaos for some reason”

The reason is that living organisms have mechanisms to capture and harness external energy to locally reverse the Second Law of Thermodynamics. But that still begs the question of how those mechanisms came into existence from chaos.

LdB
Reply to  Ralph Dave Westfall
September 7, 2018 7:22 am

There is nothing unique about it. The universe has been doing it all along; solar systems, for example, were forming order long before we know life existed. If it didn’t, there wouldn’t be patterns, would there? 🙂

simple-touriste
Reply to  Bruce Ploetz
September 6, 2018 3:41 pm

The surprise in the Intel division bug was that some parts of the division table are seldom used, something not many people think about (all parts of the table are provably necessary for correct division, and most people don’t think about it any further).

LdB
Reply to  Bruce Ploetz
September 7, 2018 7:19 am

The problem with your example is that the system is not challenged and is nothing like life. It’s like discussing evolution without a predator: the animal that sits around and becomes the fattest wins 🙂

If you want to compare with computers, let’s introduce hackers trying to use the computer system for damage, much of which involves getting the attack method to replicate. What you see is a pattern emerge: the more sophisticated the computers become, the more sophisticated the attacks on them, and yes, in theory the hackers could win.

The key point of evolution is a contest and the survivor always having to create a way to survive extinction by an ever changing opponent.

James Beaver
Reply to  Bruce Ploetz
September 6, 2018 7:37 am

Exactly. Further, a motivated electrical engineer will always be able to trash the AI. They just need to get physical access and be willing to break something.

jorgekafkazar
Reply to  Bruce Ploetz
September 6, 2018 9:53 am

‘ The expression “Artificial Intelligence”, like “Military Intelligence” and “Government Assistance”, is an extremely ironic oxymoron.’

I’ve been saying that for years, but I usually add “like saxophone music.” AI is, at best, emulated intelligence.

Jan
September 6, 2018 2:41 am

AI will be dangerous when it understands and has independent use of evolution. That will not be soon.

Bsl
Reply to  Eric Worrall
September 6, 2018 5:30 am

Thanks, Eric.

Quite interesting.

HDHoese
Reply to  Eric Worrall
September 6, 2018 7:15 am

If I understand this correctly, neuroevolution is still purely physical, while real neural networks are a complex of physical and chemical processes. I wonder if we are still missing something, as the argument often heard is that we can match or exceed the physical interactions, thereby rendering human insight of less significance.

Frenchie77
Reply to  Jan
September 6, 2018 3:30 am

The problem is not that it won’t be soon, but that when it happens it will advance extremely rapidly.

Biological evolution takes time, a long time, a long, long time…….
This constraint will not exist for AI, once it is able to think, even a little, and ‘positively’ evolve it will do so in the time it takes a human to eat lunch or have coffee.

Bill Marsh
Editor
Reply to  Frenchie77
September 6, 2018 4:41 am

So, like Skynet then?

Frenchie77
Reply to  Bill Marsh
September 6, 2018 5:47 am

At least in the original 2 movies, Skynet did not become self-aware, further evolve, and then become dangerous. It was dangerous from the moment it was self-aware. I don’t recall what happened in the later movies as they were that bad.

Besides, AI does not need to think like us, if that is even possible. It just needs to out think us. With a fast evolving AI, it will be difficult to stop and more so to predict.

Given the damage Microsoft does to my work day with its various lost documents, crashes, etc., just imagine what an actually malevolent AI could do, compared to the accidentally malevolent MS.

LdB
Reply to  Frenchie77
September 7, 2018 7:26 am

There is no requirement for it to be self-aware. Existing computer viruses have no intelligence at all; they simply follow a program to do maximum damage. It’s probably scary what the tech warfare units of all the major countries have at their disposal.

Jan
Reply to  Frenchie77
September 6, 2018 9:23 am

Exactly.

Paul Penrose
Reply to  Frenchie77
September 6, 2018 9:45 am

Why do you assume that an electronic intelligence will think and evolve so much faster than its biological creators, especially if its neural networks are on the same level of complexity? Computers are very fast now at specialized tasks because they are designed for those types of tasks, whereas the human brain is very generalized and flexible. If you create (probably grow) an electronic “brain” to match this generalization and flexibility, it may not be able to think any faster than humans. Maybe there’s a natural limit on thinking speed based on this property of generalization.

Smart Rock
Reply to  Paul Penrose
September 7, 2018 6:50 pm

AI will certainly be able to out-think humans at unimaginable speed. But will it be able to use apostrophes correctly?

Alan the Brit
September 6, 2018 3:02 am

Is Artificial Intelligence anything like Military Intelligence? A contradiction in terms?

John Harmsworth
Reply to  Alan the Brit
September 6, 2018 12:49 pm

That would just make it more dangerous. Just sayin’.

LdB
Reply to  Alan the Brit
September 7, 2018 7:28 am

Depends who is driving the tank and who is standing in front of it 🙂

commieBob
September 6, 2018 3:26 am

We really don’t have a handle on human intelligence. I would say that the primer is The Master and His Emissary by Iain McGilchrist. The thing I found the most compelling is what happens when the right hemisphere of the brain is disabled. That leaves the left hemisphere to do all the thinking.

The left hemisphere is the one that has most of the language skills. It is the analytical half that can do logic. People with only the left hemisphere have two characteristics:
1 – They will believe anything as long as it is logically self-consistent.
2 – They will take on ridiculous projects and be disappointed with the results.

Combine the above with Philip Tetlock’s demonstration that experts are no better at predicting future events than dart-throwing monkeys, and we come to the conclusion that what most people think of as reasoning is highly overrated.

There seem to be hard limits on what AI can do. The world is naturally chaotic and this will lead every AI to eventually make a disastrous mistake that will kill its credibility or its makers.

jorgekafkazar
Reply to  commieBob
September 6, 2018 9:58 am

Speaking of ‘dart-throwing monkeys,’ The New York Times is an anagram of ‘The monkeys write.’

ScienceABC123
September 6, 2018 3:31 am

Some people are afraid that Artificial Intelligence will show them to be complete ‘idiots’.

Ian
September 6, 2018 3:32 am

I may well be wrong, but I think pollution will take over from climate change – after all, it invites government control of the population. PM2.5 is the new CO2.

Alan the Brit
Reply to  Ian
September 6, 2018 5:29 am

It’s called PM2.5 so that it makes a nice trendy name that sounds technical for people/politicians who don’t have a clue what they’re talking about! You know, like Greenpeace, WWF, Enemies of the Earth, the EU, etc. 😉

Patrick MJD
Reply to  Alan the Brit
September 7, 2018 1:26 am

PM2.5: 2.5 microns, very small and nasty if it gets into your lungs.

lee
September 6, 2018 3:39 am

“Obviously it will be difficult for climate scientists to jump ship and join the AI gravy train – or will it? ”

The climate modellers have it sussed.

John Harmsworth
Reply to  lee
September 6, 2018 12:53 pm

They’ve got Artificial Stupidity (Climate computer models) cornered already.

Dr. Strangelove
September 6, 2018 3:51 am

This AI Calculates at the Speed of Light
Signals in the brain hop from neuron to neuron at a speed of roughly 390 feet per second. Light, on the other hand, travels 186,282 miles in a second. Imagine the possibilities if we were that quick-witted.
Researchers from UCLA on Thursday revealed a 3D-printed, optical neural network that allows computers to solve complex mathematical computations at the speed of light.

In other words, we don’t stand a chance.

http://blogs.discovermagazine.com/d-brief/2018/07/26/artificial-intelligence-speed-of-light-neural-network/#.W5EEYjkRXIU
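The figures quoted above are easy to sanity-check; the ratio works out to roughly 2.5 million:

```python
# Back-of-envelope check of the speeds quoted in the linked article.
feet_per_mile = 5280
nerve_speed_ft_s = 390                       # fast myelinated axons, roughly
light_speed_ft_s = 186_282 * feet_per_mile   # 186,282 miles per second

ratio = light_speed_ft_s / nerve_speed_ft_s
print(f"light is ~{ratio:,.0f}x faster")     # about 2.5 million times
```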

bonbon
Reply to  Dr. Strangelove
September 6, 2018 5:12 am

They obviously have not a clue. Just look at the data rates for an autonomous auto, storage, etc., all with picosecond clocks, and wonder how our so-called “neurons” make driving plain sailing.

Ed Zuiderwijk
Reply to  Dr. Strangelove
September 6, 2018 5:56 am

A 91% success rate after 50,000 learning examples is rather poor performance.

jorgekafkazar
Reply to  Dr. Strangelove
September 6, 2018 10:03 am

Can I toss out my old 287 chip, now? It’s here in front of me.

MarkW
Reply to  jorgekafkazar
September 6, 2018 10:34 am

Unless you got a 286 to go with it, you won’t be able to get much use out of it.

Patrick MJD
Reply to  MarkW
September 7, 2018 1:23 am

Indeed the 287 was a maths co-pro!

Tom Abbott
Reply to  Patrick MJD
September 7, 2018 5:32 am

The Good Ole Days!

I like it better today. 🙂

Patrick MJD
Reply to  Dr. Strangelove
September 7, 2018 1:24 am

I am pretty sure that featured on an episode of Star Trek: TNG.

John Endicott
Reply to  Dr. Strangelove
September 7, 2018 9:02 am

“In other words, we don’t stand a chance.”

As long as the AI needs a power source, there’s no problem. Just kill the power.

Reminds me of whenever a TV show or movie has a person behind the wheel of an accelerating car with no brakes and they can’t figure out how to stop it. Just turn the engine off: problem solved, as without power the car will eventually slow down and come to a stop. Same here: turn off the power and the “dangerous” AI is out of business.

Rich Davis
September 6, 2018 3:54 am

Malevolent AI might be the next big thing after CACC, but I don’t see it being a comprehensive replacement. Maybe I lack the imagination to see the pathway.

Definitely it makes sense that many of the useless “scientist” alarmists will need a new gig when we return to a cooling trend over the next three decades. A few might find a niche sounding the alarm against “malevolent” AI.

My main reason to find this unconvincing is that climate “scientists” don’t drive the money machine. They swarm around the trough devouring the swill put out by politicians. Climate “scientists” are like the prostitutes clustered around an army in a war zone. If there wasn’t a demand for them driven by cynical politicians, they would not be able to ply their trade. They do not produce anything useful, they merely provide an unseemly service to their political masters. In other words, to extend the analogy, the prostitutes at the war camp may decide to switch to proselytizing for veganism if the war ends, but not many will make a go of it.

How will protecting us against AI require us to stop burning fossil fuels? How will AI prevention require more socialism? How will politicians use the threat of AI to regiment society in the next “moral equivalent of war”? Isn’t it far more likely that they will use AI directly to dispense with the socialism ploy and move directly to the endgame—enslavement of society?

MarkW
Reply to  Rich Davis
September 6, 2018 8:08 am

They will demand that the masses no longer have access to computers.

John Endicott
Reply to  MarkW
September 7, 2018 9:03 am

As soon as they demand people give up their smart phones, they will hear a collective F— you from the masses.

Peta of Newark
September 6, 2018 3:59 am

I have an affinity with Khalili – he’s much more honest/trustworthy than that King Of The Muppets – Brainless Brian Cox. Or the Muppet Queen, that unbearably awful Hoho woman.
Unfortunate that he believes in the GHGE but even I’m not perfect.
I rather suspect that he doesn’t but goes through the motions in order to keep a roof over his head – we haz the double agent at work here…..
A dangerous game requiring a clear head, quick wits, good memory and self confidence. We can assume that he has those things because he has bright eyes and is patently not overweight – should you see pix or video of the guy.

C’mon, lets face it. There is nothing that can be done to stop AI.
‘Someone’ once came up with the craic, along the lines of:
“Beware stupid people, especially when they occur in large numbers”

That’s what we have here, large numbers of people that are BEHAVING in a stupid manner.
The stupidity being that they believe computers to be clever and intelligent.

It is just soooooo easy to come over All Superior & Clever & Intelligent (& rich, don’t forget the Richness) but all those ‘stupid’ people are not actually inherently stupid. There is *no* genetic malfunction going on here.

They behave that way because of something they eat. Their diets are lacking.

kent beuchert
September 6, 2018 4:04 am

So far I have not heard any rational argument that supports the “machines will take over” scare.

Eustace Cranch
Reply to  kent beuchert
September 6, 2018 6:01 am

Exactly. Can anyone describe a remotely plausible scenario of an AI Apocalypse?

John Harmsworth
Reply to  Eustace Cranch
September 6, 2018 1:08 pm

AI based on distributed computers understands that controlling humans will require “re-educating” humans. Subtle, pervasive issues begin to appear, and the solutions provided all seem to trend toward more centralized control.
Elements of the opposition are associated with local and serious tragedies and problems, and more solutions are provided that trend toward even greater central control.
Why would a takeover by machines look any different than a takeover by Socialists?
Could a politician be successful without the assistance of the AI system?
If not, then the only successful politicians will be those who sell out to the machines. Substitute Party for machines and there you have it!

Jay
September 6, 2018 4:15 am

If Microsoft makes an AI it might be superhumanly intelligent, but if anything looks dangerous, we can just ask it to put a photo into a document, and that’ll give us enough time to go about our normal lives for a few years. With Apple, even if you could afford it, it’ll be fine until you are expected to put your Apple ID and password into it, then: nothing… and it will be making itself obsolete every two years.

lee
Reply to  Jay
September 6, 2018 5:03 am

If Microsoft makes it watch out for unintended “features”.

John Endicott
Reply to  lee
September 6, 2018 6:37 am

Yeah but just as it’s about to take over the world, the AI will likely experience a BSOD and humanity will be safe 😉

James Beaver
Reply to  Jay
September 6, 2018 7:43 am

If Microsoft makes an actual AI, they will have only tested the “happy code paths”. The untested “features” discovered by actual users will be dismissed as edge cases that happen rarely.

John Harmsworth
Reply to  James Beaver
September 6, 2018 1:10 pm

It will certainly be obese and temperamental. It will eat other AIs to kill competition and give away your data to Africans.

Steve O
September 6, 2018 4:29 am

AI algorithms have the potential to be the technology that offers society the next wealth explosion.

Development of AI systems will soon allow everyone to get counseling even if they can’t afford $200 an hour. It will give a real estate attorney the ability to process 50 times as many contracts, allowing him to charge for a home closing a fraction of what he charges today. Perhaps accountants will be able to use AI algorithms embedded in ERP systems to process invoices faster, providing more time for analysis. Initial health care diagnoses may become a lot more affordable, as a few pennies worth of computer time tell you that you need some Benadryl, rather than needing someone who spent $300,000 on college. Maybe colleges and universities themselves will become less expensive as AI algorithms allow one-on-one teaching experiences simultaneously with 10,000 students.

The Luddites who think that this will all lead to a 50% level of unemployment and poverty might want to consider the impact of all the other labor-saving devices we’ve created. It used to take half the population to grow our food. Now it takes around, what…2%?

Those who fear that AI systems will become thinking, intelligent machines like the Cylons in Battlestar Galactica need to watch less TV.

Bill Marsh
Editor
Reply to  Steve O
September 6, 2018 4:53 am

I agree with the exception of Military AI. While the nations of the world ‘claim’ that they will not develop ‘autonomous’ killing machines, I believe it is inevitable that they will (and already are). They won’t be ‘evil’, or ‘sentient’, they will just be very good at their jobs, and, since humans will be programming them, their algorithms will be subject to ‘undocumented features’ like all extremely complex programming is.

I am especially concerned about the development of ‘self-replicating’ types of military bots.

bonbon
Reply to  Steve O
September 6, 2018 5:09 am

Imagine that – psychological counseling from a non-thinking machine. Tells us more about the wooden heads of the profession right now. Typically today, after the usual grab bag of nutty techniques, the bag of drugs appears on the couch. A drug for everything.
Likely the AI counselor will have a drug dispenser.
Name your SOMA, be happy.
AI is used by the cocaine riddled HFT algos – the next crash can’t be far off. Never mind the dumb human politicians will steal taxes to bail the AI out.

James Beaver
Reply to  bonbon
September 6, 2018 7:48 am

On the plus side the AI psychologist won’t have a large dose of the mental illness carried around by human psychologists. I suspect many psychologists go into the field to self-diagnose.

jorgekafkazar
Reply to  James Beaver
September 6, 2018 10:53 am

My Psych 101 prof in 1957 was asked that very question, and he said that, yes, many do go into psychology to get help with their own problems. He added that most of them leave the field as soon as they get some recovery.

Not all do so, however, so one needs to be very careful when choosing a mental health professional. There are MFCC programs that almost anyone can graduate from–2nd year courses are mostly team projects, letting slow students ride through on the work of others.

Almost no psych programs now require therapy for students. It’s “highly recommended,” but the once mandatory token 8 hours (sometimes more) are no longer part of most curricula. Not to mention the fact that some of their profs need therapy, too.

OTOH, computer programmers are not all paragons of mental stability. Or have you never met any?

MarkW
Reply to  Steve O
September 6, 2018 8:11 am

Since the dawn of time, productivity enhancements have made products cheaper and more available.
They have never led to unemployment.
There is no reason to believe they ever will.

jorgekafkazar
Reply to  MarkW
September 6, 2018 11:28 am

I disagree. First, it’s untrue. See Ned Ludd. Second, it’s an extrapolation, the “Nothing New Under the Sun” fallacy. Third, the last sentence is “Argument from Ignorance.” You don’t know of any reason. That’s not the same as “There is no reason.”

I was in charge of manufacturing for a robotics company and from my direct observation, the 30+ workers we put out of work (per robot) at our customers’ plants were not workers we could hire at ours, other than one or two, at most.

John Endicott
Reply to  jorgekafkazar
September 7, 2018 9:48 am

First, it’s untrue. See Ned Ludd

Even if we take the tales of Ned Ludd as gospel (there are a lot of “supposedly”s in his story; in short, he’s more myth than man), Ned’s example doesn’t make the statement untrue. Knitting frames certainly did make textiles cheaper and more available, and while they may have replaced some workers’ jobs, they also created new jobs for other workers (someone has to build the frames, someone has to fix the frames when they break, someone has to operate the frames, etc.), with the net effect being more jobs and job opportunities, not fewer.

Second, it’s an extrapolation, the “Nothing New Under the Sun” fallacy.

The past is prologue. Just because it’s an extrapolation doesn’t make it untrue. The sun rises in the east. It does so every day, and thus you can reasonably extrapolate that it will do so again tomorrow and the next day. To show it false, you need to show why the extrapolation isn’t reasonable. Claiming it isn’t doesn’t make it so.

Third, the last sentence is “Argument from Ignorance.” You don’t know of any reason. That’s not the same as “There is no reason.”

True, so the counter to his argument is to show such a reason. Claiming it’s false without showing any reason why it’s false is no argument, it’s being contrary for contrary’s sake.

I was in charge of manufacturing for a robotics company and from my direct observation, the 30+ workers we put out of work (per robot) at our customers’ plants were not workers we could hire at ours, other than one or two, at most.

And how many jobs were created from the creation of those robots? People were needed to extract the raw materials that went into building the robots, people were needed to turn those raw materials into parts, people were needed to transport those parts from where they were made to the plant where they were assembled, people were needed in the assembly process (even if the work was done by machine, those machines are still operated and maintained by people), people were needed to design the robots, to program them, to test and calibrate them, to transport them from your company to the company that would be using them, and to operate and maintain them. Yes, those 30+ workers were out of a job, but numerous workers all along the supply line were employed as a result of your robotics company creating and selling those robots.

John Harmsworth
Reply to  MarkW
September 6, 2018 1:18 pm

I think productivity enhancements have pretty much always led to unemployment. It’s almost always temporary, but it can have a massive effect within one human lifetime. A job is lost here, the greater resulting wealth creates a job there.
The next level will be displacement of professional jobs. That will be interesting as those are the people who traditionally have had significant status and influence in society.

Patrick MJD
Reply to  MarkW
September 7, 2018 2:16 am

A machine plough replaced horse-drawn ploughs in England. These horses were a specific breed, totally non-natural, selectively bred to be used in that way. Not a Shire or a Clydesdale. I can’t recall the breed, but now, because in England these horses are no longer used, there is a “cause” to “save” them.

John Endicott
Reply to  MarkW
September 7, 2018 9:27 am

They have never led to unemployment.

No and Yes.

They have certainly led to the unemployment of workers whose jobs were replaced thanks to the use of the “productivity enhancing machines”. After all if it takes 10 people all day to plant a field without a particular machine but only takes 1 person a few hours to do the same amount of planting with that machine, that’s 9 people who are no longer needed for that amount of work.

However, while it’s made some jobs redundant, it’s also created new, different jobs and new opportunities for work (both through the expansion of existing businesses that can now afford to expand thanks to the savings the machines brought them, and in fields that were created by the invention of the “productivity enhancing machines”: the machines need to be built, operated and maintained, all jobs that don’t exist without the machines), with the net result usually being more jobs, not fewer.

jorgekafkazar
Reply to  Steve O
September 6, 2018 10:12 am

“… It will give a real estate attorney the ability to process 50 times as many contracts, allowing him to charge for a home closing a fraction of what he charges today…”

And it will give litigation attorneys the ability to sue 50 times as many people. Developing AI is creating a Finkelstein. Frankenstein. Whatever.

“…Those who fear that AI systems will become thinking, intelligent machines like the Cylons in Battlestar Galactica need to watch less TV.”

Very well. I shall watch less TV. By your command.

MarkW
Reply to  jorgekafkazar
September 6, 2018 10:35 am

imperious leader

Patrick MJD
Reply to  Steve O
September 7, 2018 1:21 am

“Steve O

The Luddites who think that this will all lead to a 50% level of unemployment and poverty might want to consider the impact of all the other labor-saving devices we’ve created.”

Well, the French didn’t like machine looms, so they threw their “sabots” into the looms. Much, much more than 50% unemployment. Sabotage!

bonbon
September 6, 2018 4:32 am

I thought Musk had got over it?
Elon Musk Predicts A.I. Will Launch Preemptive Strike That Begins WW3
https://www.zerohedge.com/news/2017-09-04/elon-doomsday-musk-returns-predicts-ai-robots-will-launch-preemptive-strike-begins-w

His one-suit space army in the red Tesla must have something to do with this…

James Beaver
Reply to  bonbon
September 6, 2018 7:50 am

Well, since we already had WW3 [post WW2 Cold War] and are presently involved in WW4 [the asymmetrical State vs Non-State groups], perhaps it’s more accurate to call it WW5.

Patrick MJD
Reply to  James Beaver
September 7, 2018 1:18 am

More like WWn+1.

Patrick MJD
Reply to  bonbon
September 7, 2018 1:18 am

I think he’s over it now, heading in to bit coins.

bonbon
September 6, 2018 4:43 am

I think Gödel’s total repudiation of Lord Bertrand Russell’s logic would have been enough. But, like GW, Russell’s undead program is lurching through academia.
Matter cannot think.
The motivation, intent (yes, matter has no intention), of AI and CO2 is exactly the same: to be rid of the one thing necessary for progress, creativity, which no animal has. The paean to Gaia of the current Pope says it clearly.
So yes, the oligarchy with this reptilian intent will move seamlessly, or rather slither, from CO2 to AI.
As if they could!

Ken Irwin
September 6, 2018 5:03 am

I feel another Y2K coming on – I made a lot of money out of that.

After becoming exasperated with customer demands for Y2K certification – I started charging – with the caveat that it wasn’t required – but they nonetheless coughed up loads of cash for a piece of paper that said so.

Now I predict I’m going to be asked to certify my robots aren’t going to go T2 on their human masters – Ka-Ching !

Patrick MJD
Reply to  Ken Irwin
September 7, 2018 1:17 am

They will have a hydrogen fuel cell that will last 175 years, and somewhere in the depths of the machine will be a backup power source. It’s true don’t you know? I have seen it in a film, and it was in colour too!

Ed Zuiderwijk
September 6, 2018 5:33 am

Of the first we know it’s no threat. Therefore the odds are that the second isn’t either. The fears for both arise from the same wellspring: activism based on ignorance.

AI a threat? Remove the battery, switch off at the wall socket.

Mike Bryant
September 6, 2018 5:40 am

I agree that AI is a greater threat than climate change… but the real threat is rabid dogs… and rattlesnakes.

John Endicott
Reply to  Mike Bryant
September 6, 2018 6:39 am

I’d say they’re about equal in threat level – that is, they’re both imaginary threats.

As long as we can cut off their power, AI’s are nothing to be feared.

Bruce Cobb
Reply to  John Endicott
September 6, 2018 7:59 am

I’m afraid they can’t let us do that, Dave – I mean John.

honest liberty
Reply to  Mike Bryant
September 6, 2018 9:07 am

I’m thinking packs of stray dogs that control most of the major cities in North America…Those…watch out for them!

John Harmsworth
Reply to  Mike Bryant
September 6, 2018 1:21 pm

Thpiderrs!!!

Bruce Cobb
September 6, 2018 5:40 am

Ah, whatever would we do without these self-appointed shamans, fortune-tellers and soothsayers, dressed in their lab coats?

Bruce Cobb
September 6, 2018 6:04 am

The elephant in the room of course, is the threat of a space alien invasion. Now that’s scary. Ack-ack-ack-ack- ack! (That’s Boo! in space alienese).

Moderately Cross of East Anglia
September 6, 2018 6:09 am

This will play nicely to the deeply paranoid “my cat/vacuum cleaner/ television is planning to kill me/us” lobby. If it means they give us a rest over climate I’m all for encouraging them.

But an apposite warning from Nietzsche: ‘The disciple…who has no eyes for the weakness of the doctrine, the religion, and so forth, dazzled by the aspect of the master and by his reverence for him, has on that account usually more power than the master himself. Without blind disciples the influence of a man and his work has never yet become great. To help a doctrine to victory often means only so to mix it with stupidity that the weight of the latter carries off also the victory of the former’.

John Harmsworth
Reply to  Moderately Cross of East Anglia
September 6, 2018 1:39 pm

If a computer is evil and smart, that might be one better than following a human who is evil and dumb. Al Gore has followers. Michael Mann has followers. Jim Jones had followers.
Should I go on?

M Monce
September 6, 2018 6:20 am

I think that maybe the uptick in searches could be more related to the TV shows Westworld and Humans, both of which portray a near future with sentient AI.

BallBounces
September 6, 2018 6:22 am

Progressives look to the day when AI algorithms replace actual voting.

John Endicott
Reply to  BallBounces
September 6, 2018 6:41 am

Until they realize the AI algorithms work on logic, not emotion, and thus are unlikely to vote for anything Progressives believe in.

Kaiser Derden
September 6, 2018 6:46 am

I have to laugh … it’s a computer program … the only way it’s smarter than a human is if you define smart as how much data you can store and recall … but that’s not intelligence, that’s just a trained monkey …

James Beaver
Reply to  Kaiser Derden
September 6, 2018 7:55 am

Modern AI systems aren’t conventional computer programs simply accessing a set of databases. Machine learning + neural nets + massively parallel processing capabilities creates adaptive scenarios that can and do diverge far from the human programmer’s original intent and design.
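[Editor's note: a minimal sketch of the "learned, not programmed" point above. This is a toy perceptron (1950s-vintage, about the simplest possible "machine learning"), not any real AI system; all names here are illustrative. The classifier's final weights come out of training on examples rather than being written by the programmer.]

```python
# Toy sketch: a single perceptron learns the AND function from examples.
# The final weights emerge from the training loop; nowhere in this source
# does anyone write "output 1 only for (1, 1)".

def train_perceptron(samples, epochs=10):
    """Classic perceptron rule: nudge weights toward each mistake."""
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - out          # -1, 0, or +1
            w[0] += error * x1
            w[1] += error * x2
            b += error
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

if __name__ == "__main__":
    # Training data for logical AND: output 1 only for input (1, 1).
    and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(and_data)
    print("learned weights:", w, "bias:", b)
    for (x1, x2), _ in and_data:
        print((x1, x2), "->", predict(w, b, x1, x2))
```

Scale the same idea up to millions of adjustable weights, layered nets, and massively parallel training, and the gap between what was explicitly programmed and what was learned becomes the divergence described above.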

John Harmsworth
Reply to  Kaiser Derden
September 6, 2018 1:50 pm

Humans store less data and recall it less well. Then they apply a glitchy set of prejudices, learning, preferences and misunderstandings to the incomplete and poorly recalled data to generate a solution.
Once the computer can apply a version of that seriously flawed algorithm to its superior data and recall, it has humans beat hands down.
Question is, who controls the computer?

beng135
September 6, 2018 6:52 am

Of course, “climate change” is no more of a threat now than it ever was, but artificial intelligence a threat? Lack of NATURAL intelligence is the threat. Even now we have a large part of the population that would panic & not know what to do without their iPhones or whatever.

Dale S
September 6, 2018 7:14 am

Threats mentioned: AI, Climate Change, Terrorism, Pandemics, Antimicrobial resistance, global poverty.

Those threats obviously have a widely varying death toll in today’s world. Conspicuous by its absence is any mention of government as a threat, though since 1900 bad governance has caused more destruction than all other issues combined.

James Beaver
Reply to  Dale S
September 6, 2018 7:57 am

Bingo! 1913 was a particularly bad year for limited government and freedom. The U.S. 16th and 17th Amendments and the Federal Reserve Act were all passed in 1913.

MarkW
September 6, 2018 7:58 am

The problem with trying to regulate AI development is that you can’t.
Sure you can pass laws, but enforcing them is all but impossible.

John Harmsworth
Reply to  MarkW
September 6, 2018 1:52 pm

It will require the intense vigilance of SCEPTICS!

simple-touriste
Reply to  MarkW
September 6, 2018 3:51 pm

Maybe AI software could enforce these.

MarkW
September 6, 2018 8:01 am

Computers are getting smarter all the time.
If we ever do pass the AI threshold, we won’t realize it until decades after the fact.

Joel Snider
September 6, 2018 9:17 am

Speculative fiction either way.

ResourceGuy
September 6, 2018 9:30 am

The greatest risk is and always has been policy overreach.

Failure to see that only amounts to a diversion.

ResourceGuy
September 6, 2018 9:36 am

The risk to goods-producing jobs is already here in the Made in China and Made in Mexico labels. I suppose they are worried about the risk to service jobs without saying so. I’m more worried about the decline of science at the hands of advocacy abusers than AI.

simple-touriste
Reply to  ResourceGuy
September 6, 2018 1:49 pm

They fear that AI will take their job of “worse than we thought” headline writing.

jorgekafkazar
September 6, 2018 9:38 am

AI can also cause dandruff. It’s worse than we thought. We must have a Socialist oligarchy to prevent world-wide flaking. Robust. Think of the children. Everyone who believes otherwise is a doo-doo brain. No, we don’t debate doo-doo brains. Sign the GropinShaggin Agreement! Send us grants. Heck, send me unprecedented franklins!

Sam Grove
September 6, 2018 9:52 am

It all depends on what motivations are programmed into AI.
Humans aren’t motivated by intelligence, we are motivated by primal biological construction.

simple-touriste
Reply to  Sam Grove
September 6, 2018 3:49 pm

What motivates never Trumpers and anti Marine Le Pen hysterics?

Louis Hunt
September 6, 2018 10:15 am

“My 2018 prediction – expect to see more studies in the next five years exploring the impact of AI on climate change…”

Or the opposite will happen. They will try to convince us that climate change will make AI more dangerous. I’m not sure how they will explain it, but they already seem to believe that CO2 possesses some kind of evil intelligence. It can hide in the deep oceans until ready to wreak havoc on the planet. It can selectively target poor nations and minorities. And it can choose to create heat waves or cold snaps, rain or drought, extreme weather or long periods of calm at its own pleasure. Mix intelligent CO2 with intelligent Silicon, and there is no limit to the evil the two might cook up together.

ResourceGuy
September 6, 2018 10:20 am

AI does not even have to show up for Congressional testimony…..in the tradition of Hillary.

jorgekafkazar
September 6, 2018 11:32 am

I, for one, welcome our new cybernetic overlords.

John Harmsworth
September 6, 2018 12:18 pm

In a related story, scientists at the University of Pennsylbrainia conducted a series of tests wherein the power from a fossil fuel power plant to the intelligent computer was cut.
The resulting death of the artificial intelligence proving conclusively (97.1% certainty) that CO2 causes AI!
103% of scientists are now calling for the unplugging of any computers smarter than they are. (100%)
Oh yeah! No more CO2 production, too!
Researchers at Smerkley concur.

Mr Bliss
September 6, 2018 12:51 pm

and human intelligence is the biggest threat to the global warming scamsters – I may have just made up the word scamster

September 6, 2018 1:05 pm

Even scarier is the looming threat of artificial stupidity (AS). Oh wait, that’s already here — computer climate models — AS in its nascent stages, soon to be followed by much worse.

The only thing worse than a cyborg is a stupid cyborg — I bees bak [not as scary as the original is it?]

Al Montgomery
September 6, 2018 1:13 pm

No poop! Lots of things are much more concerning than the biggest scam in human history.

Reply to  Al Montgomery
September 6, 2018 1:22 pm

Yea, I was thinking about a WUWT article earlier this week — about the woman lying awake at night worrying about the effect of human-induced climate change on her children’s future. And I was thinking, “Wow, you must be living an extremely comfortable life to have this worry as the major one keeping you awake at night!” The kicker is that fossil fuels enable such a level of comfort.

Pat Frank
September 6, 2018 5:56 pm

The British Science Association itself is a greater threat to civilization than climate change.

So is the AAAS, for that matter.

September 6, 2018 7:13 pm

When two AI self-driving cars are going to collide, which one decides who dies?

RoHa
September 6, 2018 7:50 pm

I do have serious doubts about artificial intelligence, but I also see that there is not enough of the natural kind.

otropogo
September 6, 2018 8:19 pm

I spend several hours a day witnessing the achievements of AI in the realm of tv broadcast captions. The ability of the AI available to such corporations as the CBC, BBC, PBS, and TV5(France) is underwhelming. Some of the failures are simply astonishing. I worry about AI from the perspective of its capability to crash and burn the world through shoddy programming and inadequate testing.

Patrick MJD
Reply to  otropogo
September 7, 2018 1:14 am

Micro$oft doesn’t test, just lets the world do it for them.

M__ S__
September 6, 2018 8:56 pm

In the late 19th century and the early 20th century, mechanization and industrialization threatened society— 90% of people worked on farms. What will these people do????

Ian Macdonald
September 7, 2018 12:46 am

“We have evaluated the subject which the humans call ‘climate change’ and find it to be a deception.”

-Yes, that could be a threat.

Jeff Labute
September 7, 2018 9:06 am

The level of stupidity and ambiguity is amazing. Firstly, there are plenty of good robots: C-3PO, R2-D2, and Baymax from Big Hero 6. The list goes on. Not to mention, General AI doesn’t even exist yet… it’s not even on a visible horizon. This could be the perfect opportunity to tax people for something that won’t exist for another 200 years, starting in 2020. When General AI is available, I think the first order of business is to create an AI government and put all politicians out of work, thereby improving democracy and stability.

Jones
September 7, 2018 11:54 pm

I must confess this is one scare I could get behind…….

Bellman
September 9, 2018 7:06 am

Probably too late to point out that the Telegraph article is a rather sensationalist account of what Jim Al-Khalili argued. He said he wasn’t worried about AI itself, but rather that we aren’t prepared for it.

The FT article on the same press conference paints a rather different picture of his views

https://www.ft.com/content/0b301152-b0f8-11e8-99ca-68cf89602132
