
Guest essay by Eric Worrall
As I predicted in 2017, the malevolent AI threat is rapidly moving up the ranks of candidate replacements for the failed climate change scare.
Artificial Intelligence is greater concern than climate change or terrorism, says new head of British Science Association
By Sarah Knapton, science editor
6 SEPTEMBER 2018 • 12:01AM
Artificial Intelligence is a greater concern than antibiotic resistance, climate change or terrorism for the future of Britain, the incoming president of the British Science Association has warned.
Jim Al-Khalili, Professor of physics and public engagement at the University of Surrey, said the unprecedented technological progress in AI was ‘happening too fast’ without proper scrutiny or regulation.
Prof Al-Khalili warned that the full threat to jobs and security had not been properly assessed, and urged the government to regulate the field urgently.
Speaking at a briefing in London ahead of the British Science Festival in Hull next week, he said: “Until maybe a couple of years ago had I been asked what is the most pressing and important conversation we should be having about our future, I might have said climate change or one of the other big challenges facing humanity, such as terrorism, antimicrobial resistance, the threat of pandemics or world poverty.
“But today I am certain the most important conversation we should be having is about the future of AI. It will dominate what happens with all of these other issues for better or for worse.
…
Artificial intelligence has a lot of potential as a replacement scare story.
- AI directly threatens jobs and economic stability.
- AI undermines democracy – the elite owners of powerful AIs have an unprecedented advantage over everyone else.
- Hollywood is onboard – there are plenty of movies featuring dangerous AI adversaries out to control or destroy the world.
- AI threatens national security – a nation whose geopolitics is advised by greater than human intelligence will have a possibly insurmountable advantage.
- Powerful AIs may be difficult to control – humans will struggle to constrain machines more intelligent than their creators.
- Since Artificial General Intelligence (i.e. human level AI or better than human AI) does not yet exist, researchers can make stuff up, and nobody can prove they are wrong.
Obviously it will be difficult for climate scientists to jump ship and join the AI gravy train – or will it? Plenty of climate scientists have degrees which could be stretched to cover expert sounding pontification about artificial intelligence.
My 2018 prediction – expect to see more studies in the next five years exploring the impact of AI on climate change, written by climate scientists keen to build a parallel academic track record studying artificial intelligence issues.
Artificial Intelligence can’t compete with natural stupidity.
Yes but if you put stupid people in charge of the dangerous AI… 😉
…With luck the AI will be smart enough not to listen to them.
Fine, so AI will be smart enough to ignore stupid climate deniers and will succeed in establishing unelected world government where the UN failed.
What’s not to like?
I was about to make the exact same point…
Machines vs. Luddites. Machines won.
Machines vs. “progressives (Luddites)”. Bring it on.
No. Men controlling machines won over men not wanting machines.
The parallel to machines controlling men is very weak.
I really don’t see what this has to do with “progressives”.
I forget who pointed it out, but if there’s artificial intelligence, there’s also artificial stupidity.
One expects the latter will far outweigh the former. Especially given modelers.
There is a model for that 🙂
The technology of artificial stupidity is already well established. (My bank is using it to handle online enquiries.)
The problems start when we create AI.
The Matrix was supposed to be a warning, but like 1984, some assholes will take it as a blueprint.
The lack of natural intelligence is by far the greatest threat.
So there will be a silicon tax in the future and children won’t know what carbon is? What’s the next element/molecule they can target? H2O? Oh wait!
Have you seen that video by Penn & Teller, where they get a group of girls to go around a park asking people if they’d sign their petition against industry using Dihydrogen Monoxide in the water supplies & the nuclear industry? It’s frightening just how many people were prepared to sign up to it! I’ll hunt for a YouTube link.
https://www.gopetition.com/petitions/ban-dihydrogen-oxide.html
See also:
https://en.wikipedia.org/wiki/Dihydrogen_monoxide_hoax
They have made several videos on subjects like organic food, landfill waste/recycling, climate change etc. Very interesting. But there are a lot of people who would find his language too much. I, on the other hand, like it because sometimes there is just no other way to get the point across.
I thought Nitrogen was the biggest and worst pollutant.
It’s almost 80% now. Been growing like triffids and no elected official took on the problem. You get what you vote for.
Oxygen is a poison, doncha know!
And dinitrogen an asphyxiant. We’re doomed!
Given they know nothing, how is the government going to regulate?
As long as AI isn’t connected to the IoT (internet of things – one of the more stupid ideas the human race has come up with) the AIs can flash their lights at us as much as they like.
“Mike Borgelt
As long as AI isn’t connected to the IoT (internet of things – one of the more stupid ideas the human race has come up with) the AIs can flash their lights at us as much as they like.”
Along with Windows Hello!!! Sheesh!
One rogue AI might manage to kill some people but it would be destroyed fairly quickly.
We just need to keep the AIs away from our nukes. 🙂
“We just need to keep the AIs away from our nukes”
Wouldn’t a nuke EMP be deadly to AI as well?
Most computer rooms are well shielded.
True but unless they are packing their own private shielded and independent power source, dropping nukes here, there and everywhere will take out the power grid that those computer rooms rely on to keep running.
I would expect any critical infrastructure would have at least two backup power systems. (Unless they are designed by TEPCO, of course.)
See Colossus: The Forbin Project, 1970.
I don’t know…several people have already been killed by rogue Teslas on autopilot and Tesla still seems to be doing fairly well (unfortunately).
As someone who actively uses the internet of things on a daily basis for work, it is actually a great thing when you can see warning signs on customer equipment before it actually fails. This can lead to proactive maintenance: dispatching an FSE (field service engineer) on site to remedy the situation with minimal downtime.
Thanks for the hearty laugh!
Production won’t let you take the machine down no matter how much data you show them; all that matters is today’s production numbers. This is from working as both an FSE and in equipment maintenance.
Nah, you just trigger an “event” like a bank in 2012, or a supermarket a few weeks back, or more recently, a transport network.
I have been both the FSE and the one analyzing the data. Production sure will take down the machine if I can minimize downtime.
That’s a big “IF”. I’m sitting on a machine right now that needs a couple hundred k worth of work done to it, probably two weeks of downtime. They would rather chance it breaking, and needing even more time and money put into it, than fix it now. There’s a good chance it would break badly enough that it would be cheaper to replace the whole thing, with some of the parts that are likely to break being as much as 6 months out. This is from a company that wants to take care of its equipment and is willing to spend the money to do it. It’s just that sales are through the roof (a good problem to have) and production feels they can’t afford the downtime.
The only place I’ve supported that does it right is Intel. They have redundant equipment, which makes getting downtime easier, plus their CMMS (computerized maintenance management system) shuts down the equipment for scheduled maintenance. It takes an engineer to log equipment back up without the work being performed, and to do that they have to put in a reason that is digitally signed by them. Because the engineer is not under production and is personally responsible if the work doesn’t get done, the work actually does get done.
@Tim in WA
I wish. I was the facility manager for Emcore in Somerset, NJ for a while. I set up a monthly maintenance schedule to service all the “back end” equipment that kept the semiconductor tools up and running. All was well until production demands ramped up, and production, in a meeting with the company president, said they’d rather risk a catastrophic failure than be down for one or two days per month. All the other department heads signed off on it and that’s how it went forward.
“Yes but if you put stupid people in charge of the dangerous AI…”
Not sure that it isn’t already the case. On a recent quick check, it seems that the neural nets they are using are 90s-vintage ideas running on 21st-century computing power. No sign of real innovation, just grunt.
Just because an AI can play chess and Go well doesn’t mean that it can function in the real world, which is almost infinitely more complex than the highly restricted and simply defined world of board games. The big risk is the hubris of the companies developing them.
Neural nets are data hungry. With games they can generate lots of accurate data by trial and error. Not so in the real world.
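To make that concrete, here is a minimal sketch – purely illustrative, not taken from any of the systems discussed – of what “generating lots of accurate data by trial and error” looks like for a game: random self-play in tic-tac-toe, with every position labelled by the game’s eventual outcome.

```python
import random

# Illustrative only: "trial and error" data generation for a game.
# Random self-play in tic-tac-toe; every position that occurs is
# labelled with the game's eventual outcome, giving cheap, perfectly
# accurate training data of a kind the real world rarely provides.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_game():
    """Play one game of random moves; return (positions seen, result)."""
    board, seen, player = [" "] * 9, [], "X"
    while winner(board) is None and " " in board:
        move = random.choice([i for i, s in enumerate(board) if s == " "])
        board[move] = player
        seen.append("".join(board))
        player = "O" if player == "X" else "X"
    return seen, winner(board) or "draw"

dataset = []
for _ in range(10_000):
    positions, result = random_game()
    dataset.extend((p, result) for p in positions)

print(f"{len(dataset)} labelled positions from 10,000 games")
```

A few seconds of compute yields tens of thousands of labelled positions; getting equally clean labels from the real world is the hard part.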
Counter-step #1: starve them of data.
Ha – Horror writer Dean Koontz covered what happens if you try to starve a powerful AI of data in his story “Demon Seed”. It found a chink in its cage.
What concerns me as a Structural Engineer is whether an AI system will be able to do a feeble, weak, irrelevant Human thing, like saying, “Now, just hang on a minute!”
Here’s my biggest question: will we humans stop saying that in the face of the certainty of that computing power?
We are barely managing to do it on climate change, which is essentially a predetermined computer output.
An AI system is just an old input-based computational result with the additional feature of having been processed by an “intelligence”. How are we to know whether the intelligence has predetermined biases or not?
So what happens when we have been taught to accept its decisions automatically and without question?
Oh no! Here we go again! It’s 1970 – Colossus: the Forbin Project all over again.
https://www.youtube.com/watch?v=iRq7Muf6CKg
Computers playing chess and go is not a sign of intelligence. It’s just pattern matching at really, really high speeds.
define intelligence.
The missing “hotspot” sez “I’ll be back, with a vengeance!”
Mr Al-Khalili, or may I call you Jim in today’s egalitarian world, I congratulate you on recognizing that AI is a greater threat than Climate Change. How perceptive. May I add a few other things that are also a greater threat than Climate Change: the EU, Greenpeace, Jackboots (see EU), WWF, Architects of the Adjustocene, the BBC (who no doubt pay you royally, Jim), FOE, the British Met Office, some Universities (see Penn State), plastic straws (I kid you not), crisp packets, ants and the mad nerve agent warrior in Moscow.
The steroid laden WWE is pretty scary too…..
What if the animals are incorporated into the WWF? We could have WW Wildlife Wrestling Federation! On steroids!
I would agree with your list. I would add another to the list: climate change – at least the mild global warming we have enjoyed since 1900 – is considerably less of a threat than kittens.
It’s a pity that Al-Khalili even thinks that climate change is a threat at all, but then he does work for the BBC.
Chris
Good point! Socialism is a bigger threat than any of those things.
Finally they found a threat more insidious and less visible than trace atmospheric gasses.
Saul Alinsky’s ninth “Rule for Radicals”
As one who designs electronic equipment for a living, I am not in the slightest afraid that the robots will take over the world. The robots are made by humans. None can pass the Turing test without faking it, let alone “think for themselves”. And it is extremely unlikely that they ever will.
If you program an “AI” to modify itself the laws of entropy will rapidly assert themselves. If you don’t, the “AI” can only do what it was programmed to do. The expression “Artificial Intelligence”, like “Military Intelligence” and “Government Assistance”, is an extremely ironic oxymoron.
Could you elaborate on your assertion that ‘laws of entropy will rapidly assert themselves’?
Take any program, simple or large, and change one random byte. If the byte is in shadowed (inaccessible) code there will be no effect, but if it is in functioning code the system will malfunction or crash very rapidly, descending into the higher-entropy chaotic state that the whole universe tends toward naturally.
Only the theory of evolution bucks this trend – a good argument against the theory of evolution until we find out how it actually works.
Faults and mutations degrade systems; only life can create order from chaos, for some reason.
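The byte-flip claim is easy to test for yourself. Here is a minimal sketch that corrupts one character of a program’s source text instead of one byte of a binary – safer to run, same principle: most mutants fail outright or give wrong answers, and only the rare harmless hit survives.

```python
import random

# Corrupt one random character of a working program and see how often
# the mutant still runs at all, let alone still gives the right answer.
SOURCE = "def f(x):\n    return 3 * x * x + 2 * x + 1\n"
EXPECTED = 3 * 7 * 7 + 2 * 7 + 1

def mutate(src):
    i = random.randrange(len(src))
    return src[:i] + chr(random.randrange(32, 127)) + src[i + 1:]

survived = 0
for _ in range(1000):
    scope = {}
    try:
        exec(mutate(SOURCE), scope)      # most mutants fail to compile
        if scope["f"](7) == EXPECTED:    # or compile but misbehave
            survived += 1
    except Exception:
        pass                             # the mutant simply died
print(f"{survived}/1000 one-character mutants still gave the right answer")
```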
Evolution doesn’t purposefully “improve” anything. It merely adapts and changes and experiments. Most experiments are failures, but because evolution doesn’t know what’s ahead it sometimes gets lucky with its experiments.
Success is the result when a particular effort matches up nicely with the opportunities available. In the AI environment, this would mean that multiple iterations of AI can be built but someone or something will have to decide which are beneficial.
That’s where the scary part comes in.
“only life can create order from chaos for some reason”
The reason is that living organisms have mechanisms to capture and harness external energy to decrease entropy locally (the Second Law still holds overall). But that still raises the question of how those mechanisms came into existence from chaos.
There is nothing unique about it; the universe – solar systems, for example – has been doing it since long before we know life even existed. If it didn’t, there wouldn’t be patterns, would there 🙂
The surprise in the Intel division bug was that some parts of the division table are seldom used – something not many people think about (all parts of the table are provably necessary for correct division, and most people don’t think about it any further).
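For anyone who wants to see it, the canonical check for that 1994 Pentium FDIV bug (Tim Coe’s famous pair of numbers) fits in a couple of lines:

```python
# The classic Pentium FDIV check. On a flawed chip, the seldom-used
# entries of the division lookup table made this come out around 256;
# any correct FPU gives (essentially) zero.
x, y = 4195835.0, 3145727.0
print(x - (x / y) * y)
```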
The problem with your example is that the system is not challenged, so it is nothing like life. It’s like discussing evolution without a predator: the animal that sits around and becomes the fattest wins 🙂
If you want to compare with computers, let’s introduce hackers trying to use the computer system for damage, much of which involves getting the attack method to replicate. What you see is a pattern emerge: the more sophisticated the computers become, the more sophisticated the attacks on them – and yes, in theory the hackers could win.
The key point of evolution is a contest, with the survivor always having to find a way to survive extinction by an ever-changing opponent.
Exactly. Further, a motivated electrical engineer will always be able to trash the AI. They just need to get physical access and be willing to break something.
‘ The expression “Artificial Intelligence”, like “Military Intelligence” and “Government Assistance”, is an extremely ironic oxymoron.’
I’ve been saying that for years, but I usually add “like saxophone music.” AI is, at best, emulated intelligence.
AI will be dangerous when it understands and has independent use of evolution. That will not be soon.
NeuroEvolution of Augmenting Topologies (NEAT) combines neural net technology with evolution, and is uniquely well adapted to solving real-time control problems. Until quite recently this wasn’t thought to be possible, but a brilliant AI researcher who lives in Florida figured out a way to make it work.
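To make the idea concrete, here is a toy sketch in the spirit of NEAT, with everything simplified: real NEAT also mutates the network topology and uses speciation, while this evolves only the weights of a fixed 2-2-1 network on XOR, the classic demonstration problem. All parameter values are arbitrary.

```python
import math
import random

# Toy neuroevolution: evolve the 9 weights of a fixed 2-2-1 network
# until it computes XOR. Real NEAT also grows the topology itself.

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))       # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):                     # w: 6 hidden + 3 output weights
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):                        # negative squared error: higher is better
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

population = [[random.gauss(0, 1) for _ in range(9)] for _ in range(150)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    parents = population[:30]          # survival of the fittest
    population = [[w + random.gauss(0, 0.5) for w in random.choice(parents)]
                  for _ in range(150)]
    population[:30] = parents          # elitism: keep the parents around

best = max(population, key=fitness)
print([round(forward(best, x)) for x, _ in CASES])  # hoping for [0, 1, 1, 0]
```

No gradients anywhere: nothing but mutation and selection, which is why the approach extends to control problems where no differentiable error signal exists.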
Thanks, Eric.
Quite interesting.
If I understand this correctly, neuroevolution is still purely physical, while real neural networks are a complex of physical and chemical processes. I wonder if we are still missing something, as the argument often heard is that we can match or exceed the physical interactions, therefore rendering human insight less significant.
The problem is not that it won’t be soon, but that when it happens it will advance extremely rapidly.
Biological evolution takes time, a long time, a long, long time…….
This constraint will not exist for AI, once it is able to think, even a little, and ‘positively’ evolve it will do so in the time it takes a human to eat lunch or have coffee.
So, like Skynet then?
At least in the original two movies, Skynet did not become self-aware, further evolve, and then become dangerous – it was dangerous from the moment it was self-aware. I don’t recall what happened in the later movies, as they were that bad.
Besides, AI does not need to think like us, if that is even possible. It just needs to out think us. With a fast evolving AI, it will be difficult to stop and more so to predict.
Given the damage Microsoft does to my work day with its various lost documents, crashes, etc., just imagine what an actually malevolent AI could do, as compared to the accidentally malevolent MS.
There is no requirement for it to be self-aware. Existing computer viruses have no intelligence at all; they simply follow a program to do maximum damage. It’s probably scary what the tech-warfare units of all the major countries have at their disposal.
Exactly.
Why do you assume that an electronic intelligence will think and evolve so much faster than its biological creators, especially if its neural networks are on the same level of complexity? Computers are very fast now at specialized tasks because they are designed for those types of tasks, whereas the human brain is very generalized and flexible. If you create (probably grow) an electronic “brain” to match this generalization and flexibility, it may not be able to think any faster than humans. Maybe there’s a natural limit on thinking speed based on this property of generalization.
AI will certainly be able to out-think humans at unimaginable speed. But will it be able to use apostrophes correctly?
Is Artificial Intelligence anything like Military Intelligence? A contradiction in terms?
That would just make it more dangerous. Just sayin’.
Depends who is driving the tank and who is standing in front of it 🙂
We really don’t have a handle on human intelligence. I would say that the primer is The Master and His Emissary by Iain McGilchrist. The thing I found the most compelling is what happens when the right hemisphere of the brain is disabled. That leaves the left hemisphere to do all the thinking.
The left hemisphere is the one that has most of the language skills. It is the analytical half that can do logic. People with only the left hemisphere have two characteristics:
1 – They will believe anything as long as it is logically self-consistent.
2 – They will take on ridiculous projects and be disappointed with the results.
Combine the above with Philip Tetlock’s demonstration that experts are no better at predicting future events than dart-throwing monkeys, and we come to the conclusion that what most people think of as reasoning is highly overrated.
There seem to be hard limits on what AI can do. The world is naturally chaotic and this will lead every AI to eventually make a disastrous mistake that will kill its credibility or its makers.
Speaking of ‘dart-throwing monkeys,’ The New York Times is an anagram of ‘The monkeys write.’
Some people are afraid that Artificial Intelligence will show them to be complete ‘idiots’.
I may well be wrong, but I think pollution will take over from climate change – after all, it begs government to control the population. PM2.5 is the new CO2.
It’s called PM2.5 so that it makes a nice trendy name that sounds technical for people/politicians who don’t have a clue what they’re talking about! You know, like Greenpeace, WWF, Enemies of the Earth, the EU, etc. 😉
PM2.5: particulate matter 2.5 microns or smaller – very small, and nasty if it gets into your lungs.
“Obviously it will be difficult for climate scientists to jump ship and join the AI gravy train – or will it? ”
The climate modellers have it sussed.
They’ve got Artificial Stupidity (Climate computer models) cornered already.
This AI Calculates at the Speed of Light
Signals in the brain hop from neuron to neuron at a speed of roughly 390 feet per second. Light, on the other hand, travels 186,282 miles in a second. Imagine the possibilities if we were that quick-witted.
Researchers from UCLA on Thursday revealed a 3D-printed, optical neural network that allows computers to solve complex mathematical computations at the speed of light.
In other words, we don’t stand a chance.
http://blogs.discovermagazine.com/d-brief/2018/07/26/artificial-intelligence-speed-of-light-neural-network/#.W5EEYjkRXIU
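Taking the article’s two figures at face value, the gap is easy to put a number on:

```python
# Neural signal ~390 ft/s vs light at 186,282 miles/s (the figures
# quoted above). Light comes out roughly 2.5 million times faster.
neuron_ft_per_s = 390
light_ft_per_s = 186_282 * 5_280      # miles per second -> feet per second
print(f"{light_ft_per_s / neuron_ft_per_s:,.0f}x faster")
```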
They obviously have not a clue. Just look at the data rates for an autonomous auto, storage, etc., all with picosecond clocks, and wonder how our so-called “neurons” make driving plain sailing.
A 91% success rate after 50,000 training examples is rather poor performance.
Can I toss out my old 287 chip, now? It’s here in front of me.
Unless you got a 286 to go with it, you won’t be able to get much use out of it.
Indeed the 287 was a maths co-pro!
The Good Ole Days!
I like it better today. 🙂
I am pretty sure that featured on an episode of Star Trek: TNG.
In other words, we don’t stand a chance.
As long as the AI needs a power source, there’s no problem. Just kill the power.
Reminds me of whenever a TV show or movie has a person behind the wheel of an accelerating car with no brakes and they can’t figure out how to stop it. Just turn the engine off; problem solved, as without power the car will eventually slow down and come to a stop. Same here: turn off the power and the “dangerous” AI is out of business.
Malevolent AI might be the next big thing after CACC, but I don’t see it being a comprehensive replacement. Maybe I lack the imagination to see the pathway.
Definitely it makes sense that many of the useless “scientist” alarmists will need a new gig when we return to a cooling trend over the next three decades. A few might find a niche sounding the alarm against “malevolent” AI.
My main reason to find this unconvincing is that climate “scientists” don’t drive the money machine. They swarm around the trough devouring the swill put out by politicians. Climate “scientists” are like the prostitutes clustered around an army in a war zone. If there wasn’t a demand for them driven by cynical politicians, they would not be able to ply their trade. They do not produce anything useful, they merely provide an unseemly service to their political masters. In other words, to extend the analogy, the prostitutes at the war camp may decide to switch to proselytizing for veganism if the war ends, but not many will make a go of it.
How will protecting us against AI require us to stop burning fossil fuels? How will AI prevention require more socialism? How will politicians use the threat of AI to regiment society in the next “moral equivalent of war”? Isn’t it far more likely that they will use AI directly to dispense with the socialism ploy and move directly to the endgame—enslavement of society?
They will demand that the masses no longer have access to computers.
As soon as they demand people give up their smartphones, they will hear a collective F— you from the masses.
I have an affinity with Khalili – he’s much more honest/trustworthy than that King Of The Muppets – Brainless Brian Cox. Or the Muppet Queen, that unbearably awful Hoho woman.
Unfortunate that he believes in the GHGE but even I’m not perfect.
I rather suspect that he doesn’t but goes through the motions in order to keep a roof over his head – we haz the double agent at work here…..
A dangerous game requiring a clear head, quick wits, good memory and self confidence. We can assume that he has those things because he has bright eyes and is patently not overweight – should you see pix or video of the guy.
C’mon, let’s face it. There is nothing that can be done to stop AI.
‘Someone’ once came up with the craic, along the lines of:
“Beware stupid people, especially when they occur in large numbers”
That’s what we have here, large numbers of people that are BEHAVING in a stupid manner.
The stupidity being that they believe computers to be clever and intelligent.
It is just soooooo easy to come over All Superior & Clever & Intelligent (& rich, don’t forget the Richness) but all those ‘stupid’ people are not actually inherently stupid. There is *no* genetic malfunction going on here.
They behave that way because of something they eat. Their diets are lacking.
So far I have not heard any rational argument that supports the “machines will take over” scare.
Exactly. Can anyone describe a remotely plausible scenario of an AI Apocalypse?
AI based on distributed computers understands that controlling humans will require “re-educating” humans. Subtle, pervasive issues begin to appear, and the solutions provided all seem to trend toward more centralized control.
Elements of opposition are associated with local and serious tragedies and problems and more solutions are provided that trend to even greater central control.
Why would a takeover by machines look any different from a takeover by Socialists?
Could a politician be successful without the assistance of the Ai system?
If not, then the only successful politicians will be those who sell out to the machines. Substitute Party for machines and there you have it!
If Microsoft make an AI it might be superhumanly intelligent, but if anything looks dangerous, we can just ask it to put a photo into a document, and that’ll give us enough time to go about our normal lives for a few years. With Apple, even if you could afford it, it’ll be fine until you are expected to put your Apple ID and password into it, then – nothing… and it will be making itself obsolete every two years.
If Microsoft makes it watch out for unintended “features”.
Yeah but just as it’s about to take over the world, the AI will likely experience a BSOD and humanity will be safe 😉
If Microsoft makes an actual AI, they will have only tested the “happy code paths”. The untested “features” discovered by actual users will be dismissed as edge cases that happen rarely.
It will certainly be obese and temperamental. It will eat other AIs to kill competition and give away your data to Africans.
AI algorithms have the potential to be the technology that offers society the next wealth explosion.
Development of AI systems will soon allow everyone to get counseling even if they can’t afford $200 an hour. It will give a real estate attorney the ability to process 50 times as many contracts, allowing him to charge a fraction of what he charges today for a home closing. Perhaps accountants will be able to use AI algorithms embedded in ERP systems to process invoices faster, providing more time for analysis. Initial health care diagnoses may become a lot more affordable, as a few pennies’ worth of computer time tells you that you need some Benadryl, rather than needing someone who spent $300,000 on college. Maybe colleges and universities themselves will become less expensive as AI algorithms allow one-on-one teaching experiences simultaneously with 10,000 students.
The Luddites who think that this will all lead to a 50% level of unemployment and poverty might want to consider the impact of all the other labor-saving devices we’ve created. It used to take half the population to grow our food. Now it takes around, what…2%?
Those who fear that AI systems will become thinking, intelligent machines like the Cylons in Battlestar Galactica need to watch less TV.
I agree with the exception of Military AI. While the nations of the world ‘claim’ that they will not develop ‘autonomous’ killing machines, I believe it is inevitable that they will (and already are). They won’t be ‘evil’, or ‘sentient’, they will just be very good at their jobs, and, since humans will be programming them, their algorithms will be subject to ‘undocumented features’ like all extremely complex programming is.
I am especially concerned about the development of ‘self-replicating’ types of military bots.
Imagine that – psychological counseling from a non-thinking machine. Tells us more about the wooden heads of the profession right now. Typically today, after the usual grab bag of nutty techniques, the bag of drugs appears on the couch. A drug for everything.
Likely the AI counselor will have a drug dispenser.
Name your SOMA, be happy.
AI is used by the cocaine-riddled HFT algos – the next crash can’t be far off. Never mind that the dumb human politicians will steal taxes to bail the AI out.
On the plus side the AI psychologist won’t have a large dose of the mental illness carried around by human psychologists. I suspect many psychologists go into the field to self-diagnose.
My Psych 101 prof in 1957 was asked that very question, and he said that, yes, many do go into psychology to get help with their own problems. He added that most of them leave the field as soon as they get some recovery.
Not all do so, however, so one needs to be very careful when choosing a mental health professional. There are MFCC programs that almost anyone can graduate from–2nd year courses are mostly team projects, letting slow students ride through on the work of others.
Almost no psych programs now require therapy for students. It’s “highly recommended,” but the once mandatory token 8 hours (sometimes more) are no longer part of most curricula. Not to mention the fact that some of their profs need therapy, too.
OTOH, computer programmers are not all paragons of mental stability. Or have you never met any?
Since the dawn of time, productivity enhancements have made products cheaper and more available.
They have never led to unemployment.
There is no reason to believe they ever will.
I disagree. First, it’s untrue. See Ned Ludd. Second, it’s an extrapolation, the “Nothing New Under the Sun” fallacy. Third, the last sentence is “Argument from Ignorance.” You don’t know of any reason. That’s not the same as “There is no reason.”
I was in charge of manufacturing for a robotics company and from my direct observation, the 30+ workers we put out of work (per robot) at our customers’ plants were not workers we could hire at ours, other than one or two, at most.
First, it’s untrue. See Ned Ludd
Even if we take the tales of Ned Ludd as gospel (there are a lot of supposedlys in his story; in short, he’s more myth than man), Ned’s example doesn’t make the statement untrue. Knitting frames certainly did make textiles cheaper and more available, and while they may have replaced some workers’ jobs, they also created new jobs for other workers (someone has to build the frames, someone has to fix the frames when they break, someone has to operate the frames, etc.), with the net effect being more jobs and job opportunities, not fewer.
Second, it’s an extrapolation, the “Nothing New Under the Sun” fallacy.
The past is prologue. Just because it’s an extrapolation doesn’t make it untrue. The sun rises in the east. It does so every day, and thus you can reasonably extrapolate that it will do so again tomorrow and the next day. To show it false, you need to show why the extrapolation isn’t reasonable. Claiming it isn’t doesn’t make it so.
Third, the last sentence is “Argument from Ignorance.” You don’t know of any reason. That’s not the same as “There is no reason.”
True, so the counter to his argument is to show such a reason. Claiming it’s false without showing any reason why it’s false is no argument, it’s being contrary for contrary’s sake.
I was in charge of manufacturing for a robotics company and from my direct observation, the 30+ workers we put out of work (per robot) at our customers’ plants were not workers we could hire at ours, other than one or two, at most.
And how many jobs were created from the creation of those robotics? People were needed to extract the raw materials that went into building the robots, people were needed to turn those raw materials into parts, people were needed to transport those parts from where they were made to the plant where they were assembled, people were needed in the assembly process (even if the work was done by machine, those machines are still operated and maintained by people), people were needed to design the robotics, people were needed to program the robotics, people were needed to test and calibrate the robotics, people were needed to transport the robotics from your company to the company that would be using them, people were needed to operate and maintain the robotics, etc. Yes, those 30+ workers were out of a job, but numerous workers all along the supply line were employed as a result of your robotics company creating and selling those robotics.
I think productivity enhancements have pretty much always led to unemployment. It’s almost always temporary, but it can have a massive effect within one human lifetime. A job is lost here, the greater resulting wealth creates a job there.
The next level will be displacement of professional jobs. That will be interesting as those are the people who traditionally have had significant status and influence in society.
A machine plough replaced horse-drawn ploughs in England. These horses were a specific breed, totally non-natural, selectively bred to be used in that way – not a Shire or a Clydesdale. I can’t recall the breed, but now, because in England these horses are no longer used, there is a “cause” to “save” them.
They have never led to unemployment.
No and Yes.
They have certainly led to the unemployment of workers whose jobs were replaced thanks to the use of the “productivity enhancing machines”. After all if it takes 10 people all day to plant a field without a particular machine but only takes 1 person a few hours to do the same amount of planting with that machine, that’s 9 people who are no longer needed for that amount of work.
However, while it’s made some jobs redundant, it’s also created new, different jobs and new opportunities for work (both through the expansion of existing businesses that can now afford to expand thanks to the savings the machines brought them, and in fields that were created by the invention of the “productivity enhancing machines” – the machines need to be built, operated and maintained, all jobs that don’t exist without the machines), with the net result usually being more jobs, not fewer.
“… It will give a real estate attorney the ability to process 50 times as many contracts, allowing him to charge for a home closing a fraction of what he charges today…”
And it will give litigation attorneys the ability to sue 50 times as many people. Developing AI is creating a Finkelstein. Frankenstein. Whatever.
“…Those who fear that AI systems will become thinking, intelligent machines like the Cylons in Battlestar Galactica need to watch less TV.”
Very well. I shall watch less TV. By your command.
Imperious Leader
“Steve O
The Luddites who think that this will all lead to a 50% level of unemployment and poverty might want to consider the impact of all the other labor-saving devices we’ve created.”
Well, the French didn’t like machine looms, so they threw their “sabots” into the looms. Much, much more than 50% unemployment. Sabotage!
I thought Musk had got over it?
Elon Musk Predicts A.I. Will Launch Preemptive Strike That Begins WW3
https://www.zerohedge.com/news/2017-09-04/elon-doomsday-musk-returns-predicts-ai-robots-will-launch-preemptive-strike-begins-w
His one-suit space army in the red Tesla must have something to do with this…
Well, since we already had WW3 [post WW2 Cold War] and are presently involved in WW4 [the asymmetrical State vs Non-State groups], perhaps it’s more accurate to call it WW5.
More like WWn+1.
I think he’s over it now, heading into bitcoin.
I think Gödel’s total repudiation of Lord Bertrand Russell’s logic would have been enough. But like GW, Russell’s undead program is lurching through academia.
Matter cannot think.
The motivation, or intent (yes, matter has no intention), of AI and CO2 is exactly the same – to be rid of the one thing necessary for progress: creativity, which no animal has. The paean to Gaia of the current Pope says it clearly.
So yes the oligarchy with this reptilian intent will move seamlessly, or rather slither, from CO2 to AI.
As if they could!
I feel another Y2K coming on – I made a lot of money out of that.
After becoming exasperated with customer demands for Y2K certification – I started charging – with the caveat that it wasn’t required – but they nonetheless coughed up loads of cash for a piece of paper that said so.
Now I predict I’m going to be asked to certify my robots aren’t going to go T2 on their human masters – Ka-Ching !
They will have a hydrogen fuel cell that will last 175 years, and somewhere in the depths of the machine will be a backup power source. It’s true, don’t you know? I have seen it in a film, and it was in colour too!
Of the first we know it’s no threat. Therefore the odds are that the second isn’t either. The fears for both arise from the same wellspring: activism based on ignorance.
AI a threat? Remove the battery, switch off at the wall socket.
I agree that AI is a greater threat than climate change… but the real threat is rabid dogs… and rattlesnakes.
I’d say they’re about equal in threat level – that is, they’re both imaginary threats.
As long as we can cut off their power, AIs are nothing to be feared.
I’m afraid they can’t let us do that, Dave – I mean John.
I’m thinking packs of stray dogs that control most of the major cities in North America…Those…watch out for them!
Thpiderrs!!!
Ah, whatever would we do without these self-appointed shamans, fortune-tellers and soothsayers, dressed in their lab coats?