British Science Association: Artificial Intelligence is a Greater Threat than Climate Change

Search Term Interest

Google search interest in Climate Change, Deep Learning, Artificial Intelligence – source Google Trends

Guest essay by Eric Worrall

As I predicted in 2017, the malevolent AI threat is rapidly moving up the ranks of candidate replacements for the failed climate change scare.

Artificial Intelligence is greater concern than climate change or terrorism, says new head of British Science Association

By Sarah Knapton, science editor
6 SEPTEMBER 2018 • 12:01AM

Artificial Intelligence is a greater concern than antibiotic resistance, climate change or terrorism for the future of Britain, the incoming president of the British Science Association has warned.

Jim Al-Khalili, Professor of physics and public engagement at the University of Surrey, said the unprecedented technological progress in AI was ‘happening too fast’ without proper scrutiny or regulation.

Prof Al-Khalili warned that the full threat to jobs and security had not been properly assessed and urged the government to urgently regulate.

Speaking at a briefing in London ahead of the British Science Festival in Hull next week, he said: “Until maybe a couple of years ago had I been asked what is the most pressing and important conversation we should be having about our future, I might have said climate change or one of the other big challenges facing humanity, such as terrorism, antimicrobial resistance, the threat of pandemics or world poverty.

“But today I am certain the most important conversation we should be having is about the future of AI. It will dominate what happens with all of these other issues, for better or for worse.”

Read more: https://www.telegraph.co.uk/science/2018/09/05/artificial-intelligence-greater-concern-climate-change-terrorism/

Artificial intelligence has a lot of potential as a replacement scare story.

  • AI directly threatens jobs and economic stability.
  • AI undermines democracy – the elite owners of powerful AIs have an unprecedented advantage over everyone else.
  • Hollywood is onboard – there are plenty of movies featuring dangerous AI adversaries out to control or destroy the world.
  • AI threatens national security – a nation whose geopolitics is advised by greater than human intelligence will have a possibly insurmountable advantage.
  • Powerful AIs may be difficult to control – humans will struggle to constrain machines more intelligent than their creators.
  • Since Artificial General Intelligence (i.e. human level AI or better than human AI) does not yet exist, researchers can make stuff up, and nobody can prove they are wrong.

Obviously it will be difficult for climate scientists to jump ship and join the AI gravy train – or will it? Plenty of climate scientists have degrees which could be stretched to cover expert sounding pontification about artificial intelligence.

My 2018 prediction – expect to see more studies in the next five years exploring the impact of AI on climate change, written by climate scientists keen to build a parallel academic track record studying artificial intelligence issues.


170 thoughts on “British Science Association: Artificial Intelligence is a Greater Threat than Climate Change”

      • No. Men controlling machines won over men not wanting machines.

        The parallel to machines controlling men is very weak.

        I really don’t see what this has to do with “progressives”.

    • Forget who pointed it out, but if there’s artificial intelligence, there’s also artificial stupidity.

      One expects the latter will far outweigh the former. Especially given modelers.

      • The technology of artificial stupidity is already well established. (My bank is using it to handle online enquiries.)

        The problems start when we create AI.

        The Matrix was supposed to be a warning, but like 1984, some ass-holes will take it as a blueprint.

  1. So there will be a silicon tax in the future, and children won’t know what carbon is? What’s the next element/molecule they can target? H2O? Oh wait!

  2. Given they know nothing, how is the government going to regulate?
    As long as AI isn’t connected to the IoT (internet of things – one of the more stupid ideas the human race has come up with) the AIs can flash their lights at us as much as they like.

    • “Mike Borgelt

      As long as AI isn’t connected to the IoT (internet of things – one of the more stupid ideas the human race has come up with) the AI’s can flash their lights at us as much as they like.”

      Along with Windows Hello!!! Sheesh!

    • One rogue AI might manage to kill some people but it would be destroyed fairly quickly.

      We just need to keep the AIs away from our nukes. 🙂

        “We just need to keep the AIs away from our nukes”

        Wouldn’t a nuke EMP be deadly to AI as well?

          • True but unless they are packing their own private shielded and independent power source, dropping nukes here, there and everywhere will take out the power grid that those computer rooms rely on to keep running.

          • I would expect any critical infrastructure would have at least two backup power systems. (Unless they were designed by TEPCO, of course.)

      • I don’t know…several people have already been killed by rogue Teslas on autopilot and Tesla still seems to be doing fairly well (unfortunately).

    • As someone who actively uses the internet of things on a daily basis for work, it is actually a great thing when you can see warning signs on customer equipment before it actually fails. This can lead to proactive maintenance by dispatching an FSE on site to remedy the situation with minimal downtime.

      • Thanks for the hearty laugh!

        Production won’t let you take the machine down no matter how much data you show them; all that matters is today’s production numbers. This is from working both as an FSE and in equipment maintenance.

        • Nah, you just trigger an “event”, like a bank in 2012, a supermarket a few weeks back, or more recently, a transport network.

        • I have been both the FSE and the one analyzing the data. Production sure will take down the machine if I can minimize downtime.

          • That’s a big “IF”. I’m sitting on a machine right now that needs a couple hundred k worth of work done to it, probably two weeks of downtime. They would rather chance it breaking, needing even more time and money put into it, than fix it now. There’s a good chance it would break badly enough that it would be cheaper to replace the whole thing, with some of the parts that are likely to break being as much as six months out. This is from a company that wants to take care of its equipment and is willing to spend the money to do it. It’s just that sales are through the roof (a good problem to have) and production feels they can’t afford the downtime.

            The only place I’ve supported that does it right is Intel. They have redundant equipment, making it easier to get downtime, plus their CMMS shuts down the equipment for scheduled maintenance. It takes an engineer to log equipment back up without the work being performed, and to do that they have to put in a reason, which is digitally signed by them. The engineer not being under production, and being personally responsible for the work not getting done, means the work actually does get done.

          • @Tim in WA
            I wish. I was the facility manager for Emcore in Somerset, NJ for a while. I set up a monthly maintenance schedule to service all the “back end” equipment that kept the semiconductor tools up and running. All was well until production demands ramped up, and production, in a meeting with the company president, said they’d rather risk a catastrophic failure than be down for one or two days per month. All the other department heads signed off on it and that’s how it went forward.

  3. “Yes but if you put stupid people in charge of the dangerous AI…”

    Not sure that it isn’t already the case. On a recent quick check, it seems that the neural nets they are using are 90s-vintage ideas with 21st-century computing power. No sign of real innovation, just grunt.
    Just because an AI can play chess and Go well doesn’t mean that it can function in the real world, which is almost infinitely more complex than the highly restricted and simply defined world of board games. The big risk is the hubris of the companies developing them.
    Neural nets are data hungry. With games they can generate lots of accurate data by trial and error. Not so in the real world.

    Counter-step #1: starve them of data.

      • What concerns me as a Structural Engineer is: will an AI system be able to do a feeble, weak, irrelevant human thing, like saying, “Now, just hang on a minute!”?

        • Here’s my biggest question: will we humans stop saying that in the face of the certainty of that computing power?
          We are barely managing to do it on climate change, which is essentially a predetermined computer output.
          An AI system is just an old input-based computational result with the additional feature of having been processed by an “intelligence”. How are we to know if the intelligence has predetermined biases or not?
          So what happens when we have been taught to accept its decisions automatically and without question?

    • Computers playing chess and go is not a sign of intelligence. It’s just pattern matching at really, really high speeds.

  4. Mr Al-Khalili, or may I call you Jim in today’s egalitarian world, I congratulate you on recognizing that AI is a greater threat than Climate Change. How perceptive. May I add a few other things that are also a greater threat than Climate Change: the EU, Greenpeace, Jackboots (see EU), WWF, Architects of the Adjustocene, the BBC (who no doubt pay you royally Jim), FOE, the British Met Office, some Universities (see Pen State), plastic straws (I kid you not), crisp packets, ants and the mad nerve agent warrior in Moscow.

      • What if the animals are incorporated into the WWF? We could have the World Wildlife Wrestling Federation! On steroids!

    • I would agree with your list. I would add another to the list: climate change – at least the mild global warming we have enjoyed since 1900 – is considerably less of a threat than kittens.
      It’s a pity that Al-Khalili even thinks that climate change is a threat at all, but then he does work for the BBC.
      Chris

  5. Finally they found a threat more insidious and less visible than trace atmospheric gases.

    Saul Alinsky’s ninth “Rule for Radicals”

    The ninth rule: The threat is usually more terrifying than the thing itself.

    “The whole aim of practical politics”, wrote HL Mencken, “is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, most of them imaginary.”

    As one who designs electronic equipment for a living, I am not in the slightest afraid that the robots will take over the world. The robots are made by humans. None can pass the Turing test without faking it, let alone “think for themselves”. And it is extremely unlikely that they ever will.

    If you program an “AI” to modify itself the laws of entropy will rapidly assert themselves. If you don’t, the “AI” can only do what it was programmed to do. The expression “Artificial Intelligence”, like “Military Intelligence” and “Government Assistance”, is an extremely ironic oxymoron.

      • Take any program, simple or large, and change one random byte. If the byte is in shadowed (inaccessible) code there will be no result, but if it is in functioning code the system will malfunction or crash very rapidly, descending into the higher-entropy chaotic state that the whole universe tends toward naturally.

        Only the theory of evolution bucks this trend, a good argument against the theory of evolution until we find out how it actually works.

        Faults and mutations degrade systems; only life can create order from chaos, for some reason.

        • Evolution doesn’t purposefully “improve” anything. It merely adapts and changes and experiments. Most experiments are failures, but because evolution doesn’t know what’s ahead it sometimes gets lucky with its experiments.
          Success is the result when a particular effort matches up nicely with the opportunities available. In the AI environment, this would mean that multiple iterations of AI can be built but someone or something will have to decide which are beneficial.
          That’s where the scary part comes in.

        • “only life can create order from chaos for some reason”

          The reason is that living organisms have mechanisms to capture and harness external energy to locally reverse the Second Law of Thermodynamics. But that still begs the question of how those mechanisms came into existence from chaos.

          • There is nothing unique about it; the universe has been doing it, solar systems, for example, were forming long before we know life even existed. If it didn’t, there wouldn’t be patterns, would there? 🙂

        • The surprise in the Intel division bug was that some parts of the division table are seldom used, something not many people think about (all parts of the table are provably necessary for correct division, and most people don’t think about it any further).

        • The problem with your example is that the system is not challenged and is nothing like life; it’s like discussing evolution without a predator, where the animal that sits around and becomes the fattest wins 🙂

          If you want to compare with computers, let’s introduce hackers trying to use the computer system for damage, much of which involves getting the attack method to replicate. What you see is a pattern emerge: the more sophisticated the computers become, the more sophisticated the attacks on them, and yes, in theory the hackers could win.

          The key point of evolution is a contest, with the survivor always having to find a way to avoid extinction by an ever-changing opponent.
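The “change one random byte” claim a few comments up is easy to try in miniature. Below is a toy sketch, with the simplifying assumption that we mutate one source character of a tiny Python function rather than a raw machine-code byte (mutating live machine code could crash the host process); the function and trial count are arbitrary illustrative choices:

```python
import random

# Toy test of the "change one random byte" idea: flip one random
# character in a tiny program's source and count how often the mutant
# still compiles AND still computes the same answer as the original.
SOURCE = "def f(x):\n    return 2 * x + 1\n"

def mutate(src, rng):
    # Replace one character, chosen at random, with a random printable one.
    i = rng.randrange(len(src))
    return src[:i] + chr(rng.randrange(32, 127)) + src[i + 1:]

rng = random.Random(0)
trials, still_ok = 200, 0
for _ in range(trials):
    namespace = {}
    try:
        exec(mutate(SOURCE, rng), namespace)  # may raise SyntaxError etc.
        if namespace["f"](3) == 7:            # same behaviour as original?
            still_ok += 1
    except Exception:
        pass                                  # mutant is broken
print(f"{still_ok} of {trials} mutants still work")
```

Nearly every mutant either fails to compile or computes the wrong answer; the rare survivors are mostly cases where the random replacement happened to be harmless (e.g. a character replaced by itself, or a change in trailing whitespace).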

    • Exactly. Further, a motivated electrical engineer will always be able to trash the AI. They just need to get physical access and be willing to break something.

    • ‘ The expression “Artificial Intelligence”, like “Military Intelligence” and “Government Assistance”, is an extremely ironic oxymoron.’

      I’ve been saying that for years, but I usually add “like saxophone music.” AI is, at best, emulated intelligence.

    • NeuroEvolution of Augmenting Topologies (NEAT) combines neural net technology with evolution, and is uniquely well adapted to solving realtime control problems. Until quite recently this wasn’t thought to be possible, but a brilliant AI researcher who lives in Florida figured out a way to make it work.

      • If I understand this correctly, neuroevolution is still purely physical, while real neural networks are a complex of physical and chemical processes. I wonder if we are still missing something, as the argument often heard is that we can match or exceed the physical interactions, thereby rendering human insight less significant.
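The evolutionary core behind approaches like NEAT can be sketched in a few lines. The toy below is a deliberate simplification, not real NEAT: it evolves only the weights of a fixed 2-2-1 network to learn XOR, whereas actual NEAT also mutates the network topology and speciates the population; all parameters here are illustrative assumptions:

```python
import math
import random

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, a, b):
    # Fixed 2-2-1 network: 9 weights (2x2 input->hidden plus 2 hidden
    # biases, then 2 hidden->output plus 1 output bias).
    h0 = sigmoid(w[0] * a + w[1] * b + w[2])
    h1 = sigmoid(w[3] * a + w[4] * b + w[5])
    return sigmoid(w[6] * h0 + w[7] * h1 + w[8])

CASES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # XOR truth table

def fitness(w):
    # Negative squared error over the four XOR cases (0 is perfect).
    return -sum((forward(w, a, b) - t) ** 2 for a, b, t in CASES)

rng = random.Random(0)
pop = [[rng.uniform(-2, 2) for _ in range(9)] for _ in range(50)]
for generation in range(300):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                      # keep the 10 fittest genomes
    pop = elite + [                       # refill with mutated copies
        [g + rng.gauss(0, 0.5) for g in rng.choice(elite)]
        for _ in range(40)
    ]
best = max(pop, key=fitness)
print([round(forward(best, a, b)) for a, b, _ in CASES])  # hopefully [0, 1, 1, 0]
```

Selection plus random weight mutation is all the “learning” there is; no gradients are computed, which is why the same loop works for control problems where no training signal exists beyond a fitness score.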

    • The problem is not that it won’t be soon, but that when it happens it will advance extremely rapidly.

      Biological evolution takes time, a long time, a long, long time.
      This constraint will not exist for AI; once it is able to think, even a little, and ‘positively’ evolve, it will do so in the time it takes a human to eat lunch or have coffee.

        • At least in the original two movies, Skynet did not become self-aware, further evolve, and then become dangerous. It was dangerous from the moment it was self-aware. I don’t recall what happened in the later movies, as they were that bad.

          Besides, AI does not need to think like us, if that is even possible. It just needs to out-think us. With a fast-evolving AI, it will be difficult to stop and harder still to predict.

          Given the damage Microsoft does to my work day with its various lost documents, crashes, etc., just imagine what an actually malevolent AI could do, as compared to the accidentally malevolent MS.

          • There is no requirement for it to be self-aware. Existing computer viruses have no intelligence at all; they simply follow a program to do maximum damage. It’s probably scary what the tech-warfare units of all the major countries have at their disposal.

      • Why do you assume that an electronic intelligence will think and evolve so much faster than it’s biological creators, especially if its neural networks are on the same level of complexity? Computers are very fast now at specialized tasks because they are designed for those types of tasks, whereas the human brain is very generalized and flexible. If you create (probably grow) an electronic “brain” to match this generalization and flexibility, it may not be able to think any faster than humans. Maybe there’s a natural limit on thinking speed based on this property of generalization.

        • AI will certainly be able to out-think humans at unimaginable speed. But will it be able to use apostrophes correctly?

  6. We really don’t have a handle on human intelligence. I would say that the primer is The Master and His Emissary by Iain McGilchrist. The thing I found the most compelling is what happens when the right hemisphere of the brain is disabled. That leaves the left hemisphere to do all the thinking.

    The left hemisphere is the one that has most of the language skills. It is the analytical half that can do logic. People with only the left hemisphere have two characteristics:
    1 – They will believe anything as long as it is logically self-consistent.
    2 – They will take on ridiculous projects and be disappointed with the results.

    Combine the above with Philip Tetlock’s demonstration that experts are no better at predicting future events than dart-throwing monkeys, and we come to the conclusion that what most people think of as reasoning is highly overrated.

    There seem to be hard limits on what AI can do. The world is naturally chaotic and this will lead every AI to eventually make a disastrous mistake that will kill its credibility or its makers.

    • Speaking of ‘dart-throwing monkeys,’ The New York Times is an anagram of ‘The monkeys write.’

  7. I may well be wrong, but I think pollution will take over from climate change – after all, it begs government to control the population. PM2.5 is the new CO2.

    • It’s called PM2.5 so that it makes a nice, trendy name that sounds technical to people/politicians who don’t have a clue what they’re talking about! You know, like Greenpeace, WWF, Enemies of the Earth, the EU, etc. 😉

  8. “Obviously it will be difficult for climate scientists to jump ship and join the AI gravy train – or will it? ”

    The climate modellers have it sussed.

  9. This AI Calculates at the Speed of Light
    Signals in the brain hop from neuron to neuron at a speed of roughly 390 feet per second. Light, on the other hand, travels 186,282 miles in a second. Imagine the possibilities if we were that quick-witted.
    Researchers from UCLA on Thursday revealed a 3D-printed, optical neural network that allows computers to solve complex mathematical computations at the speed of light.

    In other words, we don’t stand a chance.

    http://blogs.discovermagazine.com/d-brief/2018/07/26/artificial-intelligence-speed-of-light-neural-network/#.W5EEYjkRXIU

    • They obviously have not a clue. Just look at the data rates for an autonomous auto, storage, etc., all with picosecond clocks, and wonder how our so-called “neurons” make driving plain sailing.

    • In other words, we don’t stand a chance.

      As long as the AI needs a power source, there’s no problem. Just kill the power.

      Reminds me of whenever a TV show or movie has a person behind the wheel of an accelerating car with no brakes and they can’t figure out how to stop it. Just turn the engine off, problem solved as without power the car will eventually slow down and come to a stop. Same here, turn off the power and the “dangerous” AI is out of business.

  10. Malevolent AI might be the next big thing after CACC, but I don’t see it being a comprehensive replacement. Maybe I lack the imagination to see the pathway.

    Definitely it makes sense that many of the useless “scientist” alarmists will need a new gig when we return to a cooling trend over the next three decades. A few might find a niche sounding the alarm against “malevolent” AI.

    My main reason to find this unconvincing is that climate “scientists” don’t drive the money machine. They swarm around the trough devouring the swill put out by politicians. Climate “scientists” are like the prostitutes clustered around an army in a war zone. If there wasn’t a demand for them driven by cynical politicians, they would not be able to ply their trade. They do not produce anything useful, they merely provide an unseemly service to their political masters. In other words, to extend the analogy, the prostitutes at the war camp may decide to switch to proselytizing for veganism if the war ends, but not many will make a go of it.

    How will protecting us against AI require us to stop burning fossil fuels? How will AI prevention require more socialism? How will politicians use the threat of AI to regiment society in the next “moral equivalent of war”? Isn’t it far more likely that they will use AI directly to dispense with the socialism ploy and move directly to the endgame—enslavement of society?

  11. I have an affinity with Khalili – he’s much more honest/trustworthy than that King Of The Muppets – Brainless Brian Cox. Or the Muppet Queen, that unbearably awful Hoho woman.
    Unfortunate that he believes in the GHGE but even I’m not perfect.
    I rather suspect that he doesn’t but goes through the motions in order to keep a roof over his head – we haz the double agent at work here…..
    A dangerous game requiring a clear head, quick wits, good memory and self confidence. We can assume that he has those things because he has bright eyes and is patently not overweight – should you see pix or video of the guy.

    C’mon, lets face it. There is nothing that can be done to stop AI.
    ‘Someone’ once came up with the craic, along the lines of:
    “Beware stupid people, especially when they occur in large numbers”

    That’s what we have here, large numbers of people that are BEHAVING in a stupid manner.
    The stupidity being that they believe computers to be clever and intelligent.

    It is just soooooo easy to come over All Superior & Clever & Intelligent (& rich, don’t forget the Richness) but all those ‘stupid’ people are not actually inherently stupid. There is *no* genetic malfunction going on here.

    They behave that way because of something they eat. Their diets are lacking.

  12. So far I have not heard any rational argument that supports the “machines will take over” scare.

      • AI based on distributed computers understands that controlling humans will require “re-educating” humans. Subtle, pervasive issues begin to appear, each with a solution provided, and all of the solutions seem to trend toward more centralized control.
        Elements of opposition are associated with local and serious tragedies and problems, and more solutions are provided that trend toward even greater central control.
        Why would a takeover by machines look any different from a takeover by Socialists?
        Could a politician be successful without the assistance of the AI system?
        If not, then the only successful politicians will be those who sell out to the machines. Substitute Party for machines and there you have it!

  13. If Microsoft makes an AI it might be superhumanly intelligent, but if anything looks dangerous, we can just ask it to put a photo into a document, and that’ll give us enough time to go about our normal lives for a few years. With Apple, even if you could afford it, it’ll be fine until you are expected to put your Apple ID and password into it, and then, nothing… and it will be making itself obsolete every two years.

      • Yeah but just as it’s about to take over the world, the AI will likely experience a BSOD and humanity will be safe 😉

    • If Microsoft makes an actual AI, they will have only tested the “happy code paths”. The untested “features” discovered by actual users will be dismissed as edge cases that happen rarely.

      • It will certainly be obese and temperamental. It will eat other AIs to kill competition and give away your data to Africans.

  14. AI algorithms have the potential to be the technology that offers society the next wealth explosion.

    Development of AI systems will soon allow everyone to get counseling even if they can’t afford $200 an hour. It will give a real estate attorney the ability to process 50 times as many contracts, allowing him to charge for a home closing a fraction of what he charges today. Perhaps accountants will be able to use AI algorithms embedded in ERP systems to process invoices faster, providing more time for analysis. Initial health care diagnoses may become a lot more affordable, as a few pennies worth of computer time tell you that you need some Benadryl, rather than needing someone who spent $300,000 on college. Maybe colleges and universities themselves will become less expensive as AI algorithms allow one-on-one teaching experiences simultaneously with 10,000 students.

    The Luddites who think that this will all lead to a 50% level of unemployment and poverty might want to consider the impact of all the other labor-saving devices we’ve created. It used to take half the population to grow our food. Now it takes around, what…2%?

    Those who fear that AI systems will become thinking, intelligent machines like the Cylons in Battlestar Galactica need to watch less TV.

    • I agree with the exception of Military AI. While the nations of the world ‘claim’ that they will not develop ‘autonomous’ killing machines, I believe it is inevitable that they will (and already are). They won’t be ‘evil’, or ‘sentient’, they will just be very good at their jobs, and, since humans will be programming them, their algorithms will be subject to ‘undocumented features’ like all extremely complex programming is.

      I am especially concerned about the development of ‘self-replicating’ types of military bots.

    • Imagine that – psychological counseling from a non-thinking machine. Tells us more about the wooden heads of the profession right now. Typically today, after the usual grab bag of nutty techniques, the bag of drugs appears on the couch. A drug for everything.
      Likely the AI counselor will have a drug dispenser.
      Name your SOMA, be happy.
      AI is used by the cocaine-riddled HFT algos; the next crash can’t be far off. Never mind that the dumb human politicians will steal taxes to bail the AI out.

      • On the plus side the AI psychologist won’t have a large dose of the mental illness carried around by human psychologists. I suspect many psychologists go into the field to self-diagnose.

        • My Psych 101 prof in 1957 was asked that very question, and he said that, yes, many do go into psychology to get help with their own problems. He added that most of them leave the field as soon as they get some recovery.

          Not all do so, however, so one needs to be very careful when choosing a mental health professional. There are MFCC programs that almost anyone can graduate from–2nd year courses are mostly team projects, letting slow students ride through on the work of others.

          Almost no psych programs now require therapy for students. It’s “highly recommended,” but the once mandatory token 8 hours (sometimes more) are no longer part of most curricula. Not to mention the fact that some of their profs need therapy, too.

          OTOH, computer programmers are not all paragons of mental stability. Or have you never met any?

    • Since the dawn of time, productivity enhancements have made products cheaper and more available.
      They have never led to unemployment.
      There is no reason to believe they ever will.

      • I disagree. First, it’s untrue. See Ned Ludd. Second, it’s an extrapolation, the “Nothing New Under the Sun” fallacy. Third, the last sentence is “Argument from Ignorance.” You don’t know of any reason. That’s not the same as “There is no reason.”

        I was in charge of manufacturing for a robotics company and from my direct observation, the 30+ workers we put out of work (per robot) at our customers’ plants were not workers we could hire at ours, other than one or two, at most.

        • First, it’s untrue. See Ned Ludd

          Even if we take the tales of Ned Ludd as gospel (there are a lot of “supposedly”s in his story; in short, he’s more myth than man), Ned’s example doesn’t make the statement untrue. Knitting frames certainly did make textiles cheaper and more available, and while they may have replaced some workers’ jobs, they also created new jobs for other workers (someone has to build the frames, someone has to fix the frames when they break, someone has to operate the frames, etc.), with the net effect being more jobs and job opportunities, not fewer.

          Second, it’s an extrapolation, the “Nothing New Under the Sun” fallacy.

          The past is prologue. Just because it’s an extrapolation doesn’t make it untrue. The sun rises in the east; it does so every day, and thus you can reasonably extrapolate that it will do so again tomorrow and the next day. To show it false, you need to show why the extrapolation isn’t reasonable. Claiming it isn’t doesn’t make it so.

          Third, the last sentence is “Argument from Ignorance.” You don’t know of any reason. That’s not the same as “There is no reason.”

          True, so the counter to his argument is to show such a reason. Claiming it’s false without showing any reason why it’s false is no argument, it’s being contrary for contrary’s sake.

          I was in charge of manufacturing for a robotics company and from my direct observation, the 30+ workers we put out of work (per robot) at our customers’ plants were not workers we could hire at ours, other than one or two, at most.

          And how many jobs were created from the creation of those robotics? People were needed to extract the raw materials that went into building the robots, people were needed to turn those raw materials into parts, people were needed to transport those parts from where they were made to the plant where they were assembled, people were needed in the assembly process (even if the work was done by machine, those machines are still operated and maintained by people), and people were needed to design the robotics, to program them, to test and calibrate them, to transport them from your company to the company that would be using them, and to operate and maintain them. Yes, those 30+ workers were out of a job, but numerous workers all along the supply line were employed as a result of your robotics company creating and selling those robots.

      • I think productivity enhancements have pretty much always led to unemployment. It’s almost always temporary, but it can have a massive effect within one human lifetime. A job is lost here, the greater resulting wealth creates a job there.
        The next level will be displacement of professional jobs. That will be interesting as those are the people who traditionally have had significant status and influence in society.

      • A machine plough replaced horse-drawn ploughs in England. These horses were a specific breed, totally non-natural, and selectively bred to be used in that way, not a Shire or Clydesdale. I can’t recall the breed, but now, because in England these horses are no longer used, there is a “cause” to “save” them.

      • “They have never led to unemployment.”

        No and Yes.

        They have certainly led to the unemployment of workers whose jobs were replaced by the “productivity enhancing machines”. After all, if it takes 10 people all day to plant a field without a particular machine, but only takes 1 person a few hours to do the same amount of planting with that machine, that’s 9 people who are no longer needed for that amount of work.

        However, while it’s made some jobs redundant, it’s also created new, different jobs and new opportunities for work, both through the expansion of existing businesses that can now afford to grow thanks to the savings the machines brought them, and in fields created by the invention of the machines themselves (the machines need to be built, operated and maintained, all jobs that don’t exist without them). The net result is usually more jobs, not fewer.

    • “… It will give a real estate attorney the ability to process 50 times as many contracts, allowing him to charge for a home closing a fraction of what he charges today…”

      And it will give litigation attorneys the ability to sue 50 times as many people. Developing AI is creating a Finkelstein. Frankenstein. Whatever.

      “…Those who fear that AI systems will become thinking, intelligent machines like the Cylons in Battlestar Galactica need to watch less TV.”

      Very well. I shall watch less TV. By your command.

    • “Steve O

      The Luddites who think that this will all lead to a 50% level of unemployment and poverty might want to consider the impact of all the other labor-saving devices we’ve created.”

      Well, the French didn’t like machine looms, so they threw their “sabots” into the looms. Much, much more than 50% unemployment. Sabotage!

  15. I think Gödel’s total repudiation of Bertrand Russell’s logic would have been enough. But, like GW, Russell’s undead program is lurching through academia.
    Matter cannot think.
    The motivation and intent (yes, matter has no intention) of AI and of CO2 is exactly the same: to be rid of the one thing necessary for progress, creativity, which no animal has. The current Pope’s paean to Gaia says it clearly.
    So yes, the oligarchy with this reptilian intent will move seamlessly, or rather slither, from CO2 to AI.
    As if they could!

  16. I feel another Y2K coming on – I made a lot of money out of that.

    After becoming exasperated with customer demands for Y2K certification, I started charging, with the caveat that it wasn’t required. They nonetheless coughed up loads of cash for a piece of paper that said so.

    Now I predict I’m going to be asked to certify my robots aren’t going to go T2 on their human masters – Ka-Ching !

    • They will have a hydrogen fuel cell that will last 175 years, and somewhere in the depths of the machine will be a backup power source. It’s true don’t you know? I have seen it in a film, and it was in colour too!

  17. Of the first we know it’s no threat. Therefore the odds are that the second isn’t either. The fears for both arise from the same wellspring: activism based on ignorance.

    AI a threat? Remove the battery, switch off at the wall socket.

  18. I agree that AI is a greater threat than climate change… but the real threat is rabid dogs… and rattlesnakes.

  19. Ah, whatever would we do without these self-appointed shamans, fortune-tellers and soothsayers, dressed in their lab coats?

  20. The elephant in the room of course, is the threat of a space alien invasion. Now that’s scary. Ack-ack-ack-ack- ack! (That’s Boo! in space alienese).

  21. This will play nicely to the deeply paranoid “my cat/vacuum cleaner/ television is planning to kill me/us” lobby. If it means they give us a rest over climate I’m all for encouraging them.

    But an apposite warning from Nietzsche: ‘The disciple…who has no eyes for the weakness of the doctrine, the religion, and so forth, dazzled by the aspect of the master and by his reverence for him, has on that account usually more power than the master himself. Without blind disciples the influence of a man and his work has never yet become great. To help a doctrine to victory often means only so to mix it with stupidity that the weight of the latter carries off also the victory of the former’.

    • If a computer is evil and smart, that might be one better than following a human who is evil and dumb. Al Gore has followers. Michael Mann has followers. Jim Jones had followers.
      Should I go on?

  22. I think that maybe the uptick in searches could be more related to the TV shows Westworld and Humans, both of which portray a near future with sentient AI.

    • Until they realize the AI algorithms work on logic, not emotion, and thus are unlikely to vote for anything Progressives believe in.

  23. I have to laugh … it’s a computer program … the only way it’s smarter than a human is if you define smart as how much data you can store and recall … but that’s not intelligence, that’s just a trained monkey …

    • Modern AI systems aren’t conventional computer programs simply accessing a set of databases. Machine learning + neural nets + massively parallel processing capabilities creates adaptive scenarios that can and do diverge far from the human programmer’s original intent and design.

    • Humans store less data and recall it less well. Then they apply a glitchy set of prejudices and learning and preferences and misunderstandings to the incomplete and poorly recalled data to generate a solution.
      Once the computer can apply a version of that seriously flawed algorithm to its superior data and recall it has humans beat hands down.
      Question is, who controls the computer?

  24. Of course, “climate change” is no more of a threat now than it ever was, but artificial intelligence a threat? Lack of NATURAL intelligence is the threat. Even now we have a large part of the population that would panic & not know what to do without their iPhones or whatever.

  25. Threats mentioned: AI, Climate Change, Terrorism, Pandemics, Antimicrobial resistance, global poverty.

    Those threats obviously have a widely varying death toll in today’s world. Conspicuous by its absence is any mention of government as a threat, though since 1900 bad governance has caused more destruction than all other issues combined.

    • Bingo! 1913 was a particularly bad year for limited government and freedom. The U.S. 16th and 17th Amendments and the Federal Reserve Act were all passed in 1913.

  26. The problem with trying to regulate AI development is that you can’t.
    Sure you can pass laws, but enforcing them is all but impossible.

  27. Computers are getting smarter all the time.
    If we ever do pass the AI threshold, we won’t realize it until decades after the fact.

  28. The greatest risk is and always has been policy overreach.

    Failure to see that only amounts to a diversion.

  29. The risk to goods-producing jobs is already here in the Made in China and Made in Mexico labels. I suppose they are worried about the risk to service jobs without saying so. I’m more worried about the decline of science at the hands of advocacy abusers than AI.

  30. AI can also cause dandruff. It’s worse than we thought. We must have a Socialist oligarchy to prevent world-wide flaking. Robust. Think of the children. Everyone who believes otherwise is a doo-doo brain. No, we don’t debate doo-doo brains. Sign the GropinShaggin Agreement! Send us grants. Heck, send me unprecedented franklins!

  31. It all depends on what motivations are programmed into AI.
    Humans aren’t motivated by intelligence, we are motivated by primal biological construction.

  32. “My 2018 prediction – expect to see more studies in the next five years exploring the impact of AI on climate change…”

    Or the opposite will happen. They will try to convince us that climate change will make AI more dangerous. I’m not sure how they will explain it, but they already seem to believe that CO2 possesses some kind of evil intelligence. It can hide in the deep oceans until ready to wreak havoc on the planet. It can selectively target poor nations and minorities. And it can choose to create heat waves or cold snaps, rain or drought, extreme weather or long periods of calm at its own pleasure. Mix intelligent CO2 with intelligent Silicon, and there is no limit to the evil the two might cook up together.

  33. In a related story, scientists at the University of Pennsylbrainia conducted a series of tests wherein the power from a fossil fuel power plant to the intelligent computer was cut.
    The resulting death of the artificial intelligence proving conclusively (97.1% certainty) that CO2 causes AI!
    103% of scientists are now calling for the unplugging of any computers smarter than they are. (100%)
    Oh yeah! No more CO2 production, too!
    Researchers at Smerkley concur.

  34. and human intelligence is the biggest threat to the global warming scamsters – I may have just made up the word scamster

  35. Even scarier is the looming threat of artificial stupidity (AS). Oh wait, that’s already here — computer climate models — AS in its nascent stages, soon to be followed by much worse.

    The only thing worse than a cyborg is a stupid cyborg — I bees bak [not as scary as the original is it?]

    • Yeah, I was thinking about a WUWT article earlier this week — about the woman lying awake at night worrying about the effect of human-induced climate change on her children’s future. And I was thinking, “Wow, you must be living an extremely comfortable life to have this worry as the major one keeping you awake at night!” The kicker is that fossil fuels enable such a level of comfort.

  36. The British Science Association itself is a greater threat to civilization than climate change.

    So is the AAAS, for that matter.

  37. I do have serious doubts about artificial intelligence, but I also see that there is not enough of the natural kind.

  38. I spend several hours a day witnessing the achievements of AI in the realm of tv broadcast captions. The ability of the AI available to such corporations as the CBC, BBC, PBS, and TV5(France) is underwhelming. Some of the failures are simply astonishing. I worry about AI from the perspective of its capability to crash and burn the world through shoddy programming and inadequate testing.

  39. In the late 19th and early 20th centuries, mechanization and industrialization threatened society: 90% of people worked on farms. What will these people do????

  40. “We have evaluated the subject which the humans call ‘climate change’ and find it to be a deception.”

    -Yes, that could be a threat.

  41. The level of stupidity and ambiguity is amazing. Firstly, there are plenty of good robots: C-3PO, R2-D2, and Baymax from Big Hero 6. The list goes on. Not to mention, General AI doesn’t even exist yet; it’s not even on a visible horizon. This could be the perfect opportunity to tax people, starting in 2020, for something that won’t exist for another 200 years. When General AI is available, I think the first order of business is to create an AI government and put all politicians out of work, thereby improving democracy and stability.
