Guest essay by Eric Worrall
Start with an interesting scientific paper which explores the dynamics of mass extinction, then weave it into a climate horror story full of scary robots, climate catastrophe, and the end of mankind.
According to the Washington Post:
The strange link between global climate change and the rise of the robots
We’ve already heard of all the nasty consequences that could occur if the pace of global climate change doesn’t abate by the year 2050 — we could see wars over water, massive food scarcity, and the extinction of once populous species. Now add to the mix a potentially new wrinkle on the abrupt and irreversible changes – superintelligent robots would be just about ready to take over from humanity in the event of any mass extinction event impacting the planet.
In fact, according to a mind-blowing research paper published in mid-August by computer science researchers Joel Lehman and Risto Miikkulainen, robots would quickly evolve in the event of any mass extinction (defined as the loss of at least 75 percent of the species on the planet), something that’s already happened five times before in the past.
In a survival of the fittest contest in which humans and robots start at zero (which is what we’re really talking about with a mass extinction event), robots would win every time. That’s because humans evolve linearly, while superintelligent robots would evolve exponentially. Simple math.
As the Washington Post admits, the “mind-blowing” paper does not mention climate change or global warming, and is not even really about robots. The paper is a fascinating attempt to use evolutionary computer models, based on the NEAT system developed by my favourite AI researcher Ken Stanley, to explore what happens when a “mass extinction event” abruptly empties a lot of ecological niches. The conclusion, unsurprisingly, is that evolution goes into overdrive – the empty ecological niches are rapidly filled by new species.
The abstract of the paper:
Extinction events impact the trajectory of biological evolution significantly. They are often viewed as upheavals to the evolutionary process. In contrast, this paper supports the hypothesis that although they are unpredictably destructive, extinction events may in the long term accelerate evolution by increasing evolvability. In particular, if extinction events extinguish indiscriminately many ways of life, indirectly they may select for the ability to expand rapidly through vacated niches. Lineages with such an ability are more likely to persist through multiple extinctions. Lending computational support for this hypothesis, this paper shows how increased evolvability will result from simulated extinction events in two computational models of evolved behavior. The conclusion is that although they are destructive in the short term, extinction events may make evolution more prolific in the long term.
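The paper’s actual experiments use NEAT-evolved neural networks, but the core idea – indiscriminate extinctions followed by selection that rewards lineages able to spread through vacated niches – can be illustrated with a much simpler toy model. Everything in the sketch below (float “genomes”, ten discrete niches, a 90 percent kill rate) is my own illustrative assumption, not the authors’ setup:

```python
import random

def evolve(generations=200, pop_size=100, extinction_every=None, seed=0):
    """Toy evolutionary model: each genome is a single float in [0, 1],
    occupying one of ten discrete 'niches'. Selection favours genomes in
    uncrowded niches; an optional extinction event periodically wipes out
    90% of the population at random, emptying most niches."""
    rng = random.Random(seed)
    pop = [rng.uniform(0, 1) for _ in range(pop_size)]
    n_niches = 10

    def niche(g):
        return min(int(g * n_niches), n_niches - 1)

    for gen in range(generations):
        # Extinction event: indiscriminately remove ~90% of the population.
        if extinction_every and gen > 0 and gen % extinction_every == 0:
            pop = rng.sample(pop, max(2, pop_size // 10))

        # Niche-based selection: fitness is inversely proportional to how
        # crowded a genome's niche is, rewarding spread into empty niches.
        counts = {}
        for g in pop:
            counts[niche(g)] = counts.get(niche(g), 0) + 1
        weights = [1.0 / counts[niche(g)] for g in pop]

        # Resample parents (restoring full population size) and mutate.
        parents = rng.choices(pop, weights=weights, k=pop_size)
        pop = [min(1.0, max(0.0, p + rng.gauss(0, 0.05))) for p in parents]

    # Return how many of the ten niches are occupied at the end.
    return len({niche(g) for g in pop})
```

Comparing `evolve(seed=1)` against `evolve(extinction_every=50, seed=1)` gives a crude feel for the hypothesis: after each wipe-out, the crowding-based selection pressure rapidly refills the vacated niches, which is the “evolution goes into overdrive” effect described above, stripped of all biological realism.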
The Washington Post article is an interesting read, but in a sense it misses its target. The article tries to weave climate fear into the rise-of-the-robots narrative, but in my opinion ends up just being a robot story. Unconstrained artificial intelligence is scary in a way warm weather can never be. I believe, as Dr. Stephen Hawking once warned, that an artificial intelligence disaster really could cause the extinction of mankind. Robots don’t have our sense of right and wrong. If you told a human-level robot intelligence to maximise shareholder profits, you would have to be very careful to instruct the robot about what it couldn’t do – about limits to behaviour which most humans take for granted. For example, the corporate profit robot would have to be explicitly told that assassinating surplus employees is not an acceptable way to minimise employee contract termination and redundancy payments. Sooner or later, someone whose job is to instruct the robots will forget to tell a robot something important.
I suspect anyone who reads to the end of the Washington Post article, or this post, is thinking far more about robots than about climate change.