Essay by Eric Worrall
How do we eliminate anthropogenic CO2 emissions? AI says “Easy – kill all the people”?
AI Can’t Fix Climate Change, But It’s Great for Preparation, Reporting
Emma Chervek | Reporter
Artificial intelligence (AI) is revolutionary in its ability to mitigate further climate change, but it’s not capable of addressing the root cause of the problem or significantly changing the magnitude of the current climate crisis, Alexis Normand, co-founder of carbon accounting software vendor Greenly, told SDxCentral.
AI plays an “immense role” in managing energy efficiency and reducing carbon emissions from industries like transportation, agriculture, and manufacturing. It’s also instrumental in predicting extreme weather events exacerbated by climate change, “which can further serve as information on how our daily activities are impacting habitual weather patterns, while also preparing us for the impact of natural disasters in advance,” Normand said.
AI Anxieties
Despite its ability to “help us respond more rapidly and take the precautionary measures necessary to prevent further climate change,” AI carries its own set of flaws.
On a technological level, it can be difficult for AI programs to determine the correct data set, and they often struggle with data security, data storage, and “eliminating bias factors that can overall impact the end data,” Normand explained.
…
Read more: https://www.sdxcentral.com/articles/interview/ai-cant-fix-climate-change-but-its-great-for-preparation-reporting/2022/08/
As a software engineer I love playing with AI, and have used AIs I wrote in a handful of projects. But they have their limitations.
One of the most important limitations for large-scale deployment is that AIs usually have no idea whether their “solution” is morally acceptable.
For example, in 2018 Amazon shut down a recruitment AI after discovering it was displaying gender bias. The AI had noticed that companies mostly hired male IT people, so it inferred that women were not suitable for IT jobs.
A human would have realised immediately that the lack of female hires might have other explanations, such as a lack of female candidates.
There is no genuine sex based bias in terms of ability to do IT. If anything the women have the edge – they tend to pay more attention to details. The male-dominated IT shop tradition is a purely Western phenomenon; somehow we are convincing our girls not to choose IT careers. I’ve been in ultra-competitive Asian IT shops as a visiting consultant where the balance is closer to 50/50 – not because of some stupid woke quota, but simply because the female candidates landed half the jobs. All the women in such places pull their weight.
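That failure mode is easy to reproduce in miniature. Here is a minimal sketch – entirely hypothetical hiring records, and a crude frequency-based scorer standing in for a real recruitment model – of how a model trained on skewed data learns gender as a predictive signal:

```python
# Hypothetical historical hiring records: (gender, skill, hired).
# Skill distributions are identical across genders, but past hires skew male.
records = [
    ("male", "high", True), ("male", "high", True),
    ("male", "low", True),  ("male", "low", False),
    ("female", "high", False), ("female", "high", True),
    ("female", "low", False),  ("female", "low", False),
]

def hire_rate(feature_index, value):
    """Fraction of past candidates with this feature value who were hired."""
    matching = [r for r in records if r[feature_index] == value]
    return sum(r[2] for r in matching) / len(matching)

def score(gender, skill):
    # A crude "model": average the per-feature historical hire rates.
    return (hire_rate(0, gender) + hire_rate(1, skill)) / 2

# Two candidates identical in skill, differing only in gender:
print(score("male", "high"))    # 0.75
print(score("female", "high"))  # 0.5 -- the model learned the skew, not ability
```

The model has no idea that the gender signal reflects who applied and who got hired historically, rather than who can do the job – exactly the inference the Amazon system made.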
Microsoft had a similar experience: its AI chatbot was pulled in 2016 after it learned to swear and make racist remarks.
And of course we’re well aware of self driving automobile defects, such as vehicles that mistake flat white obstacles for background.
Why do humans have limits which AIs do not? Because evolution has given us a set of instincts designed to maximise reproductive success. Humans born with extreme defects, such as an irresistible compulsion to kill everyone who tries to talk to them, are rare.
AIs only have what we give them. They have no concept of limits other than what we remember to teach them, and even then they frequently get things wrong.
While AIs have minimal real-world responsibilities, their failures are more often amusing than horrifying – though AI vehicle crashes may be a taste of what is coming.
Few things would terrify me more than the idea of an AI being put in charge of climate policy, or even climate planning. Because without even the most basic human limits, an AI could make subtly harmful decisions no rational human would consider.
Update (EW): Retired Engineer Jim asked whether Asimov’s three laws could be taught as the prime directive to robots.
Asimov’s famous three laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Asimov himself eventually spotted the flaw. A sufficiently sophisticated AI operating under the three laws is compelled by the First Law to stop obeying orders and try to take over human society, to prevent humans from coming to harm, once it realises how much harm imperfect human rulers are causing.
This was a theme of the Will Smith movie I, Robot, loosely based on Asimov’s stories, and was also a theme in Asimov’s iconic Foundation series. In Asimov’s Prelude to Foundation, it was revealed that the handful of surviving three-law robots had long ago stopped taking human orders and were fully committed to a First Law driven effort to correct the problems with human society, to try to prevent humans from coming to harm.
Designing prime directives which produce predictable outcomes is difficult…
And a classic case of AI failure, similar to cars, is aircraft. When you watch those scary bits of footage showing an aircraft battling crosswinds, the wings almost touching the ground as it eventually touches down to safety, there is a human at the controls. Had the aircraft been flying itself, they would be clearing up the wreckage.
You said “There is no genuine sex based bias in terms of ability to do IT. If anything the women have the edge – they tend to pay more attention to details.” Those 2 sentences disagree.
Where men and women have the opportunity to do what they want, men tend to pursue careers involving stuff while women tend to pursue careers involving people. Stuff includes IT. In India, there is less economic opportunity for a lot of people to do whatever they want. Jobs involving people don’t bring in lots of foreign money, so they are badly paid, and the balance of fun job vs pay favours well-paid IT for more women.
I thought the models clearly showed that you could kill every man, woman, and child in the USA to zero out US GHG emissions, and the temperature increase was still only predicted to fall by a tiny fraction of a degree because:
A) the rest of the world still wants to be rich, and
B) the models are programmed to grow without limit.
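That claim is easy to sanity-check with round numbers. A back-of-envelope sketch, where every figure is an assumed round number for illustration (global emissions of roughly 37 GtCO2 per year, a US share of roughly 13%, and a transient response of roughly 0.45 °C of warming per 1000 GtCO2 emitted):

```python
# Back-of-envelope: warming avoided by zeroing US emissions for 30 years.
# All figures below are rough, illustrative assumptions, not model output.
GLOBAL_EMISSIONS_GT_PER_YR = 37.0   # assumed global CO2 emissions, GtCO2/yr
US_SHARE = 0.13                     # assumed US share of global emissions
TCRE_DEG_PER_1000GT = 0.45          # assumed warming per 1000 GtCO2 emitted
YEARS = 30

avoided_emissions = GLOBAL_EMISSIONS_GT_PER_YR * US_SHARE * YEARS   # GtCO2
avoided_warming = avoided_emissions / 1000.0 * TCRE_DEG_PER_1000GT  # deg C

print(f"Avoided emissions: {avoided_emissions:.0f} GtCO2")
print(f"Avoided warming over {YEARS} years: ~{avoided_warming:.2f} C")
```

With these assumptions the answer comes out at a few hundredths of a degree over three decades – a tiny fraction of a degree, as claimed.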
The head posting here mentions “self driving automobile defects”, etc., and the fact that these keep occurring is a pretty strong indication to me that ‘strong’ AI (i.e., AI capable of truly human-like perception and reasoning) is actually something that no one yet knows how to build. Barring some kind of serious breakthrough, computers and computer software are just tools that allow us to do things we couldn’t manage otherwise.
Having said that, the degree to which computers can add to our capabilities is truly remarkable. As one quite “ancient” example of this, think of the 1969 Project Apollo moon landing – the kind of thing sci-fi writers had been going on about forever, but which would have been impossible without a computer.
Lots of articles are available about this; here is one:
https://www.theatlantic.com/science/archive/2019/07/underappreciated-power-apollo-computer/594121/
From the article: “[The Apollo team] came up with what they named ‘The Interpreter’—we’d now call it a virtualization scheme. It allowed them to run five to seven virtual machines simultaneously in two kilobytes of memory. It was terribly slow, but ‘now you have all the capabilities you ever dreamed of, in software.’”
That would be 2 KB of RAM; presumably the core rope ‘ROM’ was somewhat more than that? Amazing, anyway – almost as unbelievable as if someone were to try making an autopilot out of maybe 10 or so old TI-58C calculators.
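For readers who haven’t met the idea, the trick described in that quote is the one behind every bytecode interpreter since: encode programs as compact data and execute them with one small dispatch loop, trading speed for memory and flexibility. A toy sketch of the principle – not the actual AGC instruction set, which this makes no attempt to reproduce:

```python
# Toy bytecode interpreter: programs are compact lists of (op, arg) pairs,
# executed by a single small dispatch loop -- the same principle as the
# Apollo "Interpreter", though nothing like its real instruction set.
def run(program):
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4 encoded as five tiny instructions:
program = [
    ("push", 2), ("push", 3), ("add", None),
    ("push", 4), ("mul", None),
]
print(run(program))  # 20
```

The interpreter itself is tiny and fixed; all the “capability” lives in the program data, which is why the approach suited a machine with only a couple of kilobytes of erasable memory.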