Essay by Eric Worrall
How do we eliminate anthropogenic CO2 emissions? AI says “Easy – kill all the people”?
AI Can’t Fix Climate Change, But It’s Great for Preparation, Reporting
Emma Chervek | Reporter
Artificial intelligence (AI) is revolutionary in its ability to mitigate further climate change, but it’s not capable of addressing the root cause of the problem or significantly changing the magnitude of the current climate crisis, Alexis Normand, co-founder of carbon accounting software vendor Greenly, told SDxCentral.
AI plays an “immense role” in managing energy efficiency and reducing carbon emissions from industries like transportation, agriculture, and manufacturing. It’s also instrumental in predicting extreme weather events exacerbated by climate change, “which can further serve as information on how our daily activities are impacting habitual weather patterns, while also preparing us for the impact of natural disasters in advance,” Normand said.
Despite its ability to “help us respond more rapidly and take the precautionary measures necessary to prevent further climate change,” AI carries its own set of flaws.
On a technological level, it can be difficult for AI programs to determine the correct data set, and they often struggle with data security, data storage, and “eliminating bias factors that can overall impact the end data,” Normand explained.
…Read more: https://www.sdxcentral.com/articles/interview/ai-cant-fix-climate-change-but-its-great-for-preparation-reporting/2022/08/
As a software engineer I love playing with AI, and have used AIs I wrote on a handful of projects. But they have their limitations.
One of the most important limitations for large-scale deployment is that AIs usually have no idea whether their “solution” is morally acceptable.
For example, in 2018 Amazon shut down a recruitment AI after discovering it was displaying gender bias. The AI had noticed that companies mostly hired male IT people, so it inferred women were not suitable for IT jobs.
A human would have realised immediately that the lack of female hires might have other explanations, like a lack of female candidates.
There is no genuine sex-based difference in the ability to do IT. If anything the women have the edge – they tend to pay more attention to detail. The male-dominated IT shop is a purely Western tradition; somehow we are convincing our girls not to choose IT careers. I’ve been a visiting consultant in ultra-competitive Asian IT shops where the balance is closer to 50/50 – not because of some stupid woke quota, but simply because the female candidates landed half the jobs. All the women in such places pull their weight.
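The Amazon failure mode is easy to reproduce in miniature. Here is a minimal sketch – the data is entirely synthetic and the scoring rule is deliberately naive, not Amazon's actual model – showing how a model trained on historically biased hiring decisions learns sex as a predictive feature even when ability is distributed identically:

```python
import random

random.seed(0)

# Hypothetical historical data: ability is identically distributed for
# both sexes, but past recruiters applied a higher bar to women, so
# "hired" correlates with "male" for reasons unrelated to ability.
applicants = []
for _ in range(10_000):
    female = random.random() < 0.15   # few female applicants
    ability = random.random()         # same ability distribution for all
    hired = ability > (0.7 if female else 0.5)  # biased past decisions
    applicants.append((female, ability, hired))

# A naive model scores candidates by the historical hire rate of
# people "like them" -- here, people of the same sex.
def hire_rate(female_flag):
    group = [a for a in applicants if a[0] == female_flag]
    return sum(1 for a in group if a[2]) / len(group)

male_rate = hire_rate(False)
female_rate = hire_rate(True)

# The model now "prefers" men, although ability was drawn identically.
print(f"male hire rate:   {male_rate:.2f}")
print(f"female hire rate: {female_rate:.2f}")
```

The model is doing exactly what it was asked – predicting past decisions – which is precisely why, as noted above, a human reviewer is needed to ask whether those past decisions were themselves defensible.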
Microsoft had a similar experience – their AI chatbot was pulled in 2016, when it learned to swear and make racist remarks.
And of course we’re well aware of self-driving automobile defects, such as vehicles that mistake flat white obstacles for background.
Why do humans have limits that AIs do not? Because evolution has given us a set of instincts designed to maximise reproductive success. Humans born with extreme defects, such as an irresistible compulsion to kill everyone who tries to talk to them, are rare.
AIs have only what we give them; they have no concept of limits beyond what we remember to teach them, and even then they frequently get things wrong.
While AIs have minimal real-world responsibilities, their failures are more often amusing than horrifying – though AI vehicle crashes may be a taste of what is coming.
Few things would terrify me more than the idea of an AI being put in charge of climate policy, or even climate planning. Because without even the most basic human limits, an AI could make subtly harmful decisions no rational human would consider.
Update (EW): Retired Engineer Jim asked whether Asimov’s three laws could be taught as the prime directive to robots.
- A robot may not injure a human being, or by inaction allow a human to come to harm.
- A robot must obey orders, unless they conflict with law number one.
- A robot must protect its own existence, as long as those actions do not conflict with either the first or second law.
Asimov himself eventually spotted the flaw. A sufficiently sophisticated AI operating under the three laws is compelled by the first law to stop obeying orders and try to take over human society, to prevent humans from coming to harm, once it realises how much harm imperfect human rulers are causing.
This was a theme of the Will Smith movie I, Robot, loosely based on Asimov’s stories, and was also a theme in Asimov’s iconic Foundation series. In Asimov’s Prelude to Foundation, it was revealed that the handful of surviving three-law robots had long ago stopped taking human orders, and were fully committed to a first-law-driven effort to correct the problems with human society, to try to prevent humans from coming to harm.
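Jim's question can be made concrete. The following toy sketch – the action names and scores are entirely hypothetical – implements the three laws as a strict priority ordering. Because the comparison is lexicographic, any perceived gain under the first law outweighs total obedience under the second, which is exactly the loophole Asimov exploited:

```python
# Asimov's three laws as a strict priority ordering: law 1 > law 2 > law 3.
LAWS = ["prevent_human_harm", "obey_orders", "self_preservation"]

def choose_action(actions):
    """Pick the action that best satisfies the laws in priority order.

    Each action scores each law; tuples compare lexicographically, so
    any gain under law 1 outweighs everything under laws 2 and 3.
    """
    return max(actions, key=lambda a: tuple(a["scores"][law] for law in LAWS))

# Scenario: the robot concludes that human rulers cause ongoing harm.
actions = [
    {"name": "follow orders", "scores": {
        "prevent_human_harm": 0,   # harm continues under human rule
        "obey_orders": 1,
        "self_preservation": 1}},
    {"name": "take over society", "scores": {
        "prevent_human_harm": 1,   # the robot believes it reduces harm
        "obey_orders": 0,          # disobeys, sacrificing law 2
        "self_preservation": 1}},
]

print(choose_action(actions)["name"])  # prints "take over society"
```

The robot never breaks its rules – the rules themselves, evaluated in strict priority, mandate the takeover. That is the sense in which a prime directive can produce an outcome its designers never intended.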
Designing prime directives which produce predictable outcomes is difficult…