Timnit Gebru. By TechCrunch - link, CC BY 2.0, link

Google Fires Ethics Heads for Questioning the Global Warming Impact of AI

Guest essay by Eric Worrall

When the woke outwoke the woke. Back in December, Google fired AI Ethics Unit co-leader Timnit Gebru, in relation to her paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”. Google have now fired their other ethics head, Margaret Mitchell, apparently for trying to gather evidence while investigating the ousting of Timnit.

I’m fired: Google AI in meltdown as ethics unit co-lead forced out just weeks after coworker ousted

Plus: IBM reportedly trying to sell Watson AI Health, and more

Katyanna Quach Mon 22 Feb 2021 // 12:21 UTC

Google has finished its probe into the controversial ousting of Timnit Gebru, co-leader of its Ethical AI unit. The ad giant promised to implement new procedures around “potentially sensitive employee exits,” though it did not make its findings public.

Gebru said she was fired for warning coworkers in an internal memo that, due to management apathy, it was a waste of energy trying to foster diversity, equality, and inclusion within the Silicon Valley goliath. Google claimed she effectively resigned.

Meanwhile, Margaret Mitchell, who also co-led the Ethical AI unit alongside Gebru, said on Friday she has been fired. Mitchell had been locked out of her corporate account for weeks.

Read more: https://www.theregister.com/2021/02/22/in_brief_ai/

From the paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”:

3 ENVIRONMENTAL AND FINANCIAL COST

Strubell et al. recently benchmarked model training and development costs in terms of dollars and estimated CO2 emissions [129]. While the average human is responsible for an estimated 5t CO2e per year, the authors trained a Transformer (big) model [136] with neural architecture search and estimated that the training procedure emitted 284t of CO2. Training a single BERT base model (without hyperparameter tuning) on GPUs was estimated to require as much energy as a trans-American flight.

While some of this energy comes from renewable sources, or cloud compute companies’ use of carbon credit-offset sources, the authors note that the majority of cloud compute providers’ energy is not sourced from renewable sources and many energy sources in the world are not carbon neutral. In addition, renewable energy sources are still costly to the environment, and data centers with increasing computation requirements take away from other potential uses of green energy, underscoring the need for energy efficient model architectures and training paradigms.

Read more: http://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf
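To put the quoted figures in perspective, here is a quick back-of-envelope calculation using only the numbers cited in the excerpt above (5t CO2e per person per year, 284t for one neural-architecture-search training run):

```python
# Back-of-envelope comparison using only the figures quoted from the paper.
AVG_HUMAN_T_CO2E_PER_YEAR = 5   # estimated average human footprint, per the excerpt
NAS_TRAINING_RUN_T_CO2 = 284    # Transformer (big) + neural architecture search

person_years = NAS_TRAINING_RUN_T_CO2 / AVG_HUMAN_T_CO2E_PER_YEAR
print(f"One training run is roughly {person_years:.1f} person-years of emissions")
```

In other words, by the paper’s own numbers, one such training run matches nearly six decades of an average person’s emissions.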

Timnit has also criticised other issues with AIs. For example, in 2018 she helped stop the rollout of an Amazon facial recognition system being used by police agencies, by demonstrating that the flawed Amazon system was 34% less accurate at correctly identifying black women than at identifying white men. The problem: the dataset used to train the AI mostly contained white faces.
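For readers curious how a disparity like that 34% figure is arrived at, the sketch below compares per-group accuracy; the counts are hypothetical placeholders, not the actual study data:

```python
# Illustrative sketch of measuring a per-group accuracy gap.
# Counts below are hypothetical, chosen only to reproduce a 34-point gap.

def accuracy(correct, total):
    """Fraction of test images identified correctly."""
    return correct / total

white_men_acc = accuracy(990, 1000)    # hypothetical: 99.0% correct
black_women_acc = accuracy(650, 1000)  # hypothetical: 65.0% correct

gap = white_men_acc - black_women_acc
print(f"Accuracy gap: {gap:.0%}")
```

The measurement itself is simple; the hard part, as the article notes, is assembling a test set balanced enough to expose the gap in the first place.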

What can I say – losing one ethics head could be an accident. Losing two in quick succession starts to look like carelessness, perhaps even raises suspicions that what Google really wants is a compliant ethics team which does whatever top management tells them to do.

94 Comments
JEHILL
February 23, 2021 1:09 pm

Reading this, it sounds like this human was fired more for questioning the management’s commitment to diversity and equality, and commenting on it publicly, than for this human’s paper on the tonnage of CO2 emitted.

Now, this human’s paper was dumb, stupid and unnecessary for a modeling project. There’s a piece of equipment attached to every building that is connected to an electrical grid, metering the electricity delivered to it from the grid. Not to mention all the power monitoring devices in server farms. Take a reading before you start your model processing; take a reading after your model completes its processing phase. The difference is the energy consumed.
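JEHILL’s meter-reading approach fits in a few lines; the readings and the grid carbon-intensity figure below are hypothetical placeholders, not real data:

```python
# Sketch of the before/after meter-reading approach described above.
# All numbers are hypothetical; real grid carbon intensity varies by region.
GRID_KG_CO2_PER_KWH = 0.4  # assumed average grid emissions factor

def training_footprint(meter_before_kwh, meter_after_kwh):
    """Return (energy consumed, estimated CO2) from two meter readings."""
    energy_kwh = meter_after_kwh - meter_before_kwh
    co2_kg = energy_kwh * GRID_KG_CO2_PER_KWH
    return energy_kwh, co2_kg

energy, co2 = training_footprint(120_000.0, 145_000.0)  # hypothetical readings
print(f"{energy:.0f} kWh consumed, roughly {co2:.0f} kg CO2")
```

The only judgment call is the emissions factor: metered kWh is a hard number, while kg of CO2 per kWh depends entirely on where the data center buys its power.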

Zig Zag Wanderer
February 23, 2021 1:24 pm

what Google really wants is a compliant ethics team which does whatever top management tells them to do.

Calling Peter Gleick. Urgent message for Peter Gleick!

Clyde Spencer
February 23, 2021 2:05 pm

Losing two in quick succession starts to look like carelessness, perhaps even raises suspicions that what Google really wants is a compliant ethics team which does whatever top management tells them to do.

An alternative hypothesis is that gender and ethnicity were given more weight in the hiring decision than competence. Perhaps the AI Ethics Unit just realized that.

commieBob
Reply to  Clyde Spencer
February 23, 2021 2:23 pm

This reminds me of the Canadian Content rules for Canadian broadcasters.

commieBob
Reply to  commieBob
February 23, 2021 2:42 pm

Darn.

This reminds me of the Canadian Content rules for Canadian broadcasters.

There was a time when most of the music played on Canadian radio stations was pop music from the ‘States. Canadian artists, for whatever reason, didn’t stand a chance of getting on the air.

Was it because American music was superior? Some of it clearly was. On the other hand, most of the music on the radio was crappy.

Someone in the Canadian government observed that our crappy music was every bit as good as crappy American music.

I will riff on that and say that the crappy third rate work of anyone of any sex and gender and ethnicity is every bit as good as the third rate crappy work turned out by most male Caucasian scholars.

Mumbles McGuirck
Reply to  commieBob
February 23, 2021 3:09 pm

Mocking the Canadian Content rules is why Dave Thomas and Rick Moranis created the McKenzie brothers and their “Great White North” show.
Cooorooka coocacoo
Cooorooka coocacoo
eh?

Reply to  Mumbles McGuirck
February 24, 2021 9:11 am

Loved that skit on Second City! And The Kids in the Hall! And all the great Canadian bands (The Guess Who, BTO, Tragically Hip, Bryan Adams, Blue Rodeo, etc). Judging from that success and lack of suckiness, the Canadian Content Rules tried to fight the herd mentality in pop culture – start with a band that finds some success and pretty soon it goes viral and is saturation-bombed everywhere, over and over, by radio stations. That’s bad enough inside the States itself, where other good bands might as well take a break from touring while the new fad hogs the limelight. But it’s even worse for those outside the States.

MarkW
Reply to  commieBob
February 23, 2021 4:21 pm

If you assume that good musicians are randomly distributed throughout the population, then the odds are that a population that is 10 times larger will have 10 times as many good musicians.

commieBob
Reply to  MarkW
February 23, 2021 8:41 pm

The market for music, and many other things, is described by a Pareto distribution.

February 23, 2021 2:50 pm

The only irony of note is combining Google and ethics in a single sentence.

dave
February 23, 2021 3:28 pm

The horror…the horror…

observa
February 23, 2021 3:45 pm

Nothing to see here folks, move along –
Google Engineers Explain Why They Stopped R&D in Renewable Energy | Greentech Media
Google don’t need no steenking facts as there’s a new administration in town.

Tsk Tsk
February 23, 2021 3:52 pm

There are no good guys here. All of them should lose.

Lrp
February 23, 2021 5:20 pm

She could try doing some honest work now

Neo
February 23, 2021 5:22 pm

But beyond that, we call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms. Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups. Thus what is also needed is scholarship on the benefits, harms, and risks of mimicking humans and thoughtful design of target tasks grounded in use cases sufficiently concrete to allow collaborative design with affected communities.

ResourceGuy
February 23, 2021 5:47 pm

Now turn the job over fully to the computers to automate the process of splintering society into Planck lengths.

John Dueker
February 23, 2021 7:05 pm

First, Google has only suppression of truth and dissent as their core ethics.

Second, this isn’t AI. It’s a kludge of if-then-what-ifs, with only the programmer’s intelligence.

fred250
Reply to  John Dueker
February 23, 2021 8:08 pm

Artificial…

but NOT intelligent !

February 24, 2021 12:23 am

Google is evil, period, end of story.

James
February 24, 2021 2:45 am

Eventually they WILL COME FOR YOU TOO

ozspeaksup
February 24, 2021 3:56 am

yeah cloud n other servers’ energy use is massive
as is the water use for cooling
funny how sensitive they get when that’s raised, isn’t it?

Afterthought
February 24, 2021 4:06 am

It requires a total or near total ban. That is the ethical solution.

Matthew Siekierski
February 24, 2021 4:54 am

Ethics committees don’t exist to promote ethical behavior, they exist to justify the decisions already made elsewhere. Medical ethics boards are prime examples. “It’s ethical to remove food and water from this person because their quality of life is so poor.” “It’s ethical to experiment using human embryos because of the life-saving potential.”

Google expects the same thing from their ethics team. “CO2 is bad, but our increased emissions are ethical because we’re using them to improve society. Everyone else, though, should emit less, those uncaring bastards.”

They want a rubber-stamp of “this is ethical”, not an actual look at the ethics of their practices.

Ferdberple
February 24, 2021 8:13 am

Losing two in quick succession
======
Google fails to recognize the Importance of Being Earnest.

Reply to  Ferdberple
February 24, 2021 9:25 am

“Timnit has also criticised other issues with AIs, for example in 2018 she helped stop the rollout of an Amazon facial recognition system being used by police agencies, by demonstrating the flawed Amazon system was 34% less capable of correctly identifying black women, compared to its ability to correctly identify white men. The problem – the dataset used to train the AI mostly contained white faces.” What a racist organisation! Even in the artificial “intelligence” sphere.

Bill Parsons
February 24, 2021 10:22 am

Web site like this make mokery of oppressed peeples. me native tong is Tlingit and haff time kant read. because need trnslation sevices. Tlingit peeples need to rize up for they rites.

How you like need to go in Tlingit-speaking countries, not no where the man’s room is? Then show is on other feet!

Tlingit need equal rites.

Bill Parsons
Reply to  Bill Parsons
February 24, 2021 4:33 pm

White man come, kill our women, rake our buffalo, now this… warming take our icees. where the great narwhal hunts of yesteryear? Gone!

Fresh water
February 24, 2021 12:17 pm

Having an ethics leader is just virtue signaling. Fine, so long as they don’t mess with the $$$ or question upper management.

Xinnie the Pooh
February 25, 2021 1:05 am

AI is already voting for you

observa
February 25, 2021 8:12 am