Last month, Google artificial intelligence researcher Timnit Gebru received disturbing news from a senior manager. She says the manager asked her to either retract or remove her name from a research paper she had coauthored, because an internal review had found the contents objectionable.
The paper discussed ethical issues raised by recent advances in AI technology that works with language, which Google has said is important to the future of its business. Gebru says she objected because the process was unscholarly. On Wednesday she said she was fired. A Google spokesperson said she was not fired but resigned, and declined further comment.
Gebru’s tweets about the incident Wednesday night triggered an outpouring of support from AI researchers at Google and elsewhere, including top universities and companies such as Microsoft and chipmaker Nvidia. Many said Google had tarnished its reputation in the crucial field, which CEO Sundar Pichai says underpins the company’s business. Late Thursday, more than 200 Google employees signed an open letter calling on the company to release details of its handling of Gebru’s paper and to commit to “research integrity and academic freedom.”
The controversy highlights tension between the human and ethical consequences of AI development and the fact that much leading AI research is underwritten by companies motivated by the technology’s profit-making potential. Gebru is a superstar of a recent movement in AI research to consider the ethical and societal impacts of the technology. She helped assemble and lead a small team of computer and social scientists dedicated to ethics research inside Google’s AI research group.
Gebru says she pushed back on the treatment of her work out of concern for the future of AI ethics research at Google and people who work on it. “You’re not going to have papers that make the company happy all the time and don’t point out problems,” she says. “That’s antithetical to what it means to be that kind of researcher.”
Gebru, a Black woman, also suspects her history of speaking up inside Google about the lack of diversity among the company’s workforce and the treatment of minority employees may have contributed to her dismissal. Google employees have protested and walked out in recent years over the company’s treatment of women and minorities and over its ethical stances on AI technology.
News that Gebru was suddenly an ex-Googler came the same day the National Labor Relations Board said Google wrongly fired two workers last year who were involved in labor organizing. One of them tweeted in support of Gebru Wednesday, hoping the NLRB would “acknowledge what is happening to Timnit sooner.”
Before joining Google in 2018, Gebru worked with MIT researcher Joy Buolamwini on a project called Gender Shades that revealed face analysis technology from IBM and Microsoft was highly accurate for white men but highly inaccurate for Black women. It helped push US lawmakers and technologists to question and test the accuracy of face recognition on different demographics, and contributed to Microsoft, IBM, and Amazon announcing they would pause sales of the technology this year. Gebru also cofounded an influential conference called Black in AI that tries to increase the diversity of researchers contributing to the field.
Gebru’s departure was set in motion when she collaborated with researchers inside and outside of Google on a paper discussing ethical issues raised by recent advances in AI language software.
By Tom Simonite
Researchers have made leaps of progress on problems like generating text and answering questions by creating giant machine learning models trained on huge swaths of online text. Google has said that technology has made its lucrative, eponymous search engine more powerful. But researchers have also shown that creating these more powerful models consumes large amounts of electricity because of the vast computing resources required, and documented how the models can replicate biased language on gender and race found online.
Gebru says her draft paper discussed those issues and urged responsible use of the technology, for example by documenting the data used to create language models. She was troubled when the senior manager insisted she and other Google authors either remove their names from the paper or retract it altogether, particularly when she couldn’t learn the process used to review the draft. “I felt like we were being censored and thought this had implications for all of ethical AI research,” she says.
Gebru says she failed to convince the senior manager to work through the issues with the paper; she says the manager insisted that she remove her name. Tuesday Gebru emailed back offering a deal: If she received a full explanation of what happened, and the research team met with management to agree on a process for fair handling of future research, she would remove her name from the paper. If not, she would arrange to depart the company at a later date, leaving her free to publish the paper without the company’s affiliation.
Gebru also sent an email to a wider list within Google’s AI research group saying that managers’ attempts to improve diversity had been ineffective. She included a description of her dispute about the language paper as an example of how Google managers can silence people from marginalized groups. Platformer published a copy of the email Thursday.
Wednesday, Gebru says, she learned from her direct reports that they had been told she had resigned from Google and that her resignation had been accepted. She discovered her corporate account was disabled.
An email sent by a manager to Gebru’s personal address said her resignation should take effect immediately because she had sent an email reflecting “behavior that is inconsistent with the expectations of a Google manager.” Gebru took to Twitter, and outrage quickly grew among AI researchers online.
Many criticizing Google, both from inside and outside the company, noted that the company had at a stroke damaged the diversity of its AI workforce and also lost a prominent advocate for improving that diversity. Gebru suspects her treatment was in part motivated by her outspokenness around diversity and Google’s treatment of people from marginalized groups. "We have been pleading for representation, but there are barely any Black people in Google Research, and, from what I see, none in leadership whatsoever," she says.
Thursday, Google’s research head Jeff Dean sent an email to company researchers claiming that Gebru’s paper “didn’t meet our bar for publication” and that she had submitted it for internal review later than the company requires.
His message also suggested the disputed paper was perceived as too negative by Google managers. Dean said the document discussed the environmental impact of large AI models but not research showing they could be made more efficient, and raised concerns about biased language without considering work on mitigating that bias. Some Google AI researchers disputed Dean’s characterization on Twitter; one accused him of spreading “misinformation and misconstruals.” Another Google researcher observed that his own papers were screened internally only for disclosure of sensitive information, not for what work was cited. The disputed paper is undergoing peer review by an academic conference independent of Google and may still be published in some form.
Dean’s intervention fueled the anger felt by some AI researchers sympathetic to Gebru’s cause—something that could damage Google’s ability to retain and hire top AI talent pursued vigorously by all major tech companies.
“Even if we put aside the censorship of the article, firing a researcher that way is chilling,” says Julien Cornebise, an honorary associate professor at University College London who previously worked at Alphabet’s London AI lab DeepMind and has talked with AI researchers inside and outside of Google about Gebru’s plight. “There’s a feeling of incredulity, astonishment, and outrage.”