Margaret Mitchell was the co-leader of a group investigating ethics in AI, alongside Timnit Gebru, who said she was fired in December.
FOR THE SECOND time in three months, a prominent researcher on ethics in artificial intelligence says Google fired her.
On Friday, researcher Margaret Mitchell said she had been fired from the company’s AI lab, Google Brain, where she previously co-led a group working on ethical approaches to artificial intelligence.
Her former co-leader of that group, Timnit Gebru, departed Google in December. Gebru said she had been fired after refusing to retract or remove her name from a research paper that urged caution with AI systems that process text, including technology Google uses in its search engine. Gebru has said she believes that disagreement may have been used as a pretext for removing her because of her willingness to speak out about Google’s poor treatment of Black employees and women.
Mitchell learned she had been let go in an email Friday afternoon. Inside Google, her old team was informed by a manager that she would not be returning from a suspension that began last month. The wider world found out when Mitchell posted two words on Twitter: “I’m fired.”
In a statement, a Google spokesperson said Mitchell had shared “confidential business-sensitive documents and private data of other employees” outside the company. After Mitchell’s suspension last month, Google said activity in her account had triggered a security system. A source familiar with Mitchell’s suspension said she had been using a script to search her email for material related to Gebru’s time at the company.
Gebru, Mitchell, and their ethical AI team at Google were prominent contributors to a recent surge of research seeking to understand and mitigate the potential downsides of AI. Their work contributed to decisions by Google executives to limit some of the company’s AI offerings, such as retiring a feature of an image recognition service that attempted to identify the gender of people in photos.
The two women’s acrimonious exits from Google have drawn new attention to the tensions inherent in companies seeking profits from AI while also retaining staff to investigate what limits should be placed on the technology. After Gebru’s departure, some AI experts said they were unsure whether to trust Google’s work on such questions.
Google’s AI research boss, Jeff Dean, has previously said the research paper that led to Gebru’s departure was of poor quality and overlooked some work on ways to fix flaws in AI text systems. Researchers inside and outside Google have disputed that characterization. More than 2,600 Google employees signed a letter protesting Gebru’s treatment.
Late in December, the paper was accepted at a leading conference on fairness in machine learning to be presented next month. It has Gebru’s name attached, without a Google affiliation, alongside “Shmargaret Schmitchell,” an apparent pseudonym of Mitchell’s, and two researchers from the University of Washington. On Friday, Alex Hanna, a member of Google’s ethical AI group still at the company, tweeted that Google was “running a smear campaign” against Gebru and Mitchell.
On Thursday, Google said it remains dedicated to developing AI technology responsibly and announced that Marian Croak, a vice president at the company, would take over that work. In a company video, Croak said she would “consolidate” work on responsible AI at the company and institute a review of Google AI systems that have been deployed or are in development.
In an internal memo on Friday, seen by Wired, Dean said Google would also rework its process for reviewing research publications and require senior executives to show progress on improving diversity among employees at the company. Gebru has said she previously requested changes on both fronts with little success.
Dean’s memo also acknowledged that Gebru’s departure had led some Googlers to question their futures at the company, and may have deterred Black technologists from working in the industry. “I understand we could have and should have handled this situation with more sensitivity,” he wrote. “For that, I am sorry.”