A Second AI Researcher Says She Was Fired by Google

Margaret Mitchell was the co-leader of a group investigating ethics in AI, alongside Timnit Gebru, who said she was fired in December.

FOR THE SECOND time in three months, a prominent researcher on ethics in artificial intelligence says Google fired her.

On Friday, researcher Margaret Mitchell said she had been fired from the company’s AI lab, Google Brain, where she previously co-led a group working on ethical approaches to artificial intelligence.

Her former co-leader of that group, Timnit Gebru, departed Google in December. Gebru said she had been fired after refusing to retract or remove her name from a research paper that urged caution with AI systems that process text, including technology Google uses in its search engine. Gebru has said she believes that disagreement may have been used as a pretext for removing her because of her willingness to speak out about Google’s poor treatment of Black employees and women.

Mitchell learned she had been let go in an email Friday afternoon. Inside Google, her old team was informed by a manager that she would not be returning from a suspension that began last month. The wider world found out when Mitchell posted two words on Twitter: “I’m fired.”

In a statement, a Google spokesperson said Mitchell had shared “confidential business-sensitive documents and private data of other employees” outside the company. After Mitchell’s suspension last month, Google said activity in her account had triggered a security system. A source familiar with Mitchell’s suspension said she had been using a script to search her email for material related to Gebru’s time at the company.

Gebru, Mitchell, and their ethical AI team at Google were prominent contributors to the recent growth in research by AI experts seeking to understand and mitigate potential downsides of AI. They contributed to decisions by Google executives to limit some of the company’s AI offerings, such as by retiring a feature of an image recognition service that attempted to identify the gender of people in photos.

The two women’s acrimonious exits from Google have drawn new attention to the tensions inherent in companies seeking profits from AI while also retaining staff to investigate what limits should be placed on the technology. After Gebru’s departure, some AI experts said they were unsure whether to trust Google’s work on such questions.

Google’s AI research boss, Jeff Dean, has previously said the research paper that led to Gebru’s departure was of poor quality because it omitted mention of work on ways to fix flaws in AI text systems. Researchers inside and outside of Google have disputed that characterization. More than 2,600 Google employees signed a letter protesting Gebru’s treatment.

Late in December, the paper was accepted at a leading conference on fairness in machine learning to be presented next month. It has Gebru’s name attached, without a Google affiliation, alongside “Shmargaret Schmitchell,” an apparent pseudonym of Mitchell’s, and two researchers from the University of Washington. On Friday, Alex Hanna, a member of Google’s ethical AI group still at the company, tweeted that Google was “running a smear campaign” against Gebru and Mitchell.

On Thursday, Google said it remains committed to developing AI technology responsibly and announced that Marian Croak, a vice president at the company, would take over that work. In a company video, Croak said she would “consolidate” work on responsible AI at the company and institute a review of Google AI systems that have been deployed or are in development.

In an internal memo on Friday, seen by Wired, Dean said Google would also rework its process for reviewing research publications and require senior executives to show progress on improving diversity among employees at the company. Gebru has said she previously requested changes on both fronts with little success.

Dean’s memo also acknowledged that Gebru’s departure from the company had led some Googlers to question their futures at the company, and may have deterred Black technologists from working in the industry. “I understand we could have and should have handled this situation with more sensitivity,” he wrote. “For that, I am sorry.”

Warning! Everything Is Going Deep: ‘The Age of Surveillance Capitalism’

Deep learning, deep insights, deep artificial minds — the list goes on and on. But with unprecedented promise comes some unprecedented peril.

Around the end of each year, major dictionaries declare their “word of the year.” Last year, for instance, the most looked-up word at Merriam-Webster.com was “justice.” Well, even though it’s early, I’m ready to declare the word of the year for 2019.

The word is “deep.”

Why? Because recent advances in the speed and scope of digitization, connectivity, big data and artificial intelligence are now taking us “deep” into places and into powers that we’ve never experienced before — and that governments have never had to regulate before. I’m talking about

  • deep learning,
  • deep insights,
  • deep surveillance,
  • deep facial recognition,
  • deep voice recognition,
  • deep automation and
  • deep artificial minds.

Which is why it may not be an accident that one of the biggest hit songs today is “Shallow,” from the movie “A Star Is Born.” The main refrain, sung by Lady Gaga and Bradley Cooper, is: “I’m off the deep end, watch as I dive in. … We’re far from the shallow now.”

We sure are. But the lifeguard is still on the beach and — here’s what’s really scary — he doesn’t know how to swim! More about that later. For now, how did we get so deep down where the sharks live?

The short answer: Technology moves up in steps, and each step, each new platform, is usually biased toward a new set of capabilities. Around the year 2000 we took a huge step up that was biased toward connectivity, because of the explosion of fiber-optic cable, wireless and satellites.

Suddenly connectivity became so fast, cheap, easy for you and ubiquitous that it felt like you could touch someone whom you could never touch before and that you could be touched by someone who could never touch you before.

Around 2007, we took another big step up. The iPhone, sensors, digitization, big data, the internet of things, artificial intelligence and cloud computing melded together and created a new platform that was biased toward abstracting complexity at a speed, scope and scale we’d never experienced before.

So many complex things became simplified. Complexity became so fast, free, easy to use and invisible that soon with one touch on Uber’s app you could page a taxi, direct a taxi, pay a taxi, rate a taxi driver and be rated by a taxi driver.

Why Human Chess Survives

When Kasparov next had to defend his title against a human challenger, match organizers found it much more difficult to raise a suitably large purse than in pre-Deep Blue days. Sponsors would invariably ask, “Wait, what am I paying for, isn’t the computer the real world champion?” Fast-forward to today, and the top players cannot easily beat their cell phones.

Yet, rather than dying, chess has thrived. This is partly because the advent of computers and computer databases has made chess a truly universal sport. Once dominated by Russia, the game saw Vishy Anand of India hold the title before Carlsen, and China’s Ding Liren seems on track to be the next challenger. Parents, despondent over their children’s addiction to video games, are much happier to see them playing chess against a computer.

The advent of computers has required some adjustments in top tournaments. It helps that even the best computer programs do not play chess perfectly, because the number of possible games is greater than the number of atoms in the universe. Moreover, computers think so differently that it is not always helpful to know the computer’s favored move unless one can tediously follow reams of subsequent analysis. It is not unusual for a player to comment, “The computer says the best move is x, but I played the best human move.”

If a player is suspected of cheating, the organizers can check their moves against the choices of the top computer programs. If the correlation is too high, the player is subject to ejection.

Just as tied World Cup matches end with a shootout, a chess championship can come down to an “Armageddon” tie-breaker, in which the games are sped up so much that it is virtually impossible to avoid big mistakes. In the end, Carlsen convincingly prevailed in the tie-breaker, in very human fashion. But we should all celebrate.