Annotations of Google Tweet about AI Jobs

Translation: We fired and harassed out our top AI researchers from historically marginalized groups who did the work for us, and we are now looking for more people from historically marginalized groups to burn out, exploit, and expend.
Quote Tweet
@JeffDean
I encourage students from historically marginalized groups who are interested in learning to conduct research in AI/ML, CS or related areas to consider applying for our CSRMP mentorship program! We have 100s of researchers @GoogleAI who are excited to work with you. twitter.com/GoogleAI/statu…
@hondanhon
the birdhouse annotations on that tweet must be something to behold if birdhouse were actually a thing
Jean-Michel Plourde
@j_mplourde
aka: we are looking for more obedient marginalized folks for the PR but yeah don’t cross the line or else…

Timnit Gebru’s Exit From Google Exposes a Crisis in AI

This year has held many things, among them bold claims of artificial intelligence breakthroughs. Industry commentators speculated that the language-generation model GPT-3 may have achieved “artificial general intelligence,” while others lauded Alphabet subsidiary DeepMind’s protein-folding algorithm—Alphafold—and its capacity to “transform biology.” While the basis of such claims is thinner than the effusive headlines, this hasn’t done much to dampen enthusiasm across the industry, whose profits and prestige are dependent on AI’s proliferation.

It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to perform, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for data sets and AI models. Ironically, this and her vocal advocacy for those underrepresented in AI research are also the reasons, she says, the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she hadn’t resigned. (Google declined to comment for this story.)

Google’s appalling treatment of Gebru exposes a dual crisis in AI research. The field is dominated by an elite, primarily white male workforce, and it is controlled and funded primarily by large industry players—Microsoft, Facebook, Amazon, IBM, and yes, Google. With Gebru’s firing, the civility politics that yoked the young effort to construct the necessary guardrails around AI have been torn apart, bringing questions about the racial homogeneity of the AI workforce and the inefficacy of corporate diversity programs to the center of the discourse. But this situation has also made clear that—however sincere a company like Google’s promises may seem—corporate-funded research can never be divorced from the realities of power, and the flows of revenue and capital.

This should concern us all. With the proliferation of AI into domains such as health care, criminal justice, and education, researchers and advocates are raising urgent concerns. These systems make determinations that directly shape lives, at the same time that they are embedded in organizations structured to reinforce histories of racial discrimination. AI systems also concentrate power in the hands of those designing and using them, while obscuring responsibility (and liability) behind the veneer of complex computation. The risks are profound, and the incentives are decidedly perverse.

The current crisis exposes the structural barriers limiting our ability to build effective protections around AI systems. This is especially important because the populations subject to harm and bias from AI’s predictions and determinations are primarily BIPOC people, women, religious and gender minorities, and the poor—those who’ve borne the brunt of structural discrimination. Here we have a clear racialized divide between those benefiting—the corporations and the primarily white male researchers and developers—and those most likely to be harmed.

Take facial-recognition technologies, for instance, which have been shown to “recognize” darker-skinned people less frequently than those with lighter skin. This alone is alarming. But these racialized “errors” aren’t the only problems with facial recognition. Tawana Petty, director of organizing at Data for Black Lives, points out that these systems are disproportionately deployed in predominantly Black neighborhoods and cities, while cities that have had success in banning and pushing back against facial recognition’s use are predominantly white.

Without independent, critical research that centers the perspectives and experiences of those who bear the harms of these technologies, our ability to understand and contest the overhyped claims made by industry is significantly hampered. Google’s treatment of Gebru makes increasingly clear where the company’s priorities seem to lie when critical work pushes back on its business incentives. This makes it almost impossible to ensure that AI systems are accountable to the people most vulnerable to their damage.

Checks on the industry are further compromised by the close ties between tech companies and ostensibly independent academic institutions. Researchers from corporations and academia publish papers together and rub elbows at the same conferences, with some researchers even holding concurrent positions at tech companies and universities. This blurs the boundary between academic and corporate research and obscures the incentives underwriting such work. It also means that the two groups look awfully similar—AI research in academia suffers from the same pernicious racial and gender homogeneity issues as its corporate counterparts. Moreover, the top computer science departments accept copious amounts of Big Tech research funding. We have only to look to Big Tobacco and Big Oil for troubling templates that expose just how much influence over the public understanding of complex scientific issues large companies can exert when knowledge creation is left in their hands.

Gebru’s firing suggests this dynamic is at work once again. Powerful companies like Google have the ability to co-opt, minimize, or silence criticisms of their own large-scale AI systems—systems that are at the core of their profit motives. Indeed, according to a recent Reuters report, Google leadership went as far as to instruct researchers to “strike a positive tone” in work that examined technologies and issues sensitive to Google’s bottom line. Gebru’s firing also highlights the danger the rest of the public faces if we allow an elite, homogenous research cohort, made up of people who are unlikely to experience the negative effects of AI, to drive and shape the research on it from within corporate environments. The handful of people who are benefiting from AI’s proliferation are shaping the academic and public understanding of these systems, while those most likely to be harmed are shut out of knowledge creation and influence. This inequity follows predictable racial, gender, and class lines.

As the dust begins to settle in the wake of Gebru’s firing, one question resounds: What do we do to contest these incentives, and to continue critical work on AI in solidarity with the people most at risk of harm? To that question, we have a few preliminary answers.

First and foremost, tech workers need a union. Organized workers are a key lever for change and accountability, and one of the few forces that has been shown capable of pushing back against large firms. This is especially true in tech, given that many workers have sought-after expertise and are not easily replaceable, giving them significant labor power. Such organizations can act as a check on retaliation and discrimination, and can be a force pushing back against morally reprehensible uses of tech. Just look at Amazon workers’ fight against climate change or Google employees’ resistance to military uses of AI, which changed company policies and demonstrated the power of self-organized tech workers. To be effective here, such an organization must be grounded in anti-racism and cross-class solidarity, taking a broad view of who counts as a tech worker, and working to prioritize the protection and elevation of BIPOC tech workers across the board. It should also use its collective muscle to push back on tech that hurts historically marginalized people beyond Big Tech’s boundaries, and to align with external advocates and organizers to ensure this.

We also need protections and funding for critical research conducted outside the corporate environment and free of corporate influence. Not every company has a Timnit Gebru prepared to push back against reported research censorship. Researchers outside of corporate environments must be guaranteed greater access to technologies currently hidden behind claims of corporate secrecy, such as access to training data sets, and policies and procedures related to data annotation and content moderation. Such spaces for protected, critical research should also prioritize supporting BIPOC, women, and other historically excluded researchers and perspectives, recognizing that racial and gender homogeneity in the field contributes to AI’s harms. This endeavor would need significant funding, which could be achieved through a tax levied on these companies.

Finally, the AI field desperately needs regulation. Local, state, and federal governments must step in and pass legislation that protects privacy and ensures meaningful consent around data collection and the use of AI; increases protections for workers, including whistle-blower protections and measures to better protect BIPOC workers and others subject to discrimination; and ensures that those most vulnerable to the risks of AI systems can contest—and refuse—their use.

This crisis makes clear that the current AI research ecosystem—constrained as it is by corporate influence and dominated by a privileged set of researchers—is not capable of asking and answering the questions most important to those who bear the harms of AI systems. Public-minded research and knowledge creation isn’t just important for its own sake; it also provides essential information for those developing robust strategies for the democratic oversight and governance of AI, and for social movements that can push back on harmful tech and those who wield it. Supporting and protecting organized tech workers, expanding the field that examines AI, and nurturing well-resourced and inclusive research environments outside the shadow of corporate influence are essential steps in providing the space to address these urgent concerns.


A Second AI Researcher Says She Was Fired by Google

Margaret Mitchell was the co-leader of a group investigating ethics in AI, alongside Timnit Gebru, who said she was fired in December.

FOR THE SECOND time in three months, a prominent researcher on ethics in artificial intelligence says Google fired her.

On Friday, researcher Margaret Mitchell said she had been fired from the company’s AI lab, Google Brain, where she previously co-led a group working on ethical approaches to artificial intelligence.

Her former co-leader of that group, Timnit Gebru, departed Google in December. Gebru said she had been fired after refusing to retract or remove her name from a research paper that urged caution with AI systems that process text, including technology Google uses in its search engine. Gebru has said she believes that disagreement may have been used as a pretext for removing her because of her willingness to speak out about Google’s poor treatment of Black employees and women.

Mitchell learned she had been let go in an email Friday afternoon. Inside Google, her old team was informed by a manager that she would not be returning from a suspension that began last month. The wider world found out when Mitchell posted two words on Twitter: “I’m fired.”

In a statement, a Google spokesperson said Mitchell had shared “confidential business-sensitive documents and private data of other employees” outside the company. After Mitchell’s suspension last month, Google said activity in her account had triggered a security system. A source familiar with Mitchell’s suspension said she had been using a script to search her email for material related to Gebru’s time at the company.

Gebru, Mitchell, and their ethical AI team at Google were prominent contributors to the recent growth in research by AI experts seeking to understand and mitigate potential downsides of AI. They contributed to decisions by Google executives to limit some of the company’s AI offerings, such as by retiring a feature of an image recognition service that attempted to identify the gender of people in photos.

The two women’s acrimonious exits from Google have drawn new attention to the tensions inherent in companies seeking profits from AI while also retaining staff to investigate what limits should be placed on the technology. After Gebru’s departure, some AI experts said they were unsure whether to trust Google’s work on such questions.

Google’s AI research boss, Jeff Dean, has previously said the research paper that led to Gebru’s departure was of poor quality and did not mention some work on ways to fix flaws in AI text systems. Researchers inside and outside of Google have disputed that characterization. More than 2,600 Google employees signed a letter protesting Gebru’s treatment.

Late in December, the paper was accepted at a leading conference on fairness in machine learning to be presented next month. It has Gebru’s name attached, without a Google affiliation, alongside “Shmargaret Schmitchell,” an apparent pseudonym of Mitchell’s, and two researchers from the University of Washington. On Friday, Alex Hanna, a member of Google’s ethical AI group still at the company, tweeted that Google was “running a smear campaign” against Gebru and Mitchell.

On Thursday, Google said it remains dedicated to developing AI technology responsibly and announced that Marian Croak, a vice president at the company, would take over that work. In a company video, Croak said she would “consolidate” work on responsible AI at the company and institute a review of Google AI systems that have been deployed or are in development.

In an internal memo on Friday, seen by Wired, Dean said Google would also rework its process for reviewing research publications and require senior executives to show progress on improving diversity among employees at the company. Gebru has said she previously requested changes on both fronts with little success.

Dean’s memo also acknowledged that Gebru’s departure from the company had led some Googlers to question their futures at the company, and may have deterred Black technologists from working in the industry. “I understand we could have and should have handled this situation with more sensitivity,” he wrote. “For that, I am sorry.”