Timnit Gebru’s Exit From Google Exposes a Crisis in AI

This year has held many things, among them bold claims of artificial intelligence breakthroughs. Industry commentators speculated that the language-generation model GPT-3 may have achieved “artificial general intelligence,” while others lauded Alphabet subsidiary DeepMind’s protein-folding algorithm—AlphaFold—and its capacity to “transform biology.” While the basis of such claims is thinner than the effusive headlines, this hasn’t done much to dampen enthusiasm across the industry, whose profits and prestige are dependent on AI’s proliferation.

It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to perform, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for data sets and AI models. Ironically, this and her vocal advocacy for those underrepresented in AI research are also the reasons, she says, the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she hadn’t resigned. (Google declined to comment for this story.)

Google’s appalling treatment of Gebru exposes a dual crisis in AI research. The field is dominated by an elite, primarily white male workforce, and it is controlled and funded primarily by large industry players—Microsoft, Facebook, Amazon, IBM, and yes, Google. With Gebru’s firing, the civility politics that yoked the young effort to construct the necessary guardrails around AI have been torn apart, bringing questions about the racial homogeneity of the AI workforce and the inefficacy of corporate diversity programs to the center of the discourse. But this situation has also made clear that—however sincere a company like Google’s promises may seem—corporate-funded research can never be divorced from the realities of power, and the flows of revenue and capital.

This should concern us all. With the proliferation of AI into domains such as health care, criminal justice, and education, researchers and advocates are raising urgent concerns. These systems make determinations that directly shape lives, at the same time that they are embedded in organizations structured to reinforce histories of racial discrimination. AI systems also concentrate power in the hands of those designing and using them, while obscuring responsibility (and liability) behind the veneer of complex computation. The risks are profound, and the incentives are decidedly perverse.

The current crisis exposes the structural barriers limiting our ability to build effective protections around AI systems. This is especially important because the populations subject to harm and bias from AI’s predictions and determinations are primarily BIPOC people, women, religious and gender minorities, and the poor—those who’ve borne the brunt of structural discrimination. Here we have a clear racialized divide between those benefiting—the corporations and the primarily white male researchers and developers—and those most likely to be harmed.

Take facial-recognition technologies, for instance, which have been shown to “recognize” darker-skinned people less frequently than those with lighter skin. This alone is alarming. But these racialized “errors” aren’t the only problems with facial recognition. Tawana Petty, director of organizing at Data for Black Lives, points out that these systems are disproportionately deployed in predominantly Black neighborhoods and cities, while cities that have had success in banning and pushing back against facial recognition’s use are predominantly white.

Without independent, critical research that centers the perspectives and experiences of those who bear the harms of these technologies, our ability to understand and contest the overhyped claims made by industry is significantly hampered. Google’s treatment of Gebru makes increasingly clear where the company’s priorities seem to lie when critical work pushes back on its business incentives. This makes it almost impossible to ensure that AI systems are accountable to the people most vulnerable to their damage.

Checks on the industry are further compromised by the close ties between tech companies and ostensibly independent academic institutions. Researchers from corporations and academia publish papers together and rub elbows at the same conferences, with some researchers even holding concurrent positions at tech companies and universities. This blurs the boundary between academic and corporate research and obscures the incentives underwriting such work. It also means that the two groups look awfully similar—AI research in academia suffers from the same pernicious racial and gender homogeneity issues as its corporate counterparts. Moreover, the top computer science departments accept copious amounts of Big Tech research funding. We have only to look to Big Tobacco and Big Oil for troubling templates of just how much influence large companies can exert over the public understanding of complex scientific issues when knowledge creation is left in their hands.

Gebru’s firing suggests this dynamic is at work once again. Powerful companies like Google have the ability to co-opt, minimize, or silence criticisms of their own large-scale AI systems—systems that are at the core of their profit motives. Indeed, according to a recent Reuters report, Google leadership went as far as to instruct researchers to “strike a positive tone” in work that examined technologies and issues sensitive to Google’s bottom line. Gebru’s firing also highlights the danger the rest of the public faces if we allow an elite, homogeneous research cohort, made up of people who are unlikely to experience the negative effects of AI, to drive and shape the research on it from within corporate environments. The handful of people who are benefiting from AI’s proliferation are shaping the academic and public understanding of these systems, while those most likely to be harmed are shut out of knowledge creation and influence. This inequity follows predictable racial, gender, and class lines.

As the dust begins to settle in the wake of Gebru’s firing, one question resounds: What do we do to contest these incentives, and to continue critical work on AI in solidarity with the people most at risk of harm? To that question, we have a few, preliminary answers.

First and foremost, tech workers need a union. Organized workers are a key lever for change and accountability, and one of the few forces that has been shown capable of pushing back against large firms. This is especially true in tech, given that many workers have sought-after expertise and are not easily replaceable, giving them significant labor power. Such organizations can act as a check on retaliation and discrimination, and can be a force pushing back against morally reprehensible uses of tech. Just look at Amazon workers’ fight against climate change or Google employees’ resistance to military uses of AI, which changed company policies and demonstrated the power of self-organized tech workers. To be effective here, such an organization must be grounded in anti-racism and cross-class solidarity, taking a broad view of who counts as a tech worker, and working to prioritize the protection and elevation of BIPOC tech workers across the board. It should also use its collective muscle to push back on tech that hurts historically marginalized people beyond Big Tech’s boundaries, and to align with external advocates and organizers to ensure this.

We also need protections and funding for critical research conducted outside the corporate environment and free of corporate influence. Not every company has a Timnit Gebru prepared to push back against reported research censorship. Researchers outside of corporate environments must be guaranteed greater access to technologies currently hidden behind claims of corporate secrecy, such as training data sets and the policies and procedures governing data annotation and content moderation. Such spaces for protected, critical research should also prioritize supporting BIPOC, women, and other historically excluded researchers and perspectives, recognizing that racial and gender homogeneity in the field contributes to AI’s harms. This endeavor would need significant funding, which could be achieved through a tax levied on these companies.

Finally, the AI field desperately needs regulation. Local, state, and federal governments must step in and pass legislation that protects privacy and ensures meaningful consent around data collection and the use of AI; increases protections for workers, including whistle-blower protections and measures to better protect BIPOC workers and others subject to discrimination; and ensures that those most vulnerable to the risks of AI systems can contest—and refuse—their use.

This crisis makes clear that the current AI research ecosystem—constrained as it is by corporate influence and dominated by a privileged set of researchers—is not capable of asking and answering the questions most important to those who bear the harms of AI systems. Public-minded research and knowledge creation aren’t just important for their own sake; they provide essential information for those developing robust strategies for the democratic oversight and governance of AI, and for social movements that can push back on harmful tech and those who wield it. Supporting and protecting organized tech workers, expanding the field that examines AI, and nurturing well-resourced and inclusive research environments outside the shadow of corporate influence are essential steps in providing the space to address these urgent concerns.


Saagar Enjeti: We DESERVE Biden On Rogan, But We Won’t Get It. Here’s Why

Saagar Enjeti analyzes whether the presidential debates will be free and fair between President Trump and Joe Biden, as mainstream reporters fail to ask Biden tough questions.

Why corporate America loves Donald Trump

American executives are betting that the president is good for business. Not in the long run

Most American elites believe that the Trump presidency is hurting their country. Foreign-policy mandarins are terrified that security alliances are being wrecked. Fiscal experts warn that borrowing is spiralling out of control. Scientists deplore the rejection of climate change. And some legal experts warn of a looming constitutional crisis.

.. Bosses reckon that the value of tax cuts, deregulation and potential trade concessions from China outweighs the hazy costs of weaker institutions and trade wars.

.. the investment surge is unlike any before—it is skewed towards tech giants, not firms with factories. When it comes to gauging the full costs of Mr Trump, America Inc is being short-sighted and sloppy.

.. The benefits for business of Mr Trump are clear, then: less tax and red tape, potential trade gains and a 6-8% uplift in earnings.

.. During the Obama years corporate America was convinced it was under siege when in fact, judged by the numbers, it was in a golden era, with average profits 31% above long-term levels.

Now bosses think they have entered a nirvana, when the reality is that the country’s system of commerce is lurching away from rules, openness and multilateral treaties towards arbitrariness, insularity and transient deals.

.. so far this month 200-odd listed American firms have discussed the financial impact of tariffs on their calls with investors. Over time, a mesh of distortions will build up.

.. American firms have $8trn of capital sunk abroad; foreign firms have $7trn in America; and there have been 15,000 inbound deals since 2008. The cost involved in monitoring all this activity could ultimately be vast. As America eschews global co-operation, its firms will also face more duplicative regulation abroad. Europe has already introduced new regimes this year for financial instruments and data.

.. The expense of re-regulating trade could even exceed the benefits of deregulation at home. That might be tolerable, were it not for the other big cost of the Trump era: unpredictability. At home the corporate-tax cuts will partly expire after 2022.

.. Bosses hope that the belligerence on trade is a ploy borrowed from “The Apprentice”, and that stable agreements will emerge. But imagine that America stitches up a deal with China and the bilateral trade deficit then fails to shrink, or Chinese firms cease buying American high-tech components as they become self-sufficient.

.. Another reason for the growing unpredictability is Mr Trump’s urge to show off his power with acts of pure political discretion.

  • He has just asked the postal service to raise delivery prices for Amazon, his bête noire and the world’s second-most valuable listed firm.
  • He could easily strike out in anger at other Silicon Valley firms—after all, they increasingly control the flow of political information.
  • He wants the fate of ZTE, a Chinese telecoms firm banned in America for sanctions violations, to turn on his personal whim.

.. When policy becomes a rolling negotiation, lobbying explodes. The less predictable business environment that results will raise the cost of capital.

.. Mr Trump expects wages to rise, but 85% of firms in the S&P 500 are forecast to expand margins by 2019.

.. Either shareholders, or workers and Mr Trump, are going to be disappointed.

.. In a downturn, American business may find that its fabled flexibility has been compromised because the politics of firing workers and slashing costs has become toxic.

.. American business may one day conclude that this was the moment when it booked all the benefits of the Trump era, while failing to account properly for the costs.

What Has Mitt Romney Learned?

Romney’s rhetoric on China and immigration was a more restrained version of Trump’s nationalist pitch, and here and there he tried to imitate Franklin Roosevelt’s promise, updated crudely by Trump, to be a traitor to his successful class.

.. the defining pitch of the Romney campaign was the tone-deaf “you built that,” which valorized entrepreneurs and ignored ordinary workers; the defining policy blueprint was a tax reform proposal that offered little or nothing to the middle class; and the defining gaffe was the famous “47 percent” line, in which Romney succumbed, before an audience of Richie Riches, to the Ayn Randian temptation to write off struggling Americans as losers.

.. in that failure lay the opportunity that Trump intuited — for a Republican candidate who would rhetorically reject and even run against the kind of corporation-first conservatism that Romney seemed to embody and embrace.

.. Trump has mostly turned his back on his own economic populism

The best of the current Republicans (the Paul Ryans, the Ben Sasses, the Mitt Romneys) have certain common features that should be appealing to the electorate. They seem to have the home life of the family man. They have the discipline and diligence of the organization kid. They have the looks of the pretty boy. Yet the public still rejects them, because the voters find their ideas even more unpleasant than Donald Trump’s odious personality.

.. But he could also perform a service by showing that he has learned something from watching Trumpism succeed where his own campaign failed — which would mean steering a different and more populist course than those NeverTrump Republicans who pine for a party of the purest libertarianism, and those OkayFineTrump Republicans who are happy now that Trump has given them their corporate tax cut.

.. Right now there is a small caucus in the Republican Party for a different way, for a conservatism that seeks to cure itself of Romney Disease by becoming genuinely pro-worker rather than waiting for a worse demagogue than Trump to come along.