LONDON—Google is disbanding a panel here to review its artificial-intelligence work in health care, people familiar with the matter say, as disagreements about its effectiveness dogged one of the tech industry’s highest-profile efforts to govern itself.
The Alphabet Inc. unit is struggling with how best to set guidelines for its sometimes-sensitive work in AI—the ability of computers to replicate tasks that previously only humans could do. The move also highlights the challenges Silicon Valley faces in setting up self-governance systems as governments around the world scrutinize issues ranging from privacy and consent to the growing influence of social media and screen addiction among children.
AI has recently become a target in that stepped-up push for oversight as some sensitive decision-making—including employee recruitment, health-care diagnoses and law-enforcement profiling—is increasingly being outsourced to algorithms. The European Commission is proposing a set of ethical guidelines for AI, and researchers have urged companies to adopt similar rules. But industry efforts to conduct such oversight in-house have had mixed results.
The move also came amid disagreements between panel members and DeepMind, Google’s U.K.-based AI research unit, according to people familiar with the matter. Those differences centered on the review panel’s ability to access information about research and products, the binding power of its recommendations and the degree of independence that DeepMind could maintain from Google, these people said.
A spokeswoman for DeepMind’s health-care unit in the U.K. declined to comment specifically about the board’s deliberations, saying only that after a recent reorganization the company had found that the board, called the Independent Review Panel, was “unlikely to be the right structure in the future.”
Google bought DeepMind in 2014, promising it a degree of autonomy to pursue its research on artificial intelligence. In 2016, DeepMind set up a special unit called DeepMind Health to focus on health-care-related opportunities. At the same time, DeepMind co-founder Mustafa Suleyman unveiled a board of nine veterans of government and industry, drawn from the arts, sciences and technology sectors, to meet once a quarter and scrutinize its work with the U.K.’s publicly funded health service. Among its tasks, the group had to produce a public annual report.
Google said it would build the Streams app into an “AI-powered assistant for nurses and doctors everywhere.” That caused concern in public-health and privacy circles because of previous assurances from Google and DeepMind that the two wouldn’t share health records.
DeepMind Health was renamed Google Health, becoming part of an umbrella division uniting Google’s other health-focused units like health-tracking platform Google Fit and Verily, a life-sciences research arm.
Inside the review board, many directors felt blindsided, according to people familiar with the matter. Some directors complained they could have played a helpful role in explaining the change of control of the Streams app to the public if given earlier insight.
The review panel still plans to publish a final “lessons learned” report, according to a person familiar with the matter, which will make recommendations about how better to set up such boards in the future.
These views are shaped by the reality that women experience the internet differently, just as the experience of walking down a dark alley, or even a busy street, is different for women than it is for men. One Pew study found that women are far more likely than men to be sexually harassed online and to describe these interactions as extremely upsetting. The Department of Justice reports that about 75 percent of the victims of stalking and cyberstalking are women. And so women look over our shoulders online, just as we do in real life.
It isn’t just that real-life harassment also shows up online, it’s that the internet isn’t designed for women, even when the majority of users of some popular applications and platforms are women. In fact, some features of digital life have been constructed, intentionally or not, in ways that make women feel less safe.
For example, you can’t easily use Facebook’s WhatsApp messaging service without a phone number, which many women don’t want to share. Facebook’s chief executive, Mark Zuckerberg, has promised to build encrypted communication into all its platforms. Just as important is giving users the option to make their messages disappear, so that if a hostile ex somehow got into your phone there would be nothing to see.
Zuckerberg’s new privacy essay shows why Facebook needs to be broken up
Mark Zuckerberg doesn’t understand what privacy means—he can’t be trusted to define it for the rest of us.
By Konstantin Kakaes, March 7, 2019
In a letter published when his company went public in 2012, Mark Zuckerberg championed Facebook’s mission of making the world “more open and connected.” Businesses would become more authentic, human relationships stronger, and government more accountable. “A more open world is a better world,” he wrote.
Facebook’s CEO now claims to have had a major change of heart.
In “A Privacy-Focused Vision for Social Networking,” a 3,200-word essay that Zuckerberg posted to Facebook on March 6, he says he wants to “build a simpler platform that’s focused on privacy first.” In apparent surprise, he writes: “People increasingly also want to connect privately in the digital equivalent of the living room.”
Zuckerberg’s essay is a power grab disguised as an act of contrition. Read it carefully, and it’s impossible to escape the conclusion that if privacy is to be protected in any meaningful way, Facebook must be broken up.
Facebook grew so big, so quickly that it defies categorization. It is a newspaper. It is a post office and a telephone exchange. It is a forum for political debate, and it is a sports broadcaster. It’s a birthday-reminder service and a collective photo album. It is all of these things—and many others—combined, and so it is none of them.
Zuckerberg describes Facebook as a town square. It isn’t. Facebook is a company that brought in more than $55 billion in advertising revenue last year, with a 45% profit margin. This makes it one of the most profitable business ventures in human history. It must be understood as such.
Facebook has minted money because it has figured out how to commoditize privacy on a scale never before seen. A diminishment of privacy is its core product. Zuckerberg has made his money by performing a sort of arbitrage between how much privacy Facebook’s 2 billion users think they are giving up and how much he has been able to sell to advertisers. He says nothing of substance in his long essay about how he intends to keep his firm profitable in this supposed new era. That’s one reason to treat his Damascene moment with healthy skepticism.
“Frankly we don’t currently have a strong reputation for building privacy protective services,” Zuckerberg writes. But Facebook’s reputation is not the salient question: its business model is. If Facebook were to implement strong privacy protections across the board, it would have little left to sell to advertisers aside from the sheer size of its audience. Facebook might still make a lot of money, but it would make a lot less of it.
Zuckerberg’s proposal is a bait-and-switch. What he’s proposing is essentially a beefed-up version of WhatsApp. Some of the improvements might be worthwhile. Stronger encryption can indeed be useful, and a commitment to not building data centers in repressive countries is laudable, as far as it goes. Other principles that Zuckerberg puts forth would concentrate his monopoly power in worrisome ways. The new “platforms for private sharing” are not instead of Facebook’s current offering: they’re in addition to it. “Public social networks will continue to be very important in people’s lives,” he writes, an assertion he never squares with the vague claim that “interacting with your friends and family across the Facebook network will become a fundamentally more private experience.”
By narrowly construing privacy to be almost exclusively about end-to-end encryption that would prevent a would-be eavesdropper from intercepting communications, he manages to avoid having to think about Facebook’s weaknesses and missteps. Privacy is not just about keeping secrets. It’s also about how flows of information shape us as individuals and as a society. What we say to whom and why is a function of context. Social networks change that context, and in so doing they change the nature of privacy, in ways that are both good and bad.
Russian propagandists used Facebook to sway the 2016 American election, perhaps decisively. Myanmar’s military leaders used Facebook to incite an anti-Rohingya genocide. These are consequences of the ways in which Facebook has diminished privacy. They are not the result of failures of encryption.
“Privacy,” Zuckerberg writes, “gives people the freedom to be themselves.” This is true, but it is also incomplete. The self evolves over time. Privacy is important not simply because it allows us to be, but because it gives us space to become. As Georgetown University law professor Julie Cohen has written: “Conditions of diminished privacy also impair the capacity to innovate … Innovation requires room to tinker, and therefore thrives most fully in an environment that values and preserves spaces for tinkering.” If Facebook is constantly sending you push notifications, it diminishes the mental space you have available for tinkering and coming up with your own ideas. If Facebook bombards the gullible with misinformation, this too is an invasion of privacy. What has happened to privacy in the last couple of decades, and how to value it properly, are questions that are apparently beyond Zuckerberg’s ken.
He says Facebook is “committed to consulting with experts and discussing the best way forward,” and that it will make decisions “as openly and collaboratively as we can” because “many of these issues affect different parts of society.” But the flaw here is the centralized decision-making process. Even if Zuckerberg gets all the best advice that his billions can buy, the result is still deeply troubling. If his plan succeeds, it would mean that private communication between two individuals will be possible when Mark Zuckerberg decides that it ought to be, and impossible when he decides it ought not to be.
If that sounds alarmist, consider the principles that Zuckerberg laid out for Facebook’s new privacy focus. The most problematic of them is the way he discusses “interoperability.” Zuckerberg allows that people should have a choice between messaging services: some want to use Facebook Messenger, some prefer WhatsApp, and others like Instagram. It’s a hassle to use all of these, he says, so you should be able to send messages from one to the other.
But, he says, it would be dangerous to allow communications outside Facebook’s control, with users sending messages not subject to surveillance by Facebook’s “safety and security systems.” Which is to say we should be allowed to use any messaging service we like, so long as it’s controlled by Facebook for our protection. Zuckerberg is arguing for tighter and tighter integration of Facebook’s various properties.
Monopoly power is problematic even for companies that just make a lot of money selling widgets: it allows them to exert undue influence on regulators and to rip off consumers. But it’s particularly worrisome for a company like Facebook, whose product is information.
This is why it should be broken up. This wouldn’t answer every difficult question that Facebook’s existence raises. It isn’t easy to figure out how to protect free speech while limiting hate speech and deliberate misinformation campaigns, for example. But breaking up Facebook would provide space to come up with solutions that make sense to society as a whole, rather than to Zuckerberg and Facebook’s other shareholders.
At a minimum, splitting WhatsApp and Instagram from Facebook is a necessary first step. This makes the company smaller, and therefore less powerful when it comes to negotiating with other businesses and with regulators. Monopolies, as Louis Brandeis pointed out a century ago, and as Columbia University law professor Tim Wu, journalist Franklin Foer, and others have underscored more recently, simply accrue too much political and economic power to allow for the democratic process to find a balance in how to tackle issues like privacy.
Tellingly, Zuckerberg’s power has grown so great that he feels no need to hide his ambitions. “We can,” he writes, “create platforms for private sharing that could be even more important to people than the platforms we’ve already built to help people share and connect more openly.”
Only if we let him.
I’ve been reminded of this ancient history a lot in the last year or two as I’ve looked at news around abuse and hostile state activity on Facebook, YouTube and other social platforms, because much like the Microsoft macro viruses, the ‘bad actors’ on Facebook did things that were in the manual. They didn’t prise open a locked window at the back of the building – they knocked on the front door and walked in. They did things that you were supposed to be able to do, but combined them in an order and with malign intent that hadn’t really been anticipated.
It’s also interesting to compare the public discussion of Microsoft and of Facebook before these events. In the 1990s, Microsoft was the ‘evil empire’, and a lot of the narrative within tech focused on how it should be more open, make it easier for people to develop software that worked with the Office monopoly, and make it easier to move information in and out of its products. Microsoft was ‘evil’ if it did anything to make life harder for developers. Unfortunately, whatever you thought of this narrative, it pointed in the wrong direction when it came to this use case. Here, Microsoft was too open, not too closed.
Equally, in the last 10 years the criticisms of Facebook pointed the same way: that it is too hard to get your information out and too hard for researchers to pull information from across the platform. People have argued that Facebook was too restrictive in how third-party developers could use the platform. And people have objected to Facebook’s attempts to enforce a single real identity for each account. As with Microsoft, there may well have been justice in all of these arguments, but also as with Microsoft, they pointed in the wrong direction when it came to this particular scenario. For the Internet Research Agency, it was too easy to develop for Facebook, too easy to get data out, and too easy to change your identity. The walled garden wasn’t walled enough.
.. Conceptually, this is almost exactly what Facebook has done: try to remove existing opportunities for abuse and avoid creating new ones, and scan for bad actors.
- Remove openings for abuse: Microsoft closed down APIs and looked for vulnerabilities; Facebook has closed down APIs and looked for vulnerabilities.
- Scan for bad behavior: Microsoft used virus and malware scanners; Facebook uses human moderation.
(It’s worth noting that these steps were precisely what people had previously insisted was evil – Microsoft deciding what code you can run on your own computer and what APIs developers can use, and Facebook deciding (people demanding that Facebook decide) who and what it distributes.)
- .. If there is no data stored on your computer then compromising the computer doesn’t get an attacker much.
- An application can’t steal your data if it’s sandboxed and can’t read other applications’ data.
- An application can’t run in the background and steal your passwords if applications can’t run in the background.
- And you can’t trick a user into installing a bad app if there are no apps.
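The architectural shift these bullets describe can be illustrated with a toy sketch: the operating system mediates every data access, so one app simply cannot reach another app's storage. All class and function names here are hypothetical, for illustration only, not any real OS API.

```python
# Toy illustration of app sandboxing: each installed app gets an isolated
# store, and the "OS" checks every read, so one app cannot read another's data.

class SandboxedOS:
    def __init__(self):
        self._stores = {}  # per-app private storage, keyed by app name

    def install(self, app_name):
        self._stores[app_name] = {}

    def write(self, app_name, key, value):
        self._stores[app_name][key] = value

    def read(self, caller, owner, key):
        # The core sandbox rule: an app may only read its own store.
        if caller != owner:
            raise PermissionError(f"{caller} may not read {owner}'s data")
        return self._stores[owner][key]

os_ = SandboxedOS()
os_.install("mail")
os_.install("game")
os_.write("mail", "password", "hunter2")

print(os_.read("mail", "mail", "password"))  # the owner can read its own data
try:
    os_.read("game", "mail", "password")     # another app cannot
except PermissionError as e:
    print("blocked:", e)
```

The point of the sketch is that the attack is removed structurally rather than detected after the fact: there is no scanner deciding whether the "game" app looks malicious, because the read path it would need does not exist.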
Of course, human ingenuity is infinite, and this change just led to the creation of new attack models, most obviously phishing, but either way, none of this had much to do with Microsoft. We ‘solved’ viruses by moving to new architectures that removed the mechanics that viruses need, and where Microsoft wasn’t present.
.. In other words, where Microsoft put better locks and a motion sensor on the windows, the world is moving to a model where the windows are 200 feet off the ground and don’t open.
.. Much like moving from Windows to cloud and ChromeOS, you could see this as an attempt to remove the problem rather than patch it.
- Russians can’t go viral in your newsfeed if there is no newsfeed.
- ‘Researchers’ can’t scrape your data if Facebook doesn’t have your data. You solve the problem by making it irrelevant.
This is one way to solve the problem by changing the core mechanics, but there are others. For example, Instagram does have a one-to-many feed but does not suggest content from people you don’t yourself follow in the main feed and does not allow you to repost into your friends’ feeds. There might be anti-vax content in your feed, but one of your actual friends has to have decided to share it with you. Meanwhile, problems such as the spread of dangerous rumours in India rely on messaging rather than sharing – messaging isn’t a panacea.
Indeed, as it stands Mr Zuckerberg’s memo raises as many questions as it answers – most obviously, how does advertising work? Is there advertising in messaging, and if so, how is it targeted? Encryption means Facebook doesn’t know what you’re talking about, but the Facebook apps on your phone necessarily would know (before they encrypt it), so does targeting happen locally? Meanwhile, encryption in particular poses problems for tackling other kinds of abuse: how do you help law enforcement deal with child exploitation if you can’t read the exploiters’ messages (the memo explicitly talks about this as a challenge)? Where does Facebook’s Blockchain project sit in all of this?
There are lots of big questions, though of course there would also have been lots of questions if in 2002 you’d said that all enterprise software would go to the cloud. But the difference here is that Facebook is trying (or talking about trying) to do the judo move itself, and to make a fundamental architectural change that Microsoft could not.