It’s Time to Break Up Facebook (Chris Hughes)

Mark’s influence is staggering, far beyond that of anyone else in the private sector or in government. He controls three core communications platforms — Facebook, Instagram and WhatsApp — that billions of people use every day. Facebook’s board works more like an advisory committee than an overseer, because Mark controls around 60 percent of voting shares. Mark alone can decide how to configure Facebook’s algorithms to determine what people see in their News Feeds, what privacy settings they can use and even which messages get delivered. He sets the rules for how to distinguish violent and incendiary speech from the merely offensive, and he can choose to shut down a competitor by acquiring, blocking or copying it.

Mark is a good, kind person. But I’m angry that his focus on growth led him to sacrifice security and civility for clicks. I’m disappointed in myself and the early Facebook team for not thinking more about how the News Feed algorithm could change our culture, influence elections and empower nationalist leaders. And I’m worried that Mark has surrounded himself with a team that reinforces his beliefs instead of challenging them.

The government must hold Mark accountable. For too long, lawmakers have marveled at Facebook’s explosive growth and overlooked their responsibility to ensure that Americans are protected and markets are competitive. Any day now, the Federal Trade Commission is expected to impose a $5 billion fine on the company, but that is not enough; nor is Facebook’s offer to appoint some kind of privacy czar. After Mark’s congressional testimony last year, there should have been calls for him to truly reckon with his mistakes. Instead the legislators who questioned him were derided as too old and out of touch to understand how tech works. That’s the impression Mark wanted Americans to have, because it means little will change.

Facebook’s dominance is not an accident of history. The company’s strategy was to beat every competitor in plain view, and regulators and the government tacitly — and at times explicitly — approved. In one of the government’s few attempts to rein in the company, the F.T.C. in 2011 issued a consent decree requiring that Facebook not share any private information beyond what users had already agreed to. Facebook largely ignored the decree. Last month, the day after the company predicted in an earnings call that it would need to pay up to $5 billion as a penalty for its negligence — a slap on the wrist — Facebook’s shares surged 7 percent, adding $30 billion to its value, six times the size of the fine.

The F.T.C.’s biggest mistake was to allow Facebook to acquire Instagram and WhatsApp. In 2012, the newer platforms were nipping at Facebook’s heels because they had been built for the smartphone, where Facebook was still struggling to gain traction. Mark responded by buying them, and the F.T.C. approved.

Neither Instagram nor WhatsApp had any meaningful revenue, but both were incredibly popular. The Instagram acquisition guaranteed Facebook would preserve its dominance in photo networking, and WhatsApp gave it a new entry into mobile real-time messaging. Now, the founders of Instagram and WhatsApp have left the company after clashing with Mark over his management of their platforms. But their former properties remain Facebook’s, driving much of its recent growth.

When it hasn’t acquired its way to dominance, Facebook has used its monopoly position to shut out competing companies or has copied their technology.

The News Feed algorithm reportedly prioritized videos created through Facebook over videos from competitors, like YouTube and Vimeo. In 2013, Twitter introduced a video network called Vine that featured six-second videos. That same day, Facebook blocked Vine from hosting a tool that let its users search for their Facebook friends while on the new network. The decision hobbled Vine, which shut down four years later.

Snapchat posed a different threat. Snapchat’s Stories and impermanent messaging options made it an attractive alternative to Facebook and Instagram. And unlike Vine, Snapchat wasn’t interfacing with the Facebook ecosystem; there was no obvious way to handicap the company or shut it out. So Facebook simply copied it.

Facebook’s version of Snapchat’s stories and disappearing messages proved wildly successful, at Snapchat’s expense. At an all-hands meeting in 2016, Mark told Facebook employees not to let their pride get in the way of giving users what they want. According to Wired magazine, “Zuckerberg’s message became an informal slogan at Facebook: ‘Don’t be too proud to copy.’”

(There is little regulators can do about this tactic: Snapchat patented its “ephemeral message galleries,” but patent protection does not extend to the abstract concept itself.)

As a result of all this, would-be competitors can’t raise the money to take on Facebook. Investors realize that if a company gets traction, Facebook will copy its innovations, shut it down or acquire it for a relatively modest sum. So despite an extended economic expansion, increasing interest in high-tech start-ups, an explosion of venture capital and growing public distaste for Facebook, no major social networking company has been founded since the fall of 2011.

As markets become more concentrated, the number of new start-up businesses declines. This holds true in other high-tech areas dominated by single companies, like search (controlled by Google) and e-commerce (taken over by Amazon). Meanwhile, there has been plenty of innovation in areas where there is no monopolistic domination, such as in workplace productivity (Slack, Trello, Asana), urban transportation (Lyft, Uber, Lime, Bird) and cryptocurrency exchanges (Ripple, Coinbase, Circle).

I don’t blame Mark for his quest for domination. He has demonstrated nothing more nefarious than the virtuous hustle of a talented entrepreneur. Yet he has created a leviathan that crowds out entrepreneurship and restricts consumer choice. It’s on our government to ensure that we never lose the magic of the invisible hand. How did we allow this to happen?

Since the 1970s, courts have become increasingly hesitant to break up companies or block mergers unless consumers are paying inflated prices that would be lower in a competitive market. But a narrow reliance on whether or not consumers have experienced price gouging fails to take into account the full cost of market domination. It doesn’t recognize that we also want markets to be competitive to encourage innovation and to hold power in check. And it is out of step with the history of antitrust law. Two of the last major antitrust suits, against AT&T and IBM in the 1980s, were grounded in the argument that they had used their size to stifle innovation and crush competition.

As the Columbia law professor Tim Wu writes, “It is a disservice to the laws and their intent to retain such a laserlike focus on price effects as the measure of all that antitrust was meant to do.”

Facebook is the perfect case on which to reverse course, precisely because Facebook makes its money from targeted advertising, meaning users do not pay to use the service. But it is not actually free, and it certainly isn’t harmless.

Facebook’s business model is built on capturing as much of our attention as possible to encourage people to create and share more information about who they are and who they want to be. We pay for Facebook with our data and our attention, and by either measure it doesn’t come cheap.

I was on the original News Feed team (my name is on the patent), and that product now gets billions of hours of attention and pulls in unknowable amounts of data each year. The average Facebook user spends an hour a day on the platform; Instagram users spend 53 minutes a day scrolling through pictures and videos. They create immense amounts of data — not just likes and dislikes, but how many seconds they watch a particular video — that Facebook uses to refine its targeted advertising. Facebook also collects data from partner companies and apps, without most users knowing about it, according to testing by The Wall Street Journal.

Some days, lying on the floor next to my 1-year-old son as he plays with his dinosaurs, I catch myself scrolling through Instagram, waiting to see if the next image will be more beautiful than the last. What am I doing? I know it’s not good for me, or for my son, and yet I do it anyway.

The choice is mine, but it doesn’t feel like a choice. Facebook seeps into every corner of our lives to capture as much of our attention and data as possible and, without any alternative, we make the trade.

The vibrant marketplace that once drove Facebook and other social media companies to compete to come up with better products has virtually disappeared. This means there’s less chance of start-ups developing healthier, less exploitative social media platforms. It also means less accountability on issues like privacy.

Just last month, Facebook seemingly tried to bury news that it had stored tens of millions of user passwords in plain text format, which thousands of Facebook employees could see. Competition alone wouldn’t necessarily spur privacy protection — regulation is required to ensure accountability — but Facebook’s lock on the market guarantees that users can’t protest by moving to alternative platforms.

The most problematic aspect of Facebook’s power is Mark’s unilateral control over speech. There is no precedent for his ability to monitor, organize and even censor the conversations of two billion people.

Facebook engineers write algorithms that select which users’ comments or experiences end up displayed in the News Feeds of friends and family. These rules are proprietary and so complex that many Facebook employees themselves don’t understand them.

In 2014, the rules favored curiosity-inducing “clickbait” headlines. In 2016, they enabled the spread of fringe political views and fake news, which made it easier for Russian actors to manipulate the American electorate. In January 2018, Mark announced that the algorithms would favor non-news content shared by friends and news from “trustworthy” sources, which his engineers interpreted — to the confusion of many — as a boost for anything in the category of “politics, crime, tragedy.”

Facebook has responded to many of the criticisms of how it manages speech by hiring thousands of contractors to enforce the rules that Mark and senior executives develop. After a few weeks of training, these contractors decide which videos count as hate speech or free speech, which images are erotic and which are simply artistic, and which live streams are too violent to be broadcast. (The Verge reported that some of these moderators, working through a vendor in Arizona, were paid $28,800 a year, got limited breaks and faced significant mental health risks.)

As if Facebook’s opaque algorithms weren’t enough, last year we learned that Facebook executives had permanently deleted their own messages from the platform, erasing them from the inboxes of recipients; the justification was corporate security concerns. When I look at my years of Facebook messages with Mark now, it’s just a long stream of my own light-blue comments, clearly written in response to words he had once sent me. (Facebook now offers a limited version of this feature to all users.)

The most extreme example of Facebook manipulating speech happened in Myanmar in late 2017. Mark said in a Vox interview that he personally made the decision to delete the private messages of Facebook users who were encouraging genocide there. “I remember, one Saturday morning, I got a phone call,” he said, “and we detected that people were trying to spread sensational messages through — it was Facebook Messenger in this case — to each side of the conflict, basically telling the Muslims, ‘Hey, there’s about to be an uprising of the Buddhists, so make sure that you are armed and go to this place.’ And then the same thing on the other side.”

Mark made a call: “We stop those messages from going through.” Most people would agree with his decision, but it’s deeply troubling that he made it with no accountability to any independent authority or government. Facebook could, in theory, delete en masse the messages of Americans, too, if its leadership decided it didn’t like them.

Mark used to insist that Facebook was just a “social utility,” a neutral platform for people to communicate what they wished. Now he recognizes that Facebook is both a platform and a publisher and that it is inevitably making decisions about values. The company’s own lawyers have argued in court that Facebook is a publisher and thus entitled to First Amendment protection.

No one at Facebook headquarters is choosing what single news story everyone in America wakes up to, of course. But they do decide whether it will be an article from a reputable outlet or a clip from “The Daily Show,” a photo from a friend’s wedding or an incendiary call to kill others.

Mark knows that this is too much power and is pursuing a twofold strategy to mitigate it.

  1. He is pivoting Facebook’s focus toward encouraging more private, encrypted messaging that Facebook’s employees can’t see, let alone control.
  2. He is hoping for friendly oversight from regulators and other industry executives.

Late last year, he proposed an independent commission to handle difficult content moderation decisions by social media platforms. It would afford an independent check, Mark argued, on Facebook’s decisions, and users could appeal to it if they disagreed. But its decisions would not have the force of law, since companies would voluntarily participate.

In an op-ed essay in The Washington Post in March, he wrote, “Lawmakers often tell me we have too much power over speech, and I agree.” And he went even further than before, calling for more government regulation — not just on speech, but also on privacy and interoperability, the ability of consumers to seamlessly leave one network and transfer their profiles, friend connections, photos and other data to another.

I don’t think these proposals were made in bad faith. But I do think they’re an attempt to head off the argument that regulators need to go further and break up the company. Facebook isn’t afraid of a few more rules. It’s afraid of an antitrust case and of the kind of accountability that real government oversight would bring.

We don’t expect calcified rules or voluntary commissions to work to regulate drug companies, health care companies, car manufacturers or credit card providers. Agencies oversee these industries to ensure that the private market works for the public good. In these cases, we all understand that government isn’t an external force meddling in an organic market; it’s what makes a dynamic and fair market possible in the first place. This should be just as true for social networking as it is for air travel or pharmaceuticals.

In the summer of 2006, Yahoo offered us $1 billion for Facebook. I desperately wanted Mark to say yes. Even my small slice of the company would have made me a millionaire several times over. For a 22-year-old scholarship kid from small-town North Carolina, that kind of money was unimaginable. I wasn’t alone — just about every other person at the company wanted the same.

It was taboo to talk about it openly, but I finally asked Mark when we had a moment alone, “How are you feeling about Yahoo?” I got a shrug and a one-line answer: “I just don’t know if I want to work for Terry Semel,” Yahoo’s chief executive.

Outside of a couple of gigs in college, Mark had never had a real boss and seemed entirely uninterested in the prospect. I didn’t like the idea much myself, but I would have traded having a boss for several million dollars any day of the week. Mark’s drive was infinitely stronger. Domination meant domination, and the hustle was just too delicious.

Mark may never have a boss, but he needs to have some check on his power. The American government needs to do two things: break up Facebook’s monopoly and regulate the company to make it more accountable to the American people.

First, Facebook should be separated into multiple companies. The F.T.C., in conjunction with the Justice Department, should enforce antitrust laws by undoing the Instagram and WhatsApp acquisitions and banning future acquisitions for several years. The F.T.C. should have blocked these mergers, but it’s not too late to act. There is precedent for correcting bad decisions — as recently as 2009, Whole Foods settled antitrust complaints by selling off the Wild Oats brand and stores that it had bought a few years earlier.

There is some evidence that we may be headed in this direction. Senator Elizabeth Warren has called for reversing the Facebook mergers, and in February, the F.T.C. announced the creation of a task force to monitor competition among tech companies and review previous mergers.

How would a breakup work? Facebook would have a brief period to spin off the Instagram and WhatsApp businesses, and the three would become distinct companies, most likely publicly traded. Facebook shareholders would initially hold stock in the new companies, although Mark and other executives would probably be required to divest their management shares.

Until recently, WhatsApp and Instagram were administered as independent platforms inside the parent company, so that should make the process easier. But time is of the essence: Facebook is working quickly to integrate the three, which would make it harder for the F.T.C. to split them up.

Some economists are skeptical that breaking up Facebook would spur that much competition, because Facebook, they say, is a “natural” monopoly. Natural monopolies have emerged in areas like water systems and the electrical grid, where the price of entering the business is very high — because you have to lay pipes or electrical lines — but it gets cheaper and cheaper to add each additional customer. In other words, the monopoly arises naturally from the circumstances of the business, rather than a company’s illegal maneuvering. In addition, defenders of natural monopolies often make the case that they benefit consumers because they are able to provide services more cheaply than anyone else.

Facebook is indeed more valuable when there are more people on it: There are more connections for a user to make and more content to be shared. But the cost of entering the social network business is not that high. And unlike with pipes and electricity, there is no good argument that the country benefits from having only one dominant social networking company.

Still others worry that the breakup of Facebook or other American tech companies could be a national security problem. Because advancements in artificial intelligence require immense amounts of data and computing power, only large companies like Facebook, Google and Amazon can afford these investments, they say. If American companies become smaller, the Chinese will outpace us.

While serious, these concerns do not justify inaction. Even after a breakup, Facebook would be a hugely profitable business with billions to invest in new technologies — and a more competitive market would only encourage those investments. If the Chinese did pull ahead, our government could invest in research and development and pursue tactical trade policy, just as it is doing today to hold China’s 5G technology at bay.

The cost of breaking up Facebook would be next to zero for the government, and lots of people stand to gain economically. A ban on short-term acquisitions would ensure that competitors, and the investors who take a bet on them, would have the space to flourish. Digital advertisers would suddenly have multiple companies vying for their dollars.

Even Facebook shareholders would probably benefit, as shareholders often do in the years after a company’s split. The value of the companies that made up Standard Oil doubled within a year of its being dismantled and had increased by fivefold a few years later. Ten years after the 1984 breakup of AT&T, the value of its successor companies had tripled.

But the biggest winners would be the American people. Imagine a competitive market in which they could choose among one network that offered higher privacy standards, another that cost a fee to join but had little advertising and another that would allow users to customize and tweak their feeds as they saw fit.

No one knows exactly what Facebook’s competitors would offer to differentiate themselves. That’s exactly the point.

The Justice Department faced similar questions of social costs and benefits with AT&T in the 1950s. AT&T had a monopoly on phone services and telecommunications equipment. The government filed suit under antitrust laws, and the case ended with a consent decree that required AT&T to release its patents and refrain from expanding into the nascent computer industry. This resulted in an explosion of innovation, greatly increasing follow-on patents and leading to the development of the semiconductor and modern computing. We would most likely not have iPhones or laptops without the competitive markets that antitrust action ushered in.

Adam Smith was right: Competition spurs growth and innovation.

Just breaking up Facebook is not enough. We need a new agency, empowered by Congress to regulate tech companies. Its first mandate should be to protect privacy.

The Europeans have made headway on privacy with the General Data Protection Regulation, a law that guarantees users a minimal level of protection. A landmark privacy bill in the United States should specify exactly what control Americans have over their digital information, require clearer disclosure to users and provide enough flexibility to the agency to exercise effective oversight over time. The agency should also be charged with guaranteeing basic interoperability across platforms.

Finally, the agency should create guidelines for acceptable speech on social media. This idea may seem un-American — we would never stand for a government agency censoring speech. But we already have limits on yelling “fire” in a crowded theater, child pornography, speech intended to provoke violence and false statements to manipulate stock prices.

We will have to create similar standards that tech companies can use. These standards should of course be subject to the review of the courts, just as any other limits on speech are. But there is no constitutional right to harass others or live-stream violence.

These are difficult challenges. I worry that government regulators will not be able to keep up with the pace of digital innovation. I worry that more competition in social networking might lead to a conservative Facebook and a liberal one, or that newer social networks might be less secure if government regulation is weak. But sticking with the status quo would be worse: If we don’t have public servants shaping these policies, corporations will.

Some people doubt that an effort to break up Facebook would win in the courts, given the hostility on the federal bench to antitrust action, or that this divided Congress would ever be able to muster enough consensus to create a regulatory agency for social media.

But even if breakup and regulation aren’t immediately successful, simply pushing for them will bring more oversight. The government’s case against Microsoft — that it illegally used its market power in operating systems to force its customers to use its web browser, Internet Explorer — ended in 2001 when George W. Bush’s administration abandoned its effort to break up the company. Yet that prosecution helped rein in Microsoft’s ambitions to dominate the early web.

Similarly, the Justice Department’s 1970s suit accusing IBM of illegally maintaining its monopoly on computer sales ended in a stalemate. But along the way, IBM changed many of its behaviors. It stopped bundling its hardware and software, chose an extremely open design for the operating system in its personal computers and did not exercise undue control over its suppliers. Professor Wu has written that this “policeman at the elbow” led IBM to steer clear “of anything close to anticompetitive conduct, for fear of adding to the case against it.”

We can expect the same from even an unsuccessful suit against Facebook.

Finally, an aggressive case against Facebook would persuade other behemoths like Google and Amazon to think twice about stifling competition in their own sectors, out of fear that they could be next. If the government were to use this moment to resurrect an effective competition standard that takes a broader view of the full cost of “free” products, it could affect a whole host of industries.

The alternative is bleak. If we do not take action, Facebook’s monopoly will become even more entrenched. With much of the world’s personal communications in hand, it can mine that data for patterns and trends, giving it an advantage over competitors for decades to come.

I take responsibility for not sounding the alarm earlier. Don Graham, a former Facebook board member, has accused those who criticize the company now as having “all the courage of the last man leaping on the pile at a football game.” The financial rewards I reaped from working at Facebook radically changed the trajectory of my life, and even after I cashed out, I watched in awe as the company grew. It took the 2016 election fallout and Cambridge Analytica to awaken me to the dangers of Facebook’s monopoly. But anyone suggesting that Facebook is akin to a pinned football player misrepresents its resilience and power.

An era of accountability for Facebook and other monopolies may be beginning. Collective anger is growing, and a new cohort of leaders has begun to emerge. On Capitol Hill, Representative David Cicilline has taken a special interest in checking the power of monopolies, and Senators Amy Klobuchar and Ted Cruz have joined Senator Warren in calling for more oversight. Economists like Jason Furman, a former chairman of the Council of Economic Advisers, are speaking out about monopolies, and a host of legal scholars like Lina Khan, Barry Lynn and Ganesh Sitaraman are plotting a way forward.

This movement of public servants, scholars and activists deserves our support. Mark Zuckerberg cannot fix Facebook, but our government can.

Facebook Co-Founder Chris Hughes Says Company Should Be Broken Up

In a nearly 6,000-word opinion essay published online Thursday in The New York Times, Mr. Hughes said the Facebook chief executive has gained power that is both “unprecedented and un-American.”

In 2017, Sean Parker, Facebook’s founding president, told Axios that the platform was designed around social validation.

Chamath Palihapitiya, the company’s former vice president of growth, took a harsher tone in a talk at Stanford University, saying “short-term, dopamine-driven feedback loops that we have created are destroying how society works.” He later softened his comments after being rebuked by Facebook.

In his essay, Mr. Hughes said he hasn’t seen Mr. Zuckerberg in person in nearly two years. He said his former Harvard classmate is a “good, kind person” whom the U.S. government needs to hold more accountable for the immense power Facebook wields.

“For too long, lawmakers have marveled at Facebook’s explosive growth and overlooked their responsibility to ensure that Americans are protected and markets are competitive,” Mr. Hughes wrote.

Zuckerberg’s new privacy essay shows why Facebook needs to be broken up

Mark Zuckerberg doesn’t understand what privacy means — he can’t be trusted to define it for the rest of us

by Konstantin Kakaes March 7, 2019

In a letter published when his company went public in 2012, Mark Zuckerberg championed Facebook’s mission of making the world “more open and connected.” Businesses would become more authentic, human relationships stronger, and government more accountable. “A more open world is a better world,” he wrote.

Facebook’s CEO now claims to have had a major change of heart.

In “A Privacy-Focused Vision for Social Networking,” a 3,200-word essay that Zuckerberg posted to Facebook on March 6, he says he wants to “build a simpler platform that’s focused on privacy first.” In apparent surprise, he writes: “People increasingly also want to connect privately in the digital equivalent of the living room.”

Zuckerberg’s essay is a power grab disguised as an act of contrition. Read it carefully, and it’s impossible to escape the conclusion that if privacy is to be protected in any meaningful way, Facebook must be broken up.

Facebook grew so big, so quickly that it defies categorization. It is a newspaper. It is a post office and a telephone exchange. It is a forum for political debate, and it is a sports broadcaster. It’s a birthday-reminder service and a collective photo album. It is all of these things—and many others—combined, and so it is none of them.

Zuckerberg describes Facebook as a town square. It isn’t. Facebook is a company that brought in more than $55 billion in advertising revenue last year, with a 45% profit margin. This makes it one of the most profitable business ventures in human history. It must be understood as such.

Facebook has minted money because it has figured out how to commoditize privacy on a scale never before seen. A diminishment of privacy is its core product. Zuckerberg has made his money by performing a sort of arbitrage between how much privacy Facebook’s 2 billion users think they are giving up and how much he has been able to sell to advertisers. He says nothing of substance in his long essay about how he intends to keep his firm profitable in this supposed new era. That’s one reason to treat his Damascene moment with healthy skepticism.

“Frankly we don’t currently have a strong reputation for building privacy protective services,” Zuckerberg writes. But Facebook’s reputation is not the salient question: its business model is. If Facebook were to implement strong privacy protections across the board, it would have little left to sell to advertisers aside from the sheer size of its audience. Facebook might still make a lot of money, but it would make a lot less of it.

Zuckerberg’s proposal is a bait-and-switch. What he’s proposing is essentially a beefed-up version of WhatsApp. Some of the improvements might be worthwhile. Stronger encryption can indeed be useful, and a commitment to not building data centers in repressive countries is laudable, as far as it goes. Other principles that Zuckerberg puts forth would concentrate his monopoly power in worrisome ways. The new “platforms for private sharing” are not instead of Facebook’s current offering: they’re in addition to it. “Public social networks will continue to be very important in people’s lives,” he writes, an assertion he never squares with the vague claim that “interacting with your friends and family across the Facebook network will become a fundamentally more private experience.”

By narrowly construing privacy to be almost exclusively about end-to-end encryption that would prevent a would-be eavesdropper from intercepting communications, he manages to avoid having to think about Facebook’s weaknesses and missteps. Privacy is not just about keeping secrets. It’s also about how flows of information shape us as individuals and as a society. What we say to whom and why is a function of context. Social networks change that context, and in so doing they change the nature of privacy, in ways that are both good and bad.

Russian propagandists used Facebook to sway the 2016 American election, perhaps decisively. Myanmar’s military leaders used Facebook to incite an anti-Rohingya genocide. These are consequences of the ways in which Facebook has diminished privacy. They are not the result of failures of encryption.

“Privacy,” Zuckerberg writes, “gives people the freedom to be themselves.” This is true, but it is also incomplete. The self evolves over time. Privacy is important not simply because it allows us to be, but because it gives us space to become. As Georgetown University law professor Julie Cohen has written: “Conditions of diminished privacy also impair the capacity to innovate … Innovation requires room to tinker, and therefore thrives most fully in an environment that values and preserves spaces for tinkering.” If Facebook is constantly sending you push notifications, it diminishes the mental space you have available for tinkering and coming up with your own ideas. If Facebook bombards the gullible with misinformation, this too is an invasion of privacy. What has happened to privacy in the last couple of decades, and how to value it properly, are questions that are apparently beyond Zuckerberg’s ken.

He says Facebook is “committed to consulting with experts and discussing the best way forward,” and that it will make decisions “as openly and collaboratively as we can” because “many of these issues affect different parts of society.” But the flaw here is the centralized decision-making process. Even if Zuckerberg gets all the best advice that his billions can buy, the result is still deeply troubling. If his plan succeeds, it would mean that private communication between two individuals will be possible when Mark Zuckerberg decides that it ought to be, and impossible when he decides it ought not to be.

If that sounds alarmist, consider the principles that Zuckerberg laid out for Facebook’s new privacy focus. The most problematic of them is the way he discusses “interoperability.” Zuckerberg allows that people should have a choice between messaging services: some want to use Facebook Messenger, some prefer WhatsApp, and others like Instagram. It’s a hassle to use all of these, he says, so you should be able to send messages from one to the other.

But allowing communications that are outside Facebook’s control, he says, would be dangerous if users were allowed to send messages not subject to surveillance by Facebook’s “safety and security systems.” Which is to say we should be allowed to use any messaging service we like, so long as it’s controlled by Facebook for our protection. Zuckerberg is arguing for tighter and tighter integration of Facebook’s various properties.

Monopoly power is problematic even for companies that just make a lot of money selling widgets: it allows them to exert undue influence on regulators and to rip off consumers. But it’s particularly worrisome for a company like Facebook, whose product is information.

This is why it should be broken up. This wouldn’t answer every difficult question that Facebook’s existence raises. It isn’t easy to figure out how to protect free speech while limiting hate speech and deliberate misinformation campaigns, for example. But breaking up Facebook would provide space to come up with solutions that make sense to society as a whole, rather than to Zuckerberg and Facebook’s other shareholders.

At a minimum, splitting WhatsApp and Instagram from Facebook is a necessary first step. This would make the company smaller, and therefore less powerful when it comes to negotiating with other businesses and with regulators. Monopolies, as Louis Brandeis pointed out a century ago, and as Columbia University law professor Tim Wu, journalist Franklin Foer, and others have underscored more recently, simply accrue too much political and economic power to allow for the democratic process to find a balance in how to tackle issues like privacy.

Tellingly, Zuckerberg’s power has grown so great that he feels no need to hide his ambitions. “We can,” he writes, “create platforms for private sharing that could be even more important to people than the platforms we’ve already built to help people share and connect more openly.”

Only if we let him.

The Ethical Dilemma Facing Silicon Valley’s Next Generation

Stanford has established itself as the epicenter of computer science, and a farm system for the tech giants. Following major scandals at Facebook, Google, and others, how is the university coming to grips with a world in which many of its students’ dream jobs are now vilified?

At Stanford University’s business school, above the stage where Elizabeth Holmes once regurgitated the myths of Silicon Valley, there now hangs a whistle splattered in blood. More than 500 people have gathered to hear the true story of Theranos, the $9 billion blood-testing company Holmes launched in 2004 as a Stanford dropout with the help of one of the school’s famed chemical engineering professors.

When Holmes was weaving the elaborate lies that ultimately led to the dissolution of her company, she leaned heavily on tech truisms that treat dogged pursuit of market domination as a virtue. “The minute that you have a backup plan, you’ve admitted that you’re not going to succeed,” she said onstage in 2015. But the whistleblowers Tyler Shultz and Erika Cheung, who faced legal threats from Theranos for speaking out, push back against the idea of pursuing a high-minded vision at all costs. “We don’t know how to handle new technologies anymore,” Cheung says, “and we don’t know the consequences necessarily that they’ll have.”

The words resonate in the jam-packed auditorium, where students line up afterward to nab selfies with and autographs from the whistleblowers. Kendall Costello, a junior at Stanford, idolized Holmes in high school and imagined working for Theranos one day. Now she’s more interested in learning how to regulate tech than building the next product that promises to change the world. “I really aspired to kind of be like her in a sense,” Costello says. “Then two years later, in seeing her whole empire crumble around her, in addition to other scandals like Facebook’s Cambridge Analytica and all these things that are coming forward, I was just kind of disillusioned.”

But the endless barrage of negative news in tech, ranging from Facebook fueling propaganda campaigns by Russian trolls to Amazon selling surveillance software to governments, has forced Stanford to reevaluate its role in shaping the Valley’s future leaders. Students are reconsidering whether working at Google or Facebook is landing a dream job or selling out to craven corporate interests. Professors are revamping courses to address the ethical challenges tech companies are grappling with right now. And university president Marc Tessier-Lavigne has made educating students on the societal impacts of technology a tentpole of his long-term strategic plan.

As tech comes to dominate an ever-expanding portion of our daily lives, Stanford’s role as an educator of the industry’s engineers and a financier of its startups grows increasingly important. The school may not be responsible for creating our digital world, but it trains the architects. And right now, students are weighing tough decisions about how they plan to make a living in a world that was clearly constructed the wrong way. “To me it seemed super empowering that a line of code that I wrote could be used by millions of people the next day,” says Matthew Sun, a junior majoring in computer science and public policy, who helped organize the Theranos event. “Now we’re realizing that’s maybe not always a good thing.”

Because membership in the Computer Forum, the organization that runs Stanford’s career fairs, costs $21,000 per year, the fairs tend to attract only the most renowned firms.

“Honestly, I think they’re horrific,” says Vicki Niu, a 2018 Stanford graduate who majored in computer science. She recalls her first career fair being as hectic as a Black Friday sale, with the put-on exclusivity of a night club. (Students must present their Stanford IDs to enter the tent.) But like other freshmen, she found herself swept up in the pursuit of an internship at a large, prestigious tech firm. “Everybody is trying to get interviews at Google and Facebook and Palantir,” she says. “There’s all this hype around them. Part of my mind-set coming in was that I wanted to learn, but I think there was definitely also this big social pressure and this desire to prove yourself and to prove to people that you’re smart.”

Stanford’s computer science department has long been revered for its graduate programs—Google was famously built as a research project by Ph.D. students Larry Page and Sergey Brin—but the intense interest among undergrads is relatively new. In 2007, the school conferred more bachelor’s degrees in English (92) than computer science (70). The next year, though, Stanford revamped its CS curriculum from a one-size-fits-all education to a more flexible framework that funneled students along specialized tracks such as graphics, human-computer interaction, and artificial intelligence. “We needed to make the major more attractive, to show that computer science isn’t just sitting in a cube all day,” Mehran Sahami, a computer science professor who once worked at Google, said later.

The change in curriculum coincided with an explosion of wealth and perceived self-importance in the Valley. The iPhone opened up the potential for thousands of new businesses built around apps, and when Steve Jobs died, he earned rapturous comparisons to Thomas Edison. Facebook emerged as the fastest-growing internet company of all time, and the Arab Spring made its influence seem benign rather than ominous. As the economy recovered from the recession, investors decided to park their money in startups like Uber and Airbnb that might one day become the next Google or Amazon. A 2013 video by the nonprofit Code.org featured CEOs, Chris Bosh, and will.i.am comparing computer programmers to wizards, superheroes, and rock stars.

Stanford and its students eagerly embraced this cultural shift. John Hennessy, a computer science professor who served as president of the university from 2000 to 2016, sat on Google’s board of directors and is now the executive chairman of Google parent company Alphabet. LinkedIn founder and Stanford alum Reid Hoffman introduced a new computer science course called Blitzscaling and brought in high-profile entrepreneurs to teach students how to “build massive organizations, user bases, and businesses, and to do so at a dizzyingly rapid pace.” (Elizabeth Holmes was among the speakers.) Mark Zuckerberg became an annual guest in Sahami’s popular introductory computer science class. “It just continued to emphasize how privileged Stanford students are in so many ways, that we have the CEO of Facebook taking time out of his day to come talk to us,” says Vinamrata Singal, a 2016 graduate who had Zuckerberg visit her class freshman year. “It felt really surreal and it did make me excited to want to continue studying computer science.”

In 2013, Stanford began directly investing in students’ companies, much like a venture capital firm. Even without direct Stanford funding, the school’s proximity to wealth helped plenty of big ideas get off the ground. Evan Spiegel, who was a junior at Stanford in 2011 when he started working on Snapchat, connected with his first investor via a Stanford alumni network on Facebook. “Instead of starting a band or trying to make an independent movie or blogging, people would get into code,” says Billy Gallagher, a 2014 graduate who was the editor-in-chief of the school newspaper. “It was a similar idea to, ‘Here’s our band’s vinyl or our band’s tape. Come see us play.’”

But it’s not just that coding was a creative outlet, as is often depicted in tech origin stories. Working at a big Silicon Valley company also became a path to a specific kind of upper-crust success that students at top schools are groomed for. “Why do so many really bright young kids go into consulting and banking?” asks Gallagher. “They’re prestigious so your parents can be proud of you, they pay really well, and they put you on a career path to open up new doors. Now we’re seeing that’s happening a lot with Google and Facebook.”

By the time Niu arrived in 2014, computer science had become the most popular major on campus and 90 percent of undergrads were taking at least one CS course. As a high schooler, her knowledge of Silicon Valley didn’t extend much further than The Internship, a Vince Vaughn–Owen Wilson comedy about working at Google that doubled as a promotional tool for the search giant. She soon came to realize that landing a job at one of the revered tech giants or striking it rich with an app were Stanford’s primary markers of success. Her coursework was largely technical, focusing on the process of coding and not so much on the outcomes. And in the rare instances when Niu heard ethics discussed in class, it was often framed around the concerns of tech’s super-elite, like killer robots destroying humanity in the future. “In my computer science classes and just talking to other people who were interested in technology, it didn’t seem like anybody really cared about social impact,” she says. “Or if they did, they weren’t talking about it.”

In the spring of her freshman year, Niu and two other students hosted a meeting to gauge interest in a new group focused on socially beneficial uses of technology. The computer science department provided funding for red curry and pad thai. Niu was shocked when the food ran out, as more than 100 students showed up for the event. “Everybody had the same experience: ‘I’m a computer science student. I’m doing this because I want to create an impact. I feel like I’m alone.’”

From this meeting sprang the student organization CS + Social Good. It aimed to expose students to professional opportunities that existed outside the tech giants and the hyperaggressive startups that aspired to their stature. In its first year, the group developed new courses about social-impact work, brought in speakers to discuss positive uses of technology, and offered summer fellowships to get students interning at nonprofits instead of Apple or Google. Hundreds of students and faculty engaged with the organization’s programming.

In Niu’s mind, “social good” referred mainly to the positive applications of technology. But stopping bad uses of tech is just as important as promoting good ones. That’s a lesson the entire Valley has been forced to reckon with as its benevolent reputation has unraveled. “Most of our programming had been, ‘Look at these great ways you can use technology to help kids learn math,’” Niu says. “There was this real need to not only talk about that, but to also be like, ‘It’s not just that technology is neutral. It can actually be really harmful.’”

Many students find it difficult to pinpoint a specific transgression that flipped their perception of Silicon Valley, simply because there have been so many.

The torrid pace of bad news has been jarring for students who entered school with optimistic views of tech. Nichelle Hall, a senior majoring in computer science, viewed Google as the ideal landing spot for an aspiring software engineer when she started college. “I associated it so much with success,” she says. “It’s the first thing I thought about when I thought about technology.” But when she was offered an on-site interview for a potential job at the search giant in the fall, she declined. Project Dragonfly, Google’s (reportedly abandoned) effort to bring a censored search engine to China, gave her pause. It wasn’t just that she objected to the idea on principle. She sensed that working for such a large corporation would likely put her personal morals and corporate directives in conflict. “They say don’t do evil and then they do things like that,” she says. “I wasn’t really into the big-company idea for that reason. … You don’t necessarily know what the intentions of your executives are.”

Google has hardly been the most damaged brand during the techlash. (The company says it has not seen a year-over-year decline in Stanford recruits to this point.) Students repeatedly bring up Facebook as a company that’s fallen out of favor. Uber, with its cascade of controversies, now has to “fight to try and get people in,” according to junior Jose Giron. And Palantir, the secretive data-mining company started by Stanford alum Peter Thiel, has also lost traction due to Thiel’s ties to Trump and worries that the company could help the president develop tech to advance his draconian immigration policies. “There’s a growing concern over your personal decision where to work after graduation,” Sun says.

“There’s a lot of personal guilt around pursuing CS. If you do that, people call you a sellout or you might view yourself as a sellout. If you take a high-paying job, people might say, ‘Oh, you’re just going to work for a big tech company. All you care about is yourself.’”

Landing a job at a major tech firm is often as much about prestige as passion, which is one reason the CS major has expanded so dramatically. But a company’s tarnished reputation can transfer to its employees. Students debate whether fewer of their peers are actually taking gigs at Facebook, or whether they’re just less vocal in bragging about it. At lunch at a Burmese restaurant on campus, Hall and Sun summed up the transition succinctly. “No one’s like, ‘I got an internship at Uber!’” Sun says. Hall follows up: “They’re like, ‘I got an internship … at Uber …’”

The concerns are bigger than which companies rise or fall in the estimation of up-and-coming engineers. Stanford and computer science programs across the country may not be adequately equipped to wade through the ethical minefield that is expanding along with tech’s influence. Sahami acknowledges that many computer science classes are designed to teach students how to solve technical problems rather than to think about the real-world issues that a solution might create. Part of the challenge comes from computer science being a young discipline compared to other engineering fields, meaning that practical examples of malpractice are emerging in real time from today’s headlines.

Vik Pattabi, a senior majoring in computer science, originally studied mechanical engineering. In those classes, students are constantly reminded of the 1940 collapse of the Tacoma Narrows Bridge: A modern marvel was destroyed because its highly educated engineers did not foresee all the possible threats to their creation (in that case, the wind). Pattabi’s CS coursework hasn’t yet included a comparable example. “A lot of the second- and third-order effects that we see [in] Silicon Valley have happened in the last two or three years,” Pattabi says. “The department is trying to react as fast as it can, but they don’t have 30 years of case studies to work with.”

Another issue is the longstanding divide on campus between the engineering types—known as “techies”—and the humanities or social sciences majors, known as “fuzzies.” Though the school has focused more on interdisciplinary studies in recent years, there remains a gap in understanding that’s often filled in by stereotype. This sort of divide is a common aspect of college life, but the stakes feel higher when some of the students will one day be programming the algorithms that govern the digital world. “There’s things [said] like, ‘You can’t spell fascist without CS.’ People will tell you things like that,” Hall says. “I think people may feel antagonized.”

The school’s deep ties to the Bay Area’s corporate giants, long a much-touted recruitment tool, suddenly look different in light of the problems that the industry has created. At the January career fair, members of Students for the Liberation of All Peoples (SLAP), an activist group on campus that aims to disrupt Stanford’s “culture of apathy,” handed out flyers that urged students not to work at Amazon and Salesforce because of their commercial ties to ICE and the United States Border Patrol. (Employees at the companies have raised similar concerns.) “REFUSE to be part of the Stanford → racist tech pipeline,” the flyer reads, in part.

Two students in the group said they were asked to leave the career fair by Computer Forum officials. When the students refused to comply, they say they were escorted out by campus police under threat of arrest for disrupting a private event. A Stanford spokesperson confirmed the incident. “The protesting students were disruptive and asked by police to leave,” the spokesperson said in an emailed statement. “The students were given the option to protest outside the event or in White Plaza. They chose to leave.”

For members of SLAP, the exchange reinforced the ways in which Stanford institutionally and culturally cuts itself off from the issues occurring in the real world. “You might hear this idea of the ‘Stanford bubble,’ where Stanford students kind of just stay on campus and they just do what they need to do for their classes and their jobs,” says Kimiko Hirota, a SLAP member and junior majoring in sociology and comparative studies in race and ethnicity, who participated in the career fair protest. She said many of the students she talked to had no idea about the tech firms’ government contracts. “To me the amount of students on campus that are politically engaged and are actively using their Stanford privilege for a greater good is extremely small.”

The computer science major includes a “technology in society” course requirement that can be fulfilled by a number of ethics classes, and teaching students about their ethical responsibilities is a component of the department’s accreditation process. CS + Social Good has expanded its footprint on campus, teaching more classes and organizing more events like the Theranos talk starring the whistleblowers. Yet the flexibility of the CS major cuts both ways. It means that students who care to take a holistic approach to the discipline can combine rigorous training in code with an education in ethics; it also means that it remains all too easy for some students to avoid engaging with the practical ramifications of their work. “You can very much come to Stanford feeling very apathetic about the impact of the technology and leave just that way without any effort,” Hall says. “I don’t feel as though we are forced to encounter the impact.”

On a Wednesday afternoon, students spill into a lobby in front of a standing-room-only auditorium in the School of Education, where Jeremy Weinstein is talking about the promise and perils of using algorithms in criminal justice. Next year Californians will vote on a bill that would replace cash bail with a computerized risk-assessment system that calculates an arrested person’s likelihood of returning for a court appearance. The idea is to give people who can’t afford to make bail another way to get out of jail through a fairer policy. But such algorithms have been found to reinforce racial biases in the criminal justice system, according to a ProPublica investigation. Instead of being a solution to an unfair process, poorly implemented software could create an entirely new form of systemic discrimination. Students were asked to vote on whether they supported cash bail or the algorithm. The class was evenly split. Unlike in most CS classes, Weinstein could not offer students the comfort of a “correct” answer. “We need to deconstruct these algorithms in order to help people see that technology is not just something to be trusted,” he says. “It’s not just something that’s objective and fair because it’s numerical, but it actually reflects a set of choices that people make.”

Though Weinstein is a political science professor, he’s one of three educators leading the new version of the CS department’s flagship ethics course, CS181. Teaching with him are Sahami, the computer science professor, and Rob Reich, a political science professor and philosopher. The trio devised the course structure over a series of coffee-fueled meetings as the tech backlash unfolded during the past year and a half. After discussing algorithmic bias, the class will explore privacy in the age of facial recognition, the social impacts of autonomous technology, and the responsibilities of private platforms in regard to free speech. The coursework is meant to be hands-on. During the current unit, students must build their own risk-assessment algorithm using an actual criminal history data set, then assess it for fairness. “We run it like a talk show,” Weinstein says. “There’s a lot of call-and-response, asking questions, getting people to talk in small groups.”
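The kind of fairness audit the assignment describes can be sketched in a few lines. Everything below is invented for illustration — the scoring rule, the threshold, and the records are hypothetical stand-ins for the actual criminal-history data set the class uses. The sketch flags people as high risk with a toy rule, then compares false positive rates across demographic groups (the disparity ProPublica's analysis highlighted):

```python
# Hypothetical sketch of a risk-assessment fairness audit.
# The scoring rule and records are invented; a real audit would use
# a trained model and an actual data set.

from collections import defaultdict

def risk_score(priors: int, age: int) -> int:
    """Toy risk score (0-10): more priors and younger age raise it."""
    score = min(priors * 2, 6) + (4 if age < 25 else 0)
    return min(score, 10)

def false_positive_rates(records, threshold=5):
    """Per-group false positive rate: the share of people who did NOT
    reoffend but were still flagged as high risk (score >= threshold)."""
    flagged = defaultdict(int)  # non-reoffenders flagged high risk
    total = defaultdict(int)    # all non-reoffenders
    for r in records:
        if not r["reoffended"]:
            total[r["group"]] += 1
            if risk_score(r["priors"], r["age"]) >= threshold:
                flagged[r["group"]] += 1
    return {g: flagged[g] / total[g] for g in total}

# Invented records for two demographic groups.
records = [
    {"group": "A", "priors": 0, "age": 30, "reoffended": False},
    {"group": "A", "priors": 3, "age": 40, "reoffended": False},
    {"group": "B", "priors": 1, "age": 22, "reoffended": False},
    {"group": "B", "priors": 2, "age": 23, "reoffended": False},
    {"group": "B", "priors": 0, "age": 35, "reoffended": True},
]

print(false_positive_rates(records))  # → {'A': 0.5, 'B': 1.0}
```

Here group B's non-reoffenders are flagged twice as often as group A's, purely because the toy rule penalizes youth — the kind of encoded choice Weinstein wants students to surface. Equalizing false positive rates is only one of several fairness criteria, and they can be mathematically incompatible with one another, which is why the class has no "correct" answer.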

While Stanford’s computer science program has had an ethics component for decades, this course marks the first time that experts in other fields are so directly involved in the curriculum. About 300 students have enrolled in the course, including majors in history, philosophy, and biology. It provides an opportunity for the techies and fuzzies to learn from one another, and for professors removed from the Valley’s tech culture to contextualize the industry’s societal impacts. In the course overview materials, the moral reckoning occurring in the tech sector today is compared to the advent of the nuclear bomb.

The course’s popularity is a sign that the gravity of the moment is weighing on many Stanford minds. Antigone Xenopoulos, a junior majoring in symbolic systems (a techie-fuzzie hybrid major that incorporates computer science, linguistics, and philosophy), is a research assistant for CS181. She wasn’t the only student who quoted a line from Spider-Man to me (“with great power comes great responsibility”) when referencing the current landscape. “If they’re going to give students the tools to have such immense influence and capabilities, [Stanford] should also guide those students in developing ethical compasses,” she says.

While the early years of the decade saw prominent tech executives like Holmes and Zuckerberg teaching students how to lifehack their way to success, the new ethics course will bring in guest speakers from WhatsApp, Facebook, and the NSA to answer “hard questions,” Sahami says. “I wouldn’t say industry is influencing Stanford,” he says. “I would say the relationship with industry allows us to have more authentic conversations where we’re really bringing in people who are decision-makers in these areas.”
Some of the more critical voices from within the industry are also taking on more permanent roles at Stanford. Alex Stamos, the former chief security officer at Facebook, taught a “hack lab” for non-CS majors last fall, helping them understand cybersecurity threats. He’s now developing a more advanced computer science course, to be piloted later this year, that explores trust and safety issues in the era of misinformation and widespread online harassment. Stamos led Facebook’s internal investigation into Russian political interference on the platform and clashed with top executives over how much of that information should be made public. He left the company in August to join Stanford, where he hopes to impart lessons from his time battling a digital attack that was waged not through hacking, but through ad purchases, incendiary memes, and politically charged Facebook events. “One of the things we don’t teach computer science students is all of the non-technically advanced abuses of technology that cause real harm,” Stamos says. “I want to expose students to [things like], ‘These are the mistakes that were made before, these are the kinds of problems that existed, and these are the company’s reactions to those mistakes.’”

Stamos rejects the idea that ethics is the correct framework to think about addressing tech’s most pressing issues. “The problem here is not that people are making decisions that are straight-up evil,” he says. “The problem is that people are not foreseeing the outcomes of their actions. Part of that is a lack of paranoia. One of our problems in Silicon Valley is we build products to be used the right way. … It’s hard to envision all the misuses unless you understand all the things that have come before.”

While he says that Stanford bears some responsibility for the Valley’s tunnel vision, he praises the school for welcoming tech leaders with recent, relevant experiences to help students prepare for emerging threats. “When I was going to school, computers were important, but we weren’t talking about building companies that might change history,” Stamos says. “The students who come to me are really interested in the impact of what they do on society.”

Stamos regularly fields questions from students about whether to work at Facebook or Google. He tells them that they should, not in spite of the companies’ mounting issues, but because of them. “If you actually care about making communication technologies compatible with democracy, then the place to be is at one of the companies that actually has the problems,” he says. “Not working at the big places that could actually solve it does not make things better.”

The tech giants continue to consolidate power even as they face withering criticism. Facebook’s user base growth accelerated last quarter despite its scandals. Uber will go public this year at a valuation as high as $120 billion. Apple, Amazon, and Google are all planning to open large new offices around the country in the near future. And for all the optimistic talk of working at ethically minded startups among students, creation of nascent businesses is at roughly a 40-year low in the United States. Small firms that enter the terrain of the Frightful Five are typically acquired or destroyed.

It is hard to find a Stanford computer science student, even among the ethically minded set CS + Social Good has helped cultivate, who will publicly proclaim that they’ll never work for one of these dominant companies, as all of them offer opportunities for high pay, engaging individual work, and comforting job security. International students have to worry about securing work visas however they can; students on financial aid may need to make enough money to support other family members. And for many others, it’s not clear that anything that’s happened in the Valley is truly beyond the pale. In that sense, the engineers are just like us, aghast at the headlines but still clicking away inside a system that’s come to feel inescapable. “These events feel too big for most students to take into account,” says Jason He, a master’s student in electrical engineering. “At the end of the day, I think for a lot of students who have been paying a lot of money for their education, if the six-figure salary is offered, it’s pretty hard for students to turn down.”

There is still an opportunity, the thinking at Stanford goes, for every company to do good. Nichelle Hall, the senior who declined the Google interview, landed a job working on Medium’s trust and safety team. But she recognizes that she may have set her qualms aside if Google had been her only employment option. “Some of the feedback that CS + Social Good gets is, ‘Oh, the members end up working for Facebook, they end up working for Google,’” says Hall, who’s been involved with the organization since 2017. “People who care about this intersection of social impact and computer science will go to these companies and do a better job than if they weren’t interested in this stuff.”

Impact is the word that I heard more than any other while on campus. It’s how students framed their decision to go to Stanford, to pursue a career in computer science, to do good in the world after graduating. It’s a word that Hoffman used to describe his Blitzscaling class, and one Holmes used to explain to students why she dropped out of school. “I had the tools that I needed to be able to go out and begin making this impact,” she said. It’s the currency of Silicon Valley, one that people spend for good and for ill.

The ability to create impact with a few lines of code has long been what separated software engineers from the rest of us, and turned the Valley into a self-proclaimed utopia of young rebels using technology to save the world from its older, antiquated self. But that’s not the image anymore. Now aspiring engineers draw a comparison between their chosen profession and investment banking. The finance industry wrecked the world a decade ago because of its misunderstanding of complex, automated systems that spun out of control—and its confidence that someone else would ultimately pay the price if things went wrong. You see this confidence in Zuckerberg’s incredulous response when anyone suggests that he resign, and in Google CEO Sundar Pichai’s initial refusal to testify before Congress. And you can see it at Stanford, where the endowment has never been higher, the fundraising has never been easier, and the career fair is still filled with slogans vowing to make the world a better place.

Perhaps this entire strip of land known as the Valley will fully calcify into a West Coast Wall Street, where the people with all the insider knowledge profit off the muppets who can’t stop using their products. If today’s young tech skeptics turn into cynics when they enter the working world, such a future is easy to imagine. But—and this is the hopeful, intoxicating, dangerous thing about technology—there are always bright minds out there who think they can build a solution that just might fix this mess we’ve made. And people, especially young people, will always be enthralled by the romance of a new idea. “We’re creating things that haven’t necessarily existed before, and so we won’t be able to anticipate all the challenges that we have,” says Hall, who graduates in just four months. “But once we do, it’s important that we can reconcile them with grace and humility. I’m sure it will be a hard job, but it’s important that it’s hard. I’m up for the challenge.”