Why we’re coming apart, and how we might come together again.
New fractures are forming within the American evangelical movement, fractures that do not run along the usual regional, denominational, ethnic, or political lines. Couples, families, friends, and congregations once united in their commitment to Christ are now dividing over seemingly irreconcilable views of the world. In fact, they are not merely dividing but becoming incomprehensible to one another.
Recently, a group of my college friends, all raised and nurtured in healthy evangelical families and congregations, reconnected online in search of understanding. One person mourned that she could no longer understand her parents or how their views of the world had so suddenly and painfully shifted. Another described friends who were demographically identical, who had once stood beside him on practically every issue, but who now promoted ideas he found shocking. Still another said her church was breaking up, driven apart by mutual suspicion and misunderstanding.
“These were my people,” one said, “but now I don’t know who they are, or maybe I don’t know who I am.”
What do you do when you feel you’re losing the people you love to a false reality? What do you do with the humbling truth that they have precisely the same fear about you?
The quandary is not unique to evangelicals. But fellow believers who once stood shoulder to shoulder now find that tectonic shifts have thrust them apart, their continents are separating, and they cannot find a bridge back to common ground. How could our views of reality diverge so dramatically—and is there anything we can do to draw together again?
The plausibility curve and the information curve
Among the most persistent interests of my academic career was the question of how people form beliefs. Not how they should form beliefs, in some idealized vision of perfected rationality, but how they actually form beliefs as embodied creatures embedded in communities and cultures. I want to introduce a simple conceptual tool, influenced in part by the work of Peter Berger, that may help us understand what is happening.
Imagine a horizontal plane that curves downward into a bowl, rises back again, and returns to a horizontal plane. The curve, from one end of the bowl to the other, represents the range of claims an individual finds believable. Let’s call it a plausibility curve. Claims that fall in the center of the curve will be perceived as most plausible; they require little evidence or argumentation before an individual will consent to believe. Claims falling near the edges are increasingly implausible as they deviate from the center, requiring progressively more persuasion. Claims falling entirely outside the plausibility curve are beyond the range of what a person might believe at a given point in time, and no amount of evidence or logic will be sufficient.
What determines the plausibility of a given claim is how well it conforms to what an individual experiences, already believes, and wants to believe. The full range of a person’s beliefs is rather like a photomosaic: Thousands of experiences and perceptions of reality are joined together, and out of those thousands emerge larger patterns and impressions, higher-order beliefs about the nature of reality, the grand narratives of history, the nature of right and wrong, good and evil, and so forth. Attempts to change a single belief can feel fruitless when it is embedded in countless others. Where does one begin to address a thousand interlocking disagreements at once? Evidence to the contrary is almost irrelevant when a claim “fits” with an entire network of reinforcing beliefs. This is part of what gives a plausibility curve its enduring strength and resistance to change.
Desire plays a particularly complicated role in the plausibility curve. We may desire not to believe a claim because it would separate us from those we love, confront us with painful truths, require a change in our behavior, impose a social cost, and so on. We may desire to believe a certain claim because it would be fashionable, confirm our prejudices, set us apart from those around us, anger our parents, or for countless other reasons. We will require more persuasion for claims we do not want to believe, and less for those we do.
Like the Overton window in political theory, a plausibility curve can expand, contract, and shift. Friends or family members whose plausibility curves were once identical may find that they diverge over the course of time. Claims one person finds immediately plausible are almost inconceivable to the other. But how does this happen? That’s where the information curve comes in.
Imagine a mirror-image bowl above the plausibility curve. This is the information curve, and it reflects the individual’s external sources of information about the world—such as communities, authorities, and media. Those sources in the center of the information curve are deemed most trustworthy; claims that come from these sources are accepted almost without question. Sources of information on the outer ends of the bowl are considered less trustworthy, so their claims will be held up to greater scrutiny. Sources outside the curve entirely are, at least for this individual, so lacking in credibility that their claims are dismissed out of hand.
The center of the information curve will generally align with the center of the plausibility curve. The relationship is mutually reinforcing. Sources are considered more trustworthy when they deliver claims we find plausible, and claims are considered more plausible when they come from sources we trust. A source of information that consistently delivers claims in the center of the plausibility curve will come to be believed implicitly.
Change can begin on the level of the plausibility curve. Perhaps an individual joins a religious community and finds it is more loving and reasonable than she had expected. She will no longer find it plausible when a source claims that all religious communities are irrational and prejudiced, and this will gradually shift her information curve in favor of more reliable sources. Or another person experiences the loss of a child, and no longer desires to believe that death is the end of consciousness. He is more open to other claims, expands his sources of information, and slowly his beliefs shift.
Change can also begin on the level of the information curve. An individual raised in a certain community with well-established authorities, such as her parents and pastors, goes to college and is introduced to new communities and authorities. If she judges them to be trustworthy sources of information, this new information curve will likely shift her plausibility curve. As her set of beliefs changes, she may even reach a point where the sources that once supplied most of her beliefs are no longer considered trustworthy at all. Or imagine a person who has lived his entire life consuming far-left media sources. He begins to listen to conservative media sources and finds their claims resonate with his experience—only slightly at first, but in increasing measure. Gradually he consumes more and more conservative media, expanding or shifting his information curve, and this in turn expands or shifts his plausibility curve. He may reach a point where his broader perceptions of the world—the deeper forces at work in history, the optimal ways of organizing societies and economies, the forces for good and evil in the world—have been wholly overturned.
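The mutually reinforcing loop between trust and plausibility described above can be made concrete with a toy simulation. The sketch below is purely illustrative and not part of any formal model from this essay: the bell-shaped plausibility function, the learning rate, and the update rules are all assumptions chosen for simplicity.

```python
import math

def plausibility(claim, center, width=1.0):
    """How believable a claim feels: highest at the center of the belief
    curve, falling off toward the edges."""
    return math.exp(-((claim - center) ** 2) / (2 * width ** 2))

def simulate(source_position, steps=200, lr=0.05):
    """One person repeatedly hears claims from a single source.
    Trust tracks perceived plausibility (sources delivering plausible
    claims seem reliable), and the belief center drifts toward claims
    in proportion to trust (trusted sources make claims seem plausible)."""
    center, trust = 0.0, 0.5
    for _ in range(steps):
        claim = source_position
        p = plausibility(claim, center)
        trust += lr * (p - trust)                # trust follows plausibility
        center += lr * trust * (claim - center)  # beliefs drift toward trusted claims
    return center, trust

# Two people with identical starting beliefs, consuming different sources:
left, _ = simulate(source_position=-2.0)
right, _ = simulate(source_position=+2.0)
print(f"left-media reader ends near {left:.2f}; right-media reader near {right:.2f}")
```

Run with these (arbitrary) parameters, the two belief centers end up near the positions of their respective sources: identical starting beliefs, divergent informational worlds, each fully trusting the source it drifted toward.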
Consider the 9/11 Truth movement and the QAnon movement. Most Americans will find the notion that the Bush administration orchestrated a massive terrorist attack in order to invade the Middle East and enrich their friends in the oil industry, or that global liberal elites would construct an international child trafficking operation for the purpose of pedophilia and cannibalism, beyond the bounds of their plausibility curve. Others, however, will find that one conspiracy or the other resonates with their plausibility curve, or their information curve may shift over time in such a way that brings their plausibility curve with it. Claims that once seemed impossible to contemplate came to appear conceivable, then plausible, then reasonable, and finally self-evident. Of course conservatives would sacrifice thousands of innocent lives to justify a “war for oil” because conservatives are greedy and that’s what conservatives do. Of course liberals would sacrifice thousands of children in order to advance their own health and power because liberals are perverse and that’s what liberals do.
As a final definitional note, let’s call the whole structure, the plausibility curve and the information curve, an informational world. An informational world encompasses how an individual or a community of individuals receives and processes information. Differing informational worlds will have differing facts and sources. Our challenge today is that we occupy multiple informational worlds with little in common and much hostility between them.
What does all of this have to do with the evangelical movement? A great deal.
The evangelical crises
The American evangelical movement has never consisted of a single community. Depending on the criteria, estimates generally put the number of American evangelicals at 80-100 million. Even if we split the difference at 90 million, this would make the American evangelical population larger than every European nation save Russia. It is also diverse, reaching across all regions, races, and socioeconomic levels. What held the movement together historically was not only a shared set of moral and theological commitments, but a broadly similar view of the world and common sources of information. Their plausibility curves and information curves largely overlapped. There were some matters on which they differed, but the ground they shared in the middle served as a basis of mutual understanding and fellowship.
This sense of commonality grew increasingly strained as groups not formerly identified as evangelical came to be lumped together, defining the category “evangelical” less in theological terms and more in social, cultural, and political terms. This broader evangelical movement today is dividing into separate communities that still hold some moral and theological commitments in common but differ dramatically on their sources of information and their broader view of the world. Their informational worlds have little overlap. They can only discuss a narrow range of topics if they do not want to fall into painful and exasperated disagreement.
One group within American evangelicalism believes our religious liberties have never been more firmly established; another that they have never been at greater risk. One group believes racism is still systemic in American society; another that the “systemic racism” push is a progressive program to redistribute wealth and power to angry radicals. One is more concerned with the insurrection at the Capitol; another with the riots that followed the killing of George Floyd. One believes the Trump presidency was generationally damaging to Christian witness; another that it was enormously beneficial. One believes the former president attempted a coup; another that the Democrats stole the election. One believes masks and vaccines are marks of Christian love; another that the rejection of the same is a mark of Christian courage.
There are countless groups in between, of course, but these examples illustrate the tension: We occupy the same reality but starkly different worlds. There is a real question whether these worlds can (or should) draw back together again. This is a critical moment for our movement.
What, then, can be done? The model itself suggests where to start. If we move the information curves toward a common center, the plausibility curve will follow. Information comes through three sources: media, authorities, and community. One reason for our disunity is that these three sources are in crisis in American evangelicalism. I will only briefly outline these points.
First, the crisis of media is acute. Even as media today has grown more powerful and pervasive, it has also grown more fragmented and polarizing. The dynamics of modern media reward content that is immediate, angry, and hyperbolic, turning the media into a marketplace for scorn sellers and hate merchants. Evangelicals find themselves torn between social media platforms and legacy media sources that openly advocate progressive causes and cancel conservative voices and far-right sources that traffic in paranoia and misinformation. In short, the digital media landscape has evolved to profit from our vices more than our virtues, and it has become incredibly effective at dividing audiences into hermetic media spheres that deliver only the information and commentary that confirms the audiences’ anxieties and antipathies.
This presents an extraordinary challenge for Christian discipleship. Media consumption has been climbing for years, and it soared amid the pandemic. Members of our congregations may spend a few hours a week in the Word of God (which should always be the Christian’s most important source of information and authority) but 40 hours or more mainlining the animosities of the day. Once the information curve begins a leftward or rightward drift, the algorithms of digital media and the manipulations of politicians and profiteers accelerate the momentum. Soon Christian communities that once shared a broader view of the world find they only agree on the bare essentials of faith. It will be difficult to address other parts of the information curve until we have brought some semblance of sanity into our media consumption. The longer we live in separate media worlds, the deeper and broader our divisions will become. The longer we give ourselves to media gluttony, skimping on the deeper nourishment that cultivates Christ within us, the less we will have in common.
The media crisis reaches across the whole of society, but the evangelical movement also faces an authority crisis of its own making. A generation of evangelical leaders who commanded immense respect, at least across the broad middle of American evangelicalism, have passed away. The current generation of evangelical institutional leaders, though markedly more diverse than their forebears, struggle to rise above the rampant ideological othering of our time. Moreover, the movement has seen countless leaders fall from grace in spectacularly destructive ways. At the same time, we have seen the rise of the celebrity pastor. It was once the case that a long obedience in the same direction, a life of humble study and service, earned a person a modicum of spiritual authority and a modest living. Today, a dashing profile and a talent for self-promotion can earn wealth and stardom in the Christian celebrity marketplace.
The consequence is disillusionment and division. While younger generations head for the exits, those who remain in our churches become further entrenched in their own ideological camps. If it is ever to be true again that broadly respected authorities form an important part of our shared information curve, it will be because we turn from a culture of celebrity to a culture of sanctification, where leadership is less about building a platform and more about carrying the cross of Christ. It will be because we remember the words of Jesus that “whoever wants to become great among you must be your servant” (Matt. 20:26). It will also be because we relearn how to listen to men and women of wisdom, leaders as well as neighbors, without crucifying them over political differences.
The third way to shift the information curve is to address our crisis of community. Community is essential to Christian life. It deepens our knowledge of the Word, forges our shared identity in Christ, cultivates Christian character, and disciples our young. Yet the pressures, temptations, and glowing distractions of contemporary life have strained the ties that bind us, replacing the warmth and depth of incarnate community with a cold digital imitation. The pandemic has only deepened our isolation, causing many to look outside their churches to political tribes or conspiracist communities for a sense of purpose and belonging. Further, the hyper-politicization of the American evangelical movement has led to a political sorting. Congregants who do not like their pastors’ stances depart for other churches whose politics are the same as theirs. But congregations composed of individuals whose informational worlds are nearly identical will tend toward rigidity and increasing radicalism—what Cass Sunstein calls the Law of Group Polarization.
Rather than withdrawing into communities of common loathing, the church should be offering a community of common love, a sanctuary from the fragmentation and polarization, from the loneliness and isolation of the present moment. The church should model what it means to care for one another in spite of our differences on social and political matters and affirm the incomparably deeper rootedness of our identity in Christ.
Michael O. Emerson, a sociologist and scholar of American religion at the University of Illinois at Chicago, recently said he has studied religious congregations for 30 years but has “never seen” such an extraordinary level of conflict. “What is different now?” he asked. “The conflict is over entire worldviews—politics, race, how we are to be in the world, and even what religion and faith are for.” What I have offered above is a model for understanding how we have come to such a pass, and a mere suggestion of how we might begin the generational project before us.
We are not without hope. Lies ring hollow at the end of the day. Hatred is a poor imitation of purpose, celebrity a poor replacement for wisdom, and political tribes a poor comparison to authentic Christian community. We are a people defined by the resurrection of the Son of God. We are called to be redeemers and reconcilers.
So perhaps we can begin to build bridges across our informational worlds. Perhaps we can nurture a healthy media ecosystem that offers a balanced view of the world and a generous conversation about it. Perhaps we can restore a culture of leadership defined by humility over celebrity and integrity over influence. Perhaps we can invite those who have found counterfeit community in their political tribes to rediscover a richer and more robust community in Christ. All of these things will be essential to rebuilding a shared understanding of the world God created and what it means to follow Christ within it.
Since the invention of writing, human innovation has transformed how we formulate new ideas, organize our societies, and communicate with one another. But in an age of rapid-fire social media and nonstop algorithm-generated outrage, technology is no longer helping to expand or enrich the public sphere.
BERKELEY – Since 1900, human technology and organization have been evolving at a blistering pace. The degree of change that occurs in just one year would have taken 50 years or more before 1500. War and politics used to be the meat of human history, with advances in technology and organization unfolding very slowly – if at all – in the background. Now, the inverse is true.
The impact of technological innovation on the marketplace of ideas has brought about some of the most consequential changes. The shift from the age of handwritten and hand-copied manuscripts to that of the Gutenberg press ushered in the Copernican Revolution (along with almost two centuries of genocidal religious war). Pamphlets and coffee houses broadened the public sphere and positioned public opinion as a powerful constraint on political rulers’ behavior.
As John Adams, the second president of the United States, later pointed out, the “[American] Revolution was effected before the war commenced … in the minds and hearts of the people.” The decisive intellectual battle, we now know, was won by the English-born pamphleteer Thomas Paine’s pamphlet Common Sense. Still, even during the revolutionary period, the pace of change was far slower than it is today. In the space of just two human lifetimes, we have gone from mass-market newspapers and press lords to radio and network television, and then on to the Internet and today’s social media-driven public sphere. And most of us will live long enough to witness whatever comes next.
There is now a near-consensus – at least among those who are not completely steeped in social-media propaganda – that the current public sphere does not serve us well. “Social media is broken,” the American author Annalee Newitz wrote in a recent commentary for the New York Times. “It has poisoned the way we communicate with each other and undermined the democratic process. Many of us just want to get away from it, but we can’t imagine a world without it.”
Western societies have experienced a similar sentiment before. In the 1930s, my great-uncles listened to their elders complain about how radio had allowed demagogues like Adolf Hitler, Charles Coughlin, and Franklin D. Roosevelt (that “communist”) to short-circuit the normal processes of public discourse. No longer were public debates kept sober and rational by traditional gatekeepers. In the new age of broadcast, unapproved memes could spread far and wide without interference. Politicians and ideologues who may not have had the public interest in mind could get right into people’s ears and hijack their brains.
Nowadays, the problem is not a single demagogue, but a public sphere beset by swarms of “influencers,” propagandists, and bots, all semi-coordinated by the dynamics of the medium itself. Once again, ideas of dubious quality and provenance are shaping people’s thoughts without having been subjected to adequate evaluation and analysis.
We should have seen this coming. A generation ago, when the “net” was limited to universities and research institutes, there was an annual “September” phenomenon. Each year, new arrivals at these institutions would be given an email account and/or user profile, whereupon they would rapidly find their online communities. They would begin to talk, and someone, inevitably, would get annoyed. For the next month, whatever informational or discursive use the net might have had would be sidelined by continuous vitriolic exchanges.
Then things would calm down. People would remember to put on their asbestos underwear before logging on; they learned not to take the newbies so seriously. Trolls would find themselves banned from the forums they loved to disrupt. And, in any case, most who experimented with the troll lifestyle realized that it had little to recommend it. For the next 11 months, the net would serve its purpose, significantly extending each user’s cultural, conversational, and intellectual range, and adding to the collective stock of human intelligence.
But as the Internet began to spread to each household and then to each smartphone, fears about the danger of an “eternal September” were confirmed. There is more money to be made by stoking outrage than by providing sound information and encouraging the social-learning process that once taught net newbies to calm down. And yet, today’s Internet does offer valuable information, so much so that few of us could imagine doing without it. To access that information, we have tacitly agreed to allow the architects at Facebook, Twitter, Google (especially YouTube), and elsewhere to shape the public sphere with their outrage- and clickbait-generating algorithms.
Meanwhile, others have found that there is a great deal of money and power to be gained by shaping public opinion online. If you want to get your views out there, it is easier to piggyback on the outrage machine than to develop a comprehensive rational argument – especially when those views are self-serving and deleterious to the public good.
For her part, Newitz ends her recent commentary on a hopeful note. “Public life has been irrevocably changed by social media; now it’s time for something else,” she writes. “We need to stop handing off responsibility for maintaining public space to corporations and algorithms – and give it back to human beings. We may need to slow down, but we’ve created democracies out of chaos before. We can do it again.”
Such hope may be necessary for journalists these days. Unfortunately, a rational evaluation of our situation suggests that it is unjustified. The eternal September of our discontent has arrived.
Mark’s influence is staggering, far beyond that of anyone else in the private sector or in government. He controls three core communications platforms — Facebook, Instagram and WhatsApp — that billions of people use every day. Facebook’s board works more like an advisory committee than an overseer, because Mark controls around 60 percent of voting shares. Mark alone can decide how to configure Facebook’s algorithms to determine what people see in their News Feeds, what privacy settings they can use and even which messages get delivered. He sets the rules for how to distinguish violent and incendiary speech from the merely offensive, and he can choose to shut down a competitor by acquiring, blocking or copying it.
Mark is a good, kind person. But I’m angry that his focus on growth led him to sacrifice security and civility for clicks. I’m disappointed in myself and the early Facebook team for not thinking more about how the News Feed algorithm could change our culture, influence elections and empower nationalist leaders. And I’m worried that Mark has surrounded himself with a team that reinforces his beliefs instead of challenging them.
The government must hold Mark accountable. For too long, lawmakers have marveled at Facebook’s explosive growth and overlooked their responsibility to ensure that Americans are protected and markets are competitive. Any day now, the Federal Trade Commission is expected to impose a $5 billion fine on the company, but that is not enough; nor is Facebook’s offer to appoint some kind of privacy czar. After Mark’s congressional testimony last year, there should have been calls for him to truly reckon with his mistakes. Instead the legislators who questioned him were derided as too old and out of touch to understand how tech works. That’s the impression Mark wanted Americans to have, because it means little will change.
Facebook’s dominance is not an accident of history. The company’s strategy was to beat every competitor in plain view, and regulators and the government tacitly — and at times explicitly — approved. In one of the government’s few attempts to rein in the company, the F.T.C. in 2011 issued a consent decree that Facebook not share any private information beyond what users already agreed to. Facebook largely ignored the decree. Last month, the day after the company predicted in an earnings call that it would need to pay up to $5 billion as a penalty for its negligence — a slap on the wrist — Facebook’s shares surged 7 percent, adding $30 billion to its value, six times the size of the fine.
The F.T.C.’s biggest mistake was to allow Facebook to acquire Instagram and WhatsApp. In 2012, the newer platforms were nipping at Facebook’s heels because they had been built for the smartphone, where Facebook was still struggling to gain traction. Mark responded by buying them, and the F.T.C. approved.
Neither Instagram nor WhatsApp had any meaningful revenue, but both were incredibly popular. The Instagram acquisition guaranteed Facebook would preserve its dominance in photo networking, and WhatsApp gave it a new entry into mobile real-time messaging. Now, the founders of Instagram and WhatsApp have left the company after clashing with Mark over his management of their platforms. But their former properties remain Facebook’s, driving much of its recent growth.
When it hasn’t acquired its way to dominance, Facebook has used its monopoly position to shut out competing companies or has copied their technology.
The News Feed algorithm reportedly prioritized videos created through Facebook over videos from competitors, like YouTube and Vimeo. In 2012, Twitter introduced a video network called Vine that featured six-second videos. That same day, Facebook blocked Vine from hosting a tool that let its users search for their Facebook friends while on the new network. The decision hobbled Vine, which shut down four years later.
Snapchat posed a different threat. Snapchat’s Stories and impermanent messaging options made it an attractive alternative to Facebook and Instagram. And unlike Vine, Snapchat wasn’t interfacing with the Facebook ecosystem; there was no obvious way to handicap the company or shut it out. So Facebook simply copied it.
Facebook’s version of Snapchat’s stories and disappearing messages proved wildly successful, at Snapchat’s expense. At an all-hands meeting in 2016, Mark told Facebook employees not to let their pride get in the way of giving users what they want. According to Wired magazine, “Zuckerberg’s message became an informal slogan at Facebook: ‘Don’t be too proud to copy.’”
(There is little regulators can do about this tactic: Snapchat patented its “ephemeral message galleries,” but patent law does not extend to the abstract concept itself.)
As a result of all this, would-be competitors can’t raise the money to take on Facebook. Investors realize that if a company gets traction, Facebook will copy its innovations, shut it down or acquire it for a relatively modest sum. So despite an extended economic expansion, increasing interest in high-tech start-ups, an explosion of venture capital and growing public distaste for Facebook, no major social networking company has been founded since the fall of 2011.
As markets become more concentrated, the number of new start-up businesses declines. This holds true in other high-tech areas dominated by single companies, like search (controlled by Google) and e-commerce (taken over by Amazon). Meanwhile, there has been plenty of innovation in areas where there is no monopolistic domination, such as in workplace productivity (Slack, Trello, Asana), urban transportation (Lyft, Uber, Lime, Bird) and cryptocurrency exchanges (Ripple, Coinbase, Circle).
I don’t blame Mark for his quest for domination. He has demonstrated nothing more nefarious than the virtuous hustle of a talented entrepreneur. Yet he has created a leviathan that crowds out entrepreneurship and restricts consumer choice. It’s on our government to ensure that we never lose the magic of the invisible hand. How did we allow this to happen?
Since the 1970s, courts have become increasingly hesitant to break up companies or block mergers unless consumers are paying inflated prices that would be lower in a competitive market. But a narrow reliance on whether or not consumers have experienced price gouging fails to take into account the full cost of market domination. It doesn’t recognize that we also want markets to be competitive to encourage innovation and to hold power in check. And it is out of step with the history of antitrust law. Two of the last major antitrust suits, against AT&T and IBM in the 1980s, were grounded in the argument that they had used their size to stifle innovation and crush competition.
As the Columbia law professor Tim Wu writes, “It is a disservice to the laws and their intent to retain such a laserlike focus on price effects as the measure of all that antitrust was meant to do.”
Facebook is the perfect case on which to reverse course, precisely because Facebook makes its money from targeted advertising, meaning users do not pay to use the service. But it is not actually free, and it certainly isn’t harmless.
Facebook’s business model is built on capturing as much of our attention as possible to encourage people to create and share more information about who they are and who they want to be. We pay for Facebook with our data and our attention, and by either measure it doesn’t come cheap.
I was on the original News Feed team (my name is on the patent), and that product now gets billions of hours of attention and pulls in unknowable amounts of data each year. The average Facebook user spends an hour a day on the platform; Instagram users spend 53 minutes a day scrolling through pictures and videos. They create immense amounts of data — not just likes and dislikes, but how many seconds they watch a particular video — that Facebook uses to refine its targeted advertising. Facebook also collects data from partner companies and apps, without most users knowing about it, according to testing by The Wall Street Journal.
Some days, lying on the floor next to my 1-year-old son as he plays with his dinosaurs, I catch myself scrolling through Instagram, waiting to see if the next image will be more beautiful than the last. What am I doing? I know it’s not good for me, or for my son, and yet I do it anyway.
The choice is mine, but it doesn’t feel like a choice. Facebook seeps into every corner of our lives to capture as much of our attention and data as possible and, without any alternative, we make the trade.
The vibrant marketplace that once drove Facebook and other social media companies to compete to come up with better products has virtually disappeared. This means there’s less chance of start-ups developing healthier, less exploitative social media platforms. It also means less accountability on issues like privacy.
Just last month, Facebook seemingly tried to bury news that it had stored tens of millions of user passwords in plain text format, which thousands of Facebook employees could see. Competition alone wouldn’t necessarily spur privacy protection — regulation is required to ensure accountability — but Facebook’s lock on the market guarantees that users can’t protest by moving to alternative platforms.
The most problematic aspect of Facebook’s power is Mark’s unilateral control over speech. There is no precedent for his ability to monitor, organize and even censor the conversations of two billion people.
Facebook engineers write algorithms that select which users’ comments or experiences end up displayed in the News Feeds of friends and family. These rules are proprietary and so complex that many Facebook employees themselves don’t understand them.
In 2014, the rules favored curiosity-inducing “clickbait” headlines. In 2016, they enabled the spread of fringe political views and fake news, which made it easier for Russian actors to manipulate the American electorate. In January 2018, Mark announced that the algorithms would favor non-news content shared by friends and news from “trustworthy” sources, which his engineers interpreted — to the confusion of many — as a boost for anything in the category of “politics, crime, tragedy.”
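The feed-ranking logic described above can be caricatured in a few lines of Python. This is a hypothetical sketch: the features and weights are invented for illustration and do not reflect Facebook’s actual, proprietary system.

```python
# Hypothetical engagement-weighted feed ranking. The feature names and
# weights below are invented; Facebook's real system is proprietary.

def score(post):
    """Rank a post by proxies for engagement: reactions, comments,
    and watch time all push it higher in the feed."""
    return (1.0 * post["likes"]
            + 4.0 * post["comments"]        # comments signal stronger engagement
            + 0.1 * post["watch_seconds"])  # even passive attention counts

def rank_feed(posts):
    """Order posts so the highest-scoring ones appear first."""
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "wedding_photo", "likes": 120, "comments": 8,  "watch_seconds": 0},
    {"id": "outrage_clip",  "likes": 40,  "comments": 90, "watch_seconds": 600},
]
print([p["id"] for p in rank_feed(posts)])
```

Under any scoring rule of this shape, a post that provokes a long comment thread and holds eyeballs will outrank a quietly liked photo, which is one mechanism by which clickbait and incendiary content climb a feed.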
Facebook has responded to many of the criticisms of how it manages speech by hiring thousands of contractors to enforce the rules that Mark and senior executives develop. After a few weeks of training, these contractors decide which videos count as hate speech or free speech, which images are erotic and which are simply artistic, and which live streams are too violent to be broadcast. (The Verge reported that some of these moderators, working through a vendor in Arizona, were paid $28,800 a year, got limited breaks and faced significant mental health risks.)
As if Facebook’s opaque algorithms weren’t enough, last year we learned that Facebook executives had permanently deleted their own messages from the platform, erasing them from the inboxes of recipients; the justification was corporate security concerns. When I look at my years of Facebook messages with Mark now, it’s just a long stream of my own light-blue comments, clearly written in response to words he had once sent me. (Facebook now offers a limited version of this feature to all users.)
The most extreme example of Facebook manipulating speech happened in Myanmar in late 2017. Mark said in a Vox interview that he personally made the decision to delete the private messages of Facebook users who were encouraging genocide there. “I remember, one Saturday morning, I got a phone call,” he said, “and we detected that people were trying to spread sensational messages through — it was Facebook Messenger in this case — to each side of the conflict, basically telling the Muslims, ‘Hey, there’s about to be an uprising of the Buddhists, so make sure that you are armed and go to this place.’ And then the same thing on the other side.”
Mark made a call: “We stop those messages from going through.” Most people would agree with his decision, but it’s deeply troubling that he made it with no accountability to any independent authority or government. Facebook could, in theory, delete en masse the messages of Americans, too, if its leadership decided it didn’t like them.
Mark used to insist that Facebook was just a “social utility,” a neutral platform for people to communicate what they wished. Now he recognizes that Facebook is both a platform and a publisher and that it is inevitably making decisions about values. The company’s own lawyers have argued in court that Facebook is a publisher and thus entitled to First Amendment protection.
No one at Facebook headquarters is choosing what single news story everyone in America wakes up to, of course. But they do decide whether it will be an article from a reputable outlet or a clip from “The Daily Show,” a photo from a friend’s wedding or an incendiary call to kill others.
Mark knows that this is too much power and is pursuing a twofold strategy to mitigate it.
- First, he is pivoting Facebook’s focus toward encouraging more private, encrypted messaging that Facebook’s employees can’t see, let alone control.
- Second, he is hoping for friendly oversight from regulators and other industry executives.
Late last year, he proposed an independent commission to handle difficult content moderation decisions by social media platforms. It would afford an independent check, Mark argued, on Facebook’s decisions, and users could appeal to it if they disagreed. But its decisions would not have the force of law, since companies would voluntarily participate.
In an op-ed essay in The Washington Post in March, he wrote, “Lawmakers often tell me we have too much power over speech, and I agree.” And he went even further than before, calling for more government regulation — not just on speech, but also on privacy and interoperability, the ability of consumers to seamlessly leave one network and transfer their profiles, friend connections, photos and other data to another.
I don’t think these proposals were made in bad faith. But I do think they’re an attempt to head off the argument that regulators need to go further and break up the company. Facebook isn’t afraid of a few more rules. It’s afraid of an antitrust case and of the kind of accountability that real government oversight would bring.
We don’t expect calcified rules or voluntary commissions to work to regulate drug companies, health care companies, car manufacturers or credit card providers. Agencies oversee these industries to ensure that the private market works for the public good. In these cases, we all understand that government isn’t an external force meddling in an organic market; it’s what makes a dynamic and fair market possible in the first place. This should be just as true for social networking as it is for air travel or pharmaceuticals.
In the summer of 2006, Yahoo offered us $1 billion for Facebook. I desperately wanted Mark to say yes. Even my small slice of the company would have made me a millionaire several times over. For a 22-year-old scholarship kid from small-town North Carolina, that kind of money was unimaginable. I wasn’t alone — just about every other person at the company wanted the same.
It was taboo to talk about it openly, but I finally asked Mark when we had a moment alone, “How are you feeling about Yahoo?” I got a shrug and a one-line answer: “I just don’t know if I want to work for Terry Semel,” Yahoo’s chief executive.
Outside of a couple of gigs in college, Mark had never had a real boss and seemed entirely uninterested in the prospect. I didn’t like the idea much myself, but I would have traded having a boss for several million dollars any day of the week. Mark’s drive was infinitely stronger. Domination meant domination, and the hustle was just too delicious.
Mark may never have a boss, but he needs to have some check on his power. The American government needs to do two things: break up Facebook’s monopoly and regulate the company to make it more accountable to the American people.
First, Facebook should be separated into multiple companies. The F.T.C., in conjunction with the Justice Department, should enforce antitrust laws by undoing the Instagram and WhatsApp acquisitions and banning future acquisitions for several years. The F.T.C. should have blocked these mergers, but it’s not too late to act. There is precedent for correcting bad decisions — as recently as 2009, Whole Foods settled antitrust complaints by selling off the Wild Oats brand and stores that it had bought a few years earlier.
There is some evidence that we may be headed in this direction. Senator Elizabeth Warren has called for reversing the Facebook mergers, and in February, the F.T.C. announced the creation of a task force to monitor competition among tech companies and review previous mergers.
How would a breakup work? Facebook would have a brief period to spin off the Instagram and WhatsApp businesses, and the three would become distinct companies, most likely publicly traded. Facebook shareholders would initially hold stock in the new companies, although Mark and other executives would probably be required to divest their management shares.
Until recently, WhatsApp and Instagram were administered as independent platforms inside the parent company, so that should make the process easier. But time is of the essence: Facebook is working quickly to integrate the three, which would make it harder for the F.T.C. to split them up.
Some economists are skeptical that breaking up Facebook would spur that much competition, because Facebook, they say, is a “natural” monopoly. Natural monopolies have emerged in areas like water systems and the electrical grid, where the price of entering the business is very high — because you have to lay pipes or electrical lines — but it gets cheaper and cheaper to add each additional customer. In other words, the monopoly arises naturally from the circumstances of the business, rather than a company’s illegal maneuvering. In addition, defenders of natural monopolies often make the case that they benefit consumers because they are able to provide services more cheaply than anyone else.
Facebook is indeed more valuable when there are more people on it: There are more connections for a user to make and more content to be shared. But the cost of entering the social network business is not that high. And unlike with pipes and electricity, there is no good argument that the country benefits from having only one dominant social networking company.
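The network effect at issue here can be made concrete with a little arithmetic. The following is a toy, Metcalfe-style illustration of why value compounds with scale, not a claim about Facebook’s actual economics.

```python
# Toy illustration of network effects: a network's value is often
# approximated by the number of possible pairwise connections among
# its users (a Metcalfe-style measure), which grows roughly with the
# square of the user count.

def connections(n):
    """Possible pairwise connections among n users."""
    return n * (n - 1) // 2

for users in (1_000, 10_000, 100_000):
    print(f"{users:>7} users -> {connections(users):>13,} possible connections")
```

Growing the user base 100-fold multiplies the possible connections by roughly 10,000, which is why an incumbent’s head start compounds even though, unlike laying pipe or stringing electrical lines, the cost of launching a rival service is modest.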
Still others worry that the breakup of Facebook or other American tech companies could be a national security problem. Because advancements in artificial intelligence require immense amounts of data and computing power, only large companies like Facebook, Google and Amazon can afford these investments, they say. If American companies become smaller, the Chinese will outpace us.
While serious, these concerns do not justify inaction. Even after a breakup, Facebook would be a hugely profitable business with billions to invest in new technologies — and a more competitive market would only encourage those investments. If the Chinese did pull ahead, our government could invest in research and development and pursue tactical trade policy, just as it is doing today to hold China’s 5G technology at bay.
The cost of breaking up Facebook would be next to zero for the government, and lots of people stand to gain economically. A ban on short-term acquisitions would ensure that competitors, and the investors who take a bet on them, would have the space to flourish. Digital advertisers would suddenly have multiple companies vying for their dollars.
Even Facebook shareholders would probably benefit, as shareholders often do in the years after a company’s split. The value of the companies that made up Standard Oil doubled within a year of its being dismantled and had increased by fivefold a few years later. Ten years after the 1984 breakup of AT&T, the value of its successor companies had tripled.
But the biggest winners would be the American people. Imagine a competitive market in which they could choose among one network that offered higher privacy standards, another that cost a fee to join but had little advertising, and another that allowed users to customize and tweak their feeds as they saw fit.
No one knows exactly what Facebook’s competitors would offer to differentiate themselves. That’s exactly the point.
The Justice Department faced similar questions of social costs and benefits with AT&T in the 1950s. AT&T had a monopoly on phone services and telecommunications equipment. The government filed suit under antitrust laws, and the case ended with a consent decree that required AT&T to release its patents and refrain from expanding into the nascent computer industry. This resulted in an explosion of innovation, greatly increasing follow-on patents and leading to the development of the semiconductor and modern computing. We would most likely not have iPhones or laptops without the competitive markets that antitrust action ushered in.
Adam Smith was right: Competition spurs growth and innovation.
Just breaking up Facebook is not enough. We need a new agency, empowered by Congress to regulate tech companies. Its first mandate should be to protect privacy.
The Europeans have made headway on privacy with the General Data Protection Regulation, a law that guarantees users a minimal level of protection. A landmark privacy bill in the United States should specify exactly what control Americans have over their digital information, require clearer disclosure to users and provide enough flexibility to the agency to exercise effective oversight over time. The agency should also be charged with guaranteeing basic interoperability across platforms.
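To make “interoperability” concrete, here is a hypothetical sketch of the kind of portable export such a rule might require. The schema and field names are invented for illustration; no such standard currently exists.

```python
import json

# Hypothetical portable-profile export. The schema below is invented;
# an interoperability mandate would define a real, agreed-upon standard.
profile = {
    "format_version": "0.1",
    "display_name": "Example User",
    "friends": ["alice@network-a.example", "bob@network-b.example"],
    "photos": [
        {"url": "https://example.com/photo1.jpg", "taken": "2018-06-01"},
    ],
}

# Export from one network...
payload = json.dumps(profile, indent=2)

# ...and import into a competitor that understands the same schema.
imported = json.loads(payload)
print(imported["display_name"], "carried over", len(imported["friends"]), "friends")
```

The point of a mandated, common schema is that switching costs collapse: a user’s social graph and history stop functioning as a moat that locks them into a single platform.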
Finally, the agency should create guidelines for acceptable speech on social media. This idea may seem un-American — we would never stand for a government agency censoring speech. But we already have limits on yelling “fire” in a crowded theater, child pornography, speech intended to provoke violence, and false statements to manipulate stock prices.
We will have to create similar standards that tech companies can use. These standards should of course be subject to the review of the courts, just as any other limits on speech are. But there is no constitutional right to harass others or live-stream violence.
These are difficult challenges. I worry that government regulators will not be able to keep up with the pace of digital innovation. I worry that more competition in social networking might lead to a conservative Facebook and a liberal one, or that newer social networks might be less secure if government regulation is weak. But sticking with the status quo would be worse: If we don’t have public servants shaping these policies, corporations will.
Some people doubt that an effort to break up Facebook would win in the courts, given the hostility on the federal bench to antitrust action, or that this divided Congress would ever be able to muster enough consensus to create a regulatory agency for social media.
But even if breakup and regulation aren’t immediately successful, simply pushing for them will bring more oversight. The government’s case against Microsoft — that it illegally used its market power in operating systems to force its customers to use its web browser, Internet Explorer — ended in 2001 when George W. Bush’s administration abandoned its effort to break up the company. Yet that prosecution helped rein in Microsoft’s ambitions to dominate the early web.
Similarly, the Justice Department’s 1970s suit accusing IBM of illegally maintaining its monopoly on computer sales ended in a stalemate. But along the way, IBM changed many of its behaviors. It stopped bundling its hardware and software, chose an extremely open design for the operating system in its personal computers and did not exercise undue control over its suppliers. Professor Wu has written that this “policeman at the elbow” led IBM to steer clear “of anything close to anticompetitive conduct, for fear of adding to the case against it.”
We can expect the same from even an unsuccessful suit against Facebook.
Finally, an aggressive case against Facebook would persuade other behemoths like Google and Amazon to think twice about stifling competition in their own sectors, out of fear that they could be next. If the government were to use this moment to resurrect an effective competition standard that takes a broader view of the full cost of “free” products, it could affect a whole host of industries.
The alternative is bleak. If we do not take action, Facebook’s monopoly will become even more entrenched. With much of the world’s personal communications in hand, it can mine that data for patterns and trends, giving it an advantage over competitors for decades to come.
I take responsibility for not sounding the alarm earlier. Don Graham, a former Facebook board member, has accused those who criticize the company now of having “all the courage of the last man leaping on the pile at a football game.” The financial rewards I reaped from working at Facebook radically changed the trajectory of my life, and even after I cashed out, I watched in awe as the company grew. It took the 2016 election fallout and Cambridge Analytica to awaken me to the dangers of Facebook’s monopoly. But anyone suggesting that Facebook is akin to a pinned football player misrepresents its resilience and power.
An era of accountability for Facebook and other monopolies may be beginning. Collective anger is growing, and a new cohort of leaders has begun to emerge. On Capitol Hill, Representative David Cicilline has taken a special interest in checking the power of monopolies, and Senators Amy Klobuchar and Ted Cruz have joined Senator Warren in calling for more oversight. Economists like Jason Furman, a former chairman of the Council of Economic Advisers, are speaking out about monopolies, and a host of legal scholars like Lina Khan, Barry Lynn and Ganesh Sitaraman are plotting a way forward.
This movement of public servants, scholars and activists deserves our support. Mark Zuckerberg cannot fix Facebook, but our government can.
Content recommendation algorithms reward engagement metrics. One of the metrics they reward is getting a user’s attention, briefly. In the real world, someone can get my attention by screaming that there is a fire. Belief that there is a fire and interest in fire are not necessary for my attention to be grabbed by a warning of fire. All that is needed is a desire for self-preservation and a degree of trust in the source of the knowledge.
Compounding the problem, since sensational falsehoods improve engagement and people make money off of videos, there is an incentive in place encouraging the proliferation of attention-grabbing false information.
In a better world, this behavior would not be incentivized. In a better world, reputation metrics would let a person recognize that the boy who cried wolf was the one who posted the attention-grabbing video. Humanity has long known that repeated lying carries consequences; we have fables warning liars away from lying.
I don’t think making that consequence explicit, as it is in many real-world cases of lying publicly in the most attention-grabbing way possible, would be unreasonable.
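The reputation metric imagined here could look something like the following sketch. The discount formula and numbers are invented for illustration and are not drawn from any real platform.

```python
# Hypothetical "boy who cried wolf" discount: a source's raw engagement
# score is scaled down by its record of debunked posts. The formula is
# invented for illustration only.

def reputation(debunked, total):
    """1.0 for a clean record, falling toward 0 as debunked posts accumulate."""
    if total == 0:
        return 1.0
    return 1.0 - debunked / total

def adjusted_score(engagement, debunked, total):
    """Engagement discounted by the source's track record."""
    return engagement * reputation(debunked, total)

# A sensational channel with a trail of debunked videos loses its raw
# engagement advantage over a duller but honest one.
liar = adjusted_score(1000, debunked=8, total=10)
honest = adjusted_score(300, debunked=0, total=10)
print(f"habitual liar: {liar:.0f}, clean record: {honest:.0f}")
```

A discount like this would flip the incentive the earlier comment describes: once lying erodes future reach, the most profitable strategy is no longer the most sensational one.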
Google recommends that stuff to me, and I don’t believe in it or watch it. Watch math videos, get flat-earth recommendations. Watch a few videos about the migration of Germanic tribes in Europe during the decline of the Roman Empire, get white-supremacist recommendations.
My best guess? They want you to sit and watch YouTube for hours, so they recommend stuff watched by people who sit and watch YouTube for hours.
This stuff reminds me of the word “excitotoxins,” which is based on a silly idea yet seems to capture the addictive effect of stimulation. People are stimulated by things that seem novel, controversial, and dangerous. People craving stimulation will prefer provocative junk over unsurprising truth.
I very much hope this isn’t just words. The core problem, IMO, is that content that makes us angry, anxious or jealous is a much better driver of clicks than content that makes us happy. I’m sure Facebook knows this. If they really mean it, they’ll accept that they will make less money as a result of this change. It would be the right decision in the long term, but the short term will hurt.