How the Trump Campaign’s Mobile App Is Collecting Massive Amounts of Voter Data

Since 2018, when the Trump campaign set up a shell company called American Made Media Consultants, an entity it describes as a “vendor responsible for arranging and executing media buys and related services at fair market value,” it’s been nearly impossible to know whom the campaign is paying, for what, and how much. But, on May 27th, Alan Knitowski, the C.E.O. of Phunware, an Austin-based ad broker and software company, announced a “strategic relationship with American Made Media Consultants on the development, launch and ongoing management and evolution of the Trump-Pence 2020 Reelection Campaign’s mobile application portfolio.” Although Phunware has never shown up in the campaign’s F.E.C. reports, its S.E.C. filings show that, since last year, it has been paid around four million dollars by A.M.M.C.

On its face, Phunware seems like a strange choice to develop the campaign’s app. Before the company began working for President Trump, its software was being used in relatively few applications, the most popular being a horoscope app. And, since 2019, it has been embroiled in a lawsuit with Uber, a former client of the company’s ad-placement business. The dispute stems from a yearlong investigation by two former Phunware employees who discovered that the company was pretending to place Uber ads on Web sites like CNN when, in fact, they were appearing on pornography sites, among others, if they appeared at all. But, according to former Phunware employees and business associates, the company’s value to the Trump campaign is not in software development. “The Trump campaign is not paying Phunware four million dollars for an app,” a former business partner of the company told me. “They are paying for data. They are paying for targeted advertising services. Imagine if every time I open my phone I see a campaign message that Joe Biden’s America means we’re going to have war in the streets. That’s the service the Trump campaign and Brad Parscale”—the Trump campaign’s senior adviser for data and digital operations—“have bought from Phunware. An app is just part of the package.”

The Trump 2020 app is a massive data-collection tool in its own right. When it launched, on April 23rd, Parscale, who was then Trump’s campaign manager, urged his followers on Facebook to “download the groundbreaking Official Trump 2020 App—unlike other lame political apps you’ve seen.” Despite the hype, the 2020 app recapitulates many of the functions found on the 2016 app. There’s a news feed with Trump’s social-media posts, an events calendar, and recorded videos. The “gaming” features that distinguished the 2016 app are still prominent—a “Trump’s army” member who accumulates a hundred thousand points by sharing contacts or raising money is promised a photograph with the President, while other members can use points to get discounts on maga gear. Users are prompted to invite friends to download the app—more points!—and can use the app to sign up to make calls on behalf of the campaign, to be a poll watcher, to register voters, and to get tickets to virtual and in-person events.

The most obvious new feature on the 2020 app is a live news broadcast, carefully curated by the campaign to push the President’s talking points. It is hosted by a cast of campaign surrogates, including Lara Trump, Eric Trump’s wife, and Kimberly Guilfoyle, Donald Trump, Jr.,’s girlfriend and the campaign’s national finance chair. There are also channels aimed at particular demographic groups, among them Women for Trump, Black Voices for Trump, and Latinos for Trump. Though it is a crude approximation of a traditional news outlet, the Trump app enables users to stay fully sequestered within the fact-optional Trump universe. “I think everything we do is to counter the media,” Parscale told Reuters in June. “This is another tool in the tool shed to fight that fight, and it’s a big tool.” In May, after Twitter labelled one of Trump’s tweets as being in violation of its standards, sparking renewed claims of liberal-media censorship of conservatives (despite the fact that the tweet was not taken down), downloads of the campaign app soared.

To access the Trump app, users must share their cell-phone numbers with the campaign. “The most important, golden thing in politics is a cellphone number,” Parscale told Reuters. “When we receive cellphone numbers, it really allows us to identify them across the databases. Who are they, voting history, everything.” Michael Marinaccio, the chief operating officer of Data Trust, a private Republican data company, said recently that “what’s new this year, or at least a sense of urgency, is getting as many cell-phone numbers as we can in the voter file data.” An effective way to do that is to entice supporters to share not only their own cell-phone numbers with the campaign but those of their contacts as well. One estimate, by Eliran Sapir, the C.E.O. of Apptopia, a mobile-analytics company, is that 1.4 million app downloads could provide upward of a hundred million phone numbers. This will enable the Trump campaign to find and target people who have not consented to handing over their personal information. It’s not unlike how Cambridge Analytica was able to harvest the data of nearly ninety million unsuspecting Facebook users, only this time it is one’s friends, family, and acquaintances who are willfully handing over the data for a chance to get a twenty-five-dollar discount on a maga hat.
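Sapir’s estimate is easy to sanity-check with back-of-envelope arithmetic; the figure of roughly seventy contacts per phone used below is an illustrative assumption, not a number from his analysis:

```python
# Back-of-envelope check of the contact-harvesting estimate. The average of
# seventy contacts per phone is an assumed figure for illustration only.
downloads = 1_400_000
contacts_per_phone = 70
print(f"{downloads * contacts_per_phone:,}")  # 98,000,000 -- on the order of a hundred million
```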

By contrast, the new Biden app still collects data on users, but it outlines the specific uses of that data and doesn’t automatically collect the e-mail and phone numbers of users’ friends and family. “Unlike the Biden app, which seeks to provide users with awareness and control of the specific uses of their data, the Trump app collects as much as it can using an opt-out system and makes no promises as to the specific uses of that data,” Samuel Woolley, the director of the propaganda research project at the University of Texas’s Center for Media Engagement, told me. “They just try to get people to turn over as much as possible.”

A Trump spokesperson told me, “The Trump 2020 app was built by Phunware as a one-stop destination with a variety of tools to get voters engaged with President Trump’s reëlection campaign.” Among Phunware’s main contributions to the app’s data-mining capabilities is a “location experience kit,” which the company had previously marketed to hospitals and malls to help people navigate unfamiliar buildings. Visitors could pair their phone’s Bluetooth with beacons set up throughout the facility. Initially, the Trump 2020 app was built around big rallies, where this feature would have been useful. According to one former employee, however, the company’s location software, which functions even when the app is not open, may be capable of sucking up more than geographic coördinates. It could potentially “sniff out all of the information you have on your phone. Any sort of registration data, your name, your phone number, potentially your Social Security number, and other pieces of data. It could sniff out how many apps you have on your phone, what type of apps you have on your phone, what apps you deleted recently, how much time you’ve spent in an app, and your dwell time at various specific locations. It could give a very intimate picture of that individual and their relationship with that mobile device.” (Phunware did not respond to multiple requests for comment.)
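To make the mechanics concrete, here is a minimal sketch of how dwell time at specific locations might be derived from Bluetooth-beacon sightings. The beacon names, the five-minute gap threshold, and the data shapes are assumptions for illustration; nothing here is drawn from Phunware’s actual code.

```python
from datetime import timedelta

# Hypothetical mapping of beacon identifiers to named places inside a venue.
BEACON_PLACES = {"beacon-entrance": "entrance", "beacon-stage": "stage area"}

def dwell_times(sightings, gap=timedelta(minutes=5)):
    """Estimate how long a device lingered near each beacon.

    `sightings` is an iterable of (timestamp, beacon_id) tuples, assumed to be
    sorted by time. Consecutive sightings of the same beacon less than `gap`
    apart are treated as one continuous visit and their durations accumulate.
    """
    totals = {}
    prev_time, prev_beacon = None, None
    for ts, beacon in sightings:
        if beacon == prev_beacon and ts - prev_time < gap:
            place = BEACON_PLACES.get(beacon, beacon)
            totals[place] = totals.get(place, timedelta()) + (ts - prev_time)
        prev_time, prev_beacon = ts, beacon
    return totals
```

Aggregated across many devices, per-place dwell times of this sort are part of the “intimate picture” the former employee describes.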

In 2017, as Phunware was moving into the election space, the company’s Web site announced, “As soon as the first few campaigns recognize the value of mobile ad targeting for voter engagement, the floodgates will open. Which campaign will get there first and strike it rich?” A year later, according to people familiar with the effort, the company used its location-tracking capabilities to create a lobbying campaign on behalf of a health-care company aiming to influence legislators in Georgia. It put a “geofence” around the governor’s mansion that recorded the I.D. of every device that went in and out of the building, and then used the I.D.s to send targeted messages to those phones (likely including the governor’s) about the legislation it was aiming to influence. The legislation passed. Phunware’s leadership has also discussed its ability to geofence polling places, according to people who were present during these discussions, in order to send targeted campaign ads to voters as they step into the voting booth. While it is illegal to advertise in the vicinity of the polls, using location data in this way to send targeted ads could enable a campaign to breach that border surreptitiously.
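As a rough illustration of the technique described above, the sketch below checks location pings against a circular geofence and collects the advertising IDs of devices seen inside it; the coordinates, radius, and data shapes are hypothetical, not details of Phunware’s system.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two coordinates."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical fence: a center point and radius covering a single building.
FENCE_CENTER = (33.0000, -84.0000)  # placeholder coordinates
FENCE_RADIUS_M = 150

def devices_inside(location_pings):
    """Collect advertising IDs of devices whose pings fall inside the fence.

    `location_pings` is an iterable of (device_id, lat, lon) tuples, the kind
    of data a location SDK or ad exchange might surface. The IDs collected
    here could then be used as a targeted-advertising audience.
    """
    audience = set()
    for device_id, lat, lon in location_pings:
        if haversine_m(lat, lon, *FENCE_CENTER) <= FENCE_RADIUS_M:
            audience.add(device_id)
    return audience
```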

Phunware’s data collection on behalf of the Trump campaign likely extends beyond the app as well. According to Phunware’s chief operating officer, Randall Crowder, the company has created a “data exchange” that “enables digital marketers to design custom audiences within minutes using geographic, interest, intent, and demographic segments . . . high-quality G.P.S. location data points from one hundred million-plus devices in the United States to increase scale of location-based audiences.” In its promotional materials, the company also claims to have unique device I.D.s for more than a billion mobile devices worldwide, and to have developed what it calls a Knowledge Graph—a “consumer-centric collection of actions, preferences, characteristics and predicted behavior” from the data it has siphoned from mobile phones and tablets. Much like Facebook’s social graph, which has been described as “the global mapping of everybody and how they’re related,” this enables the company to quickly sort through large data sets, uncovering connections and relations that otherwise would be obscured. For example: middle-aged women who live alone, rarely vote, own guns, and live in a border state.
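Queries against such a graph are mundane to express once the attributes have been inferred. Here is a minimal sketch of the example segment above, run against made-up, flattened profiles; the attribute names and values are assumptions for illustration, not Phunware’s schema.

```python
# Each profile is a flat dict of inferred attributes keyed by device ID,
# a simplified stand-in for one "row" of a consumer knowledge graph.
profiles = {
    "device-123": {"age": 52, "gender": "F", "household_size": 1,
                   "votes_often": False, "gun_owner": True, "state": "AZ"},
    "device-456": {"age": 29, "gender": "M", "household_size": 3,
                   "votes_often": True, "gun_owner": False, "state": "OH"},
}

BORDER_STATES = {"AZ", "CA", "NM", "TX"}

def build_audience(profiles):
    """Return device IDs matching the segment described in the text: middle-aged
    women who live alone, rarely vote, own guns, and live in a border state."""
    return [
        device_id for device_id, p in profiles.items()
        if p["gender"] == "F" and 45 <= p["age"] <= 64
        and p["household_size"] == 1 and not p["votes_often"]
        and p["gun_owner"] and p["state"] in BORDER_STATES
    ]

print(build_audience(profiles))  # ['device-123']
```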

So how did Phunware obtain a billion unique device I.D.s? As the company described it to the S.E.C., they were collected from phones and tablets that use Phunware’s software. But, according to people who have worked with the company, in addition to the data it obtains through its software, Phunware has been using its ad-placement business as a wholesale data-mining operation. When it bids to place an ad in an app like, for example, Pandora, it scoops up the I.D. of every phone and tablet that would have been exposed to the ad, even if it loses the bid. By collecting and storing this information, the company is able to compile a fairly comprehensive picture of every app downloaded on those devices, and any registration data a user has shared in order to use the app.
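A minimal sketch of what that kind of bidstream logging might look like is below; the request fields loosely echo the shape of real-time-bidding requests (device ID, app bundle, coarse location) but are simplified, and none of this is drawn from Phunware’s actual systems.

```python
import json
import sqlite3

# An in-memory store for observed devices; a real operation would feed a
# large-scale pipeline rather than SQLite.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE devices
              (device_id TEXT, app_bundle TEXT, lat REAL, lon REAL, ts TEXT)""")

def handle_bid_request(raw_request, timestamp):
    """Record what a bid request reveals, then decide whether to bid.

    The request is logged whether or not the bidder wins the auction, or even
    submits a bid at all -- which is the crux of the practice described above.
    """
    req = json.loads(raw_request)
    device = req.get("device", {})
    geo = device.get("geo", {})
    db.execute("INSERT INTO devices VALUES (?, ?, ?, ?, ?)",
               (device.get("ifa"), req.get("app", {}).get("bundle"),
                geo.get("lat"), geo.get("lon"), timestamp))
    db.commit()
    return None  # declining to bid still leaves the record behind
```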

This information can yield rich demographic data. If a campaign is looking for young men with an affinity for guns, for instance, it might look at who has downloaded both Call of Duty and CCW, the Concealed Carry Fifty State app. Then, using the location data associated with the device I.D., the data can be unmasked and linked to an individual. Once a campaign knows who someone is, and where a person lives, it is not difficult to start building a voter file, and using this information to tailor ads and messages.
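A simplified sketch of that unmasking step, under heavy assumptions (rounded overnight coordinates as a stand-in for an address match, and a toy voter file keyed by location), might look like this:

```python
from collections import Counter

def likely_home(pings):
    """Crude 'home' inference: the rounded coordinate seen most often overnight.

    `pings` is a list of (hour_of_day, lat, lon) tuples for a single device ID;
    rounding to three decimal places groups pings within roughly a hundred meters.
    """
    overnight = [(round(lat, 3), round(lon, 3))
                 for hour, lat, lon in pings if hour >= 22 or hour < 6]
    return Counter(overnight).most_common(1)[0][0] if overnight else None

def match_to_voter_file(installed_apps, home, voter_file):
    """Link a device to a voter-file record by address and keep the interest signals.

    `voter_file` maps the rounded (lat, lon) of a registered address to a record;
    the app names echo the example in the text (Call of Duty, CCW).
    """
    interests = {"Call of Duty", "CCW"} & set(installed_apps)
    record = voter_file.get(home)
    return dict(record, inferred_interests=sorted(interests)) if record else None
```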

Tom Wheeler, the former chair of the Federal Communications Commission, told me, “These are Cambridge Analytica-like techniques. It’s collecting the descriptive power of data from multiple sources, most of which the consumer doesn’t even know are being collected. And that’s what Cambridge Analytica did.”

In late July, a group of lawmakers, led by Senator Bill Cassidy, Republican of Louisiana, and his Democratic colleague Ron Wyden, of Oregon, sent a letter to the chair of the Federal Trade Commission asking him to investigate whether using bidding information in this way constitutes an unfair and deceptive practice. “Few Americans realize that companies are siphoning off and sharing that ‘bidstream’ data to compile exhaustive dossiers about them,” they wrote, “which are then sold to hedge funds, political campaigns, and even to the government without court orders.” According to Charles Manning, the C.E.O. of Kochava, a data marketplace, “There are no regulatory bodies that appear to be aware of the technological foundations upon which digital advertising operates. This is a challenge, because without understanding how programmatic ads are bought and sold, regulators face an uphill battle in applying regulation that deals with opaque supply chains where fraudulent behavior can flourish.”

The Trump app, at least, is explicit about what it expects from its users: “You may be asked to provide certain information, including your name, username, password, e-mail, date of birth, gender, address, employment information, and other descriptive information,” the app’s privacy policy states. “The Services [of the app] may include features that rely on the use of information stored on, or made available through, your mobile Device. . . . We . . . reserve the right to store any information about the people you contact via the Services. . . . We reserve the right to use, share, exchange and/or disclose to DJTFP affiliated committee and third parties any of your information for any lawful purpose.” (When I asked Woolley why the campaign was asking supporters to share their contacts, since it already had access to them through the app’s permissions, he pointed out that, when a user shares their contacts to earn points, “that actually sends out messages to your contacts asking them to download the app. So rather than just getting data on your friends and family, they are able to also reach out to them using you as a reference.”)

The policy also notes that the campaign will be collecting information gleaned from G.P.S. and other location services, and that users will be tracked as they move around the Internet. Users also agree to give the campaign access to the phone’s Bluetooth connection, calendar, storage, and microphone, as well as permission to read the contents of their memory card, modify or delete the contents of the card, view the phone status and identity, view its Wi-Fi connections, and prevent the phone from going to sleep. These permissions give the Trump data operation access to the intimate details of users’ lives, the ability to listen in on those lives, and to follow users everywhere they go. It’s a colossal—and essentially free—data-mining enterprise. As Woolley and his colleague Jacob Gursky wrote in MIT Technology Review, the Trump 2020 app is “a voter surveillance tool of extraordinary power.”

I learned this firsthand after downloading the Trump 2020 app on a burner phone I bought in order to examine it, using an alias and a new e-mail address. Two days later, the President sent me a note, thanking me for joining his team. Lara Trump invited me (for a small donation) to become a Presidential adviser. Eric Trump called me one of his father’s “FIERCEST supporters from the beginning.” But the messages I began getting from the Trump campaign every couple of hours were sent not only to the name and address I’d used to access the app. They were also sent to the e-mail address and name associated with the credit card I’d used to buy the phone and its SIM card, neither of which I had shared with the campaign. Despite my best efforts, they knew who I was and where to reach me.

Shoshana Zuboff on surveillance capitalism | VPRO Documentary

Harvard professor Shoshana Zuboff has written a monumental and alarming book about the new economic order. “The Age of Surveillance Capitalism” reveals how the biggest tech companies deal with our data. What is surveillance capitalism?

In this documentary, Zuboff takes the lid off Google and Facebook and reveals a merciless form of capitalism in which it is not natural resources but citizens themselves that serve as the raw material. How can citizens regain control of their data?

It is 2000, and the dot-com crisis has caused deep wounds. How will the startup Google survive the bursting of the internet bubble? Founders Larry Page and Sergey Brin no longer know how to turn the tide. By chance, Google discovers that the “residual data” people leave behind in their internet searches is extremely valuable and tradable.

This residual data can be used to predict the behavior of the internet user. Internet advertisements can, therefore, be used in a very targeted and effective way. A completely new business model is born: “surveillance capitalism.”

Full Interview: Edward Snowden On Trump, Privacy, And Threats To Democracy | The 11th Hour | MSNBC

On the eve of his memoir ‘Permanent Record’ being published, NSA whistleblower Edward Snowden talked at length from Moscow with MSNBC’s Brian Williams in an exclusive interview. This is their discussion in its entirety, edited down slightly for clarity.

How the U.S. Government Obtains and Uses Cellphone Location Data

The U.S. government is using app-generated marketing data based on the movements of millions of cellphones around the country for some forms of law enforcement. We explain how such data is being gathered and sold.

Consumers Are Becoming Wise to Your Nudge

“Only 2 rooms left? They don’t expect me to believe that, do they? You see that everywhere.”

I leave with a wry smile. The client won’t be happy, but at least the project findings are becoming clear. Companies in certain sectors use the same behavioral interventions repeatedly. Hotel booking websites are one example. Their sustained, repetitive use of scarcity (e.g., “Only two rooms left!”) and social proof (“16 other people viewed this room”) messaging is apparent even to a casual browser.

For Chris, the research participant quoted above, the implication was clear: this “scarcity” was just a sales ploy, not to be taken seriously.

My colleagues and I at Trinity McQueen, an insight consultancy, wondered: was Chris’s reaction exceptional, or would the general public spot a pattern in the way marketers use behavioral interventions to influence their behavior? Are scarcity and social proof messages so overused on travel websites that the average person no longer believes them? Do they undermine brand trust?

The broader question, one essential to both academics and practitioners, is how a world saturated with behavioral interventions might no longer resemble the one in which those interventions were first studied. Are we aiming at a moving target?

.. We started by asking participants to consider a hypothetical scenario: using a hotel booking website to find a room to stay in the following week. We then showed a series of nine real-world scarcity and social proof claims made by an unnamed hotel booking website.

Two thirds of the British public (65 percent) interpreted examples of scarcity and social proof claims used by hotel booking websites as sales pressure. Half said they were likely to distrust the company as a result of seeing them (49 percent). Just one in six (16 percent) said they believed the claims. 

The results surprised us. We had expected there to be cynicism among a subgroup—perhaps people who booked hotels regularly, for example. The verbatim commentary from participants showed that people see scarcity and social proof claims frequently online, most commonly in the travel, retail, and fashion sectors. They questioned the truth of these claims but were resigned to their use:

“It’s what I’ve seen often on hotel websites—it’s what they do to tempt you.”

“Have seen many websites do this kind of thing so don’t really feel differently when I do see it.”

In a follow-up question, a third (34 percent) expressed a negative emotional reaction to these messages, choosing words like contempt and disgust from a precoded list. Crucially, this was because they ascribed bad intentions to the website. The messages were, in their view, designed to induce anxiety:

 “… almost certainly fake to try and panic you into buying without thinking.”

“I think this type of thing is to pressure you into booking for fear of losing out and not necessarily true.”

For these people, not only are these behavioral interventions not working but they’re having the reverse effect. We hypothesize that psychological reactance is at play: people kick back when they feel they are being coerced. Several measures in our study support this. A large minority (40 percent) of the British public agreed that “when someone forces me to do something, I feel like doing the opposite.” This is even more pronounced in the commercial domain: seven in ten agreed that “when I see a big company dominating a market I want to use a competitor.” Perhaps we Brits are a cynical bunch, but any behavioral intervention can backfire if people think it is a cynical ploy.

Heuristics are dynamic, not static

Stepping back from hotel booking websites, this is a reminder that heuristics are not fixed, unchanging. The context for any behavioral intervention is dynamic, operating in “a coadapting loop between mind and world.” Repeated exposure to any tactic over time educates you about its likely veracity in that context. Certain tactics (e.g., scarcity claims) in certain situations (e.g., in hotel booking websites) have been overused. Our evidence suggests their power is now diminished in these contexts.

Two questions for the future

In our study, we focused on a narrow commercial domain. It would be unwise to make blanket generalizations about the efficacy of all behavioral interventions based on this evidence alone. And yet nagging doubts remain.

#1: Like antibiotic resistance, could overuse in one domain undermine the effectiveness of interventions for everyone?

If so, the toolkit of interventions could conceivably shrink over time as commercial practitioners overuse interventions to meet their short-term goals. Most would agree that interventions used to boost prosocial behavior in sectors such as healthcare have much more consequential outcomes. In time, prosocial practitioners may be less able to rely on the most heavily used tactics from the commercial domains such as social proof and scarcity messaging.

#2: How will the growing backlash against big tech and “surveillance capitalism” affect behavioral science?

Much of the feedback from the public relates to behavioral interventions they have seen online, not offline. Many of the strategies for which big tech companies are critiqued center on the undermining of a user’s self-determination. The public may conflate the activities of these seemingly ubiquitous companies (gathering customer data in order to predict and control behavior) with those of the behavioral science community. If so, practitioners might find themselves under much greater scrutiny.

Feedback loops matter

There probably was never an era when simple behavioral interventions gave easy rewards. Human behavior—context-dependent, and driven by a multitude of interacting influences—will remain gloriously unpredictable.


The lesson I take from our study? Feedback loops affect the efficacy of behavioral interventions more than we realize. Just because an intervention was successful five years ago does not mean it will be successful today. Practitioners should pay as much attention to the ecosystem their interventions operate in as their customers do. There’s no better place to start than spending time with them—talking, observing, and empathizing.

We should also consider our responsibilities as we use behavioral interventions. Marketers should design nudges with more than the transaction in mind, not only because it is ethical or because they will be more effective over time but also because they bear responsibility toward the practitioner community as a whole. We owe an allegiance to the public, but also to each other.

The Ethical Dilemma Facing Silicon Valley’s Next Generation

Stanford has established itself as the epicenter of computer science, and a farm system for the tech giants. Following major scandals at Facebook, Google, and others, how is the university coming to grips with a world in which many of its students’ dream jobs are now vilified?

At Stanford University’s business school, above the stage where Elizabeth Holmes once regurgitated the myths of Silicon Valley, there now hangs a whistle splattered in blood. More than 500 people have gathered to hear the true story of Theranos, the $9 billion blood-testing company Holmes launched in 2004 as a Stanford dropout with the help of one of the school’s famed chemical engineering professors.

When Holmes was weaving the elaborate lies that ultimately led to the dissolution of her company, she leaned heavily on tech truisms that treat dogged pursuit of market domination as a virtue. “The minute that you have a backup plan, you’ve admitted that you’re not going to succeed,” she said onstage in 2015. But the whistleblowers onstage, Tyler Shultz and Erika Cheung, who faced legal threats from Theranos for speaking out, push back against the idea of pursuing a high-minded vision at all costs. “We don’t know how to handle new technologies anymore,” Cheung says, “and we don’t know the consequences necessarily that they’ll have.”

The words resonate in the jam-packed auditorium, where students line up afterward to nab selfies with and autographs from the whistleblowers. Kendall Costello, a junior at Stanford, idolized Holmes in high school and imagined working for Theranos one day. Now she’s more interested in learning how to regulate tech than building the next product that promises to change the world. “I really aspired to kind of be like her in a sense,” Costello says. “Then two years later, in seeing her whole empire crumble around her, in addition to other scandals like Facebook’s Cambridge Analytica and all these things that are coming forward, I was just kind of disillusioned.”

..But the endless barrage of negative news in tech, ranging from Facebook fueling propaganda campaigns by Russian trolls to Amazon selling surveillance software to governments, has forced Stanford to reevaluate its role in shaping the Valley’s future leaders. Students are reconsidering whether working at Google or Facebook is landing a dream job or selling out to craven corporate interests. Professors are revamping courses to address the ethical challenges tech companies are grappling with right now. And university president Marc Tessier-Lavigne has made educating students on the societal impacts of technology a tentpole of his long-term strategic plan.

As tech comes to dominate an ever-expanding portion of our daily lives, Stanford’s role as an educator of the industry’s engineers and a financier of its startups grows increasingly important. The school may not be responsible for creating our digital world, but it trains the architects. And right now, students are weighing tough decisions about how they plan to make a living in a world that was clearly constructed the wrong way. “To me it seemed super empowering that a line of code that I wrote could be used by millions of people the next day,” says Matthew Sun, a junior majoring in computer science and public policy, who helped organize the Theranos event. “Now we’re realizing that’s maybe not always a good thing.”

.. Because membership in Stanford’s Computer Forum, which runs the career fairs, costs $21,000 per year, the fairs tend to attract only the most renowned firms.

“Honestly, I think they’re horrific,” says Vicki Niu, a 2018 Stanford graduate who majored in computer science. She recalls her first career fair being as hectic as a Black Friday sale, with the put-on exclusivity of a nightclub. (Students must present their Stanford IDs to enter the tent.) But like other freshmen, she found herself swept up in the pursuit of an internship at a large, prestigious tech firm. “Everybody is trying to get interviews at Google and Facebook and Palantir,” she says. “There’s all this hype around them. Part of my mind-set coming in was that I wanted to learn, but I think there was definitely also this big social pressure and this desire to prove yourself and to prove to people that you’re smart.”

Stanford’s computer science department has long been revered for its graduate programs—Google was famously built as a research project by Ph.D. students Larry Page and Sergey Brin—but the intense interest among undergrads is relatively new. In 2007, the school conferred more bachelor’s degrees in English (92) than computer science (70). The next year, though, Stanford revamped its CS curriculum from a one-size-fits-all education to a more flexible framework that funneled students along specialized tracks such as graphics, human-computer interaction, and artificial intelligence. “We needed to make the major more attractive, to show that computer science isn’t just sitting in a cube all day,” Mehran Sahami, a computer science professor who once worked at Google, said later.

The change in curriculum coincided with an explosion of wealth and perceived self-importance in the Valley. The iPhone opened up the potential for thousands of new businesses built around apps, and when its creator died he earned rapturous comparisons to Thomas Edison. Facebook emerged as the fastest-growing internet company of all time, and the Arab Spring made its influence seem benign rather than ominous. As the economy recovered from the recession, investors decided to park their money in startups like Uber and Airbnb that might one day become the next Google or Amazon. A 2013 video by the nonprofit Code.org featured CEOs, Chris Bosh, and will.i.am comparing computer programmers to wizards, superheroes, and rock stars.

Stanford and its students eagerly embraced this cultural shift. John Hennessy, a computer science professor who was president of the university from 2000 to 2016, served on Google’s board of directors and is now the executive chairman of Google parent company Alphabet. LinkedIn founder and Stanford alum Reid Hoffman introduced a new computer science course called Blitzscaling and brought in high-profile entrepreneurs to teach students how to “build massive organizations, user bases, and businesses, and to do so at a dizzyingly rapid pace.” (Elizabeth Holmes was among the speakers.) Mark Zuckerberg became an annual guest in Sahami’s popular introductory computer science class. “It just continued to emphasize how privileged Stanford students are in so many ways, that we have the CEO of Facebook taking time out of his day to come talk to us,” says Vinamrata Singal, a 2016 graduate who had Zuckerberg visit her class freshman year. “It felt really surreal and it did make me excited to want to continue studying computer science.”

In 2013, Stanford began directly investing in students’ companies, much like a venture capital firm. Even without direct Stanford funding, the school’s proximity to wealth helped plenty of big ideas get off the ground. Evan Spiegel, who was a junior at Stanford in 2011 when he started working on Snapchat, connected with his first investor via a Stanford alumni network on Facebook. “Instead of starting a band or trying to make an independent movie or blogging, people would get into code,” says Billy Gallagher, a 2014 graduate who was the editor-in-chief of the school newspaper. “It was a similar idea to, ‘Here’s our band’s vinyl or our band’s tape. Come see us play.’”

..But it’s not just that coding was a creative outlet, as is often depicted in tech origin stories. Working at a big Silicon Valley company also became a path to a specific kind of upper-crust success that students at top schools are groomed for. “Why do so many really bright young kids go into consulting and banking?” asks Gallagher. “They’re prestigious so your parents can be proud of you, they pay really well, and they put you on a career path to open up new doors. Now we’re seeing that’s happening a lot with Google and Facebook.”

By the time Niu arrived in 2014, computer science had become the most popular major on campus and 90 percent of undergrads were taking at least one CS course. In high school, her knowledge of Silicon Valley hadn’t extended much further than The Internship, a Vince Vaughn–Owen Wilson comedy about working at Google that doubled as a promotional tool for the search giant. She soon came to realize that landing a job at one of the revered tech giants or striking it rich with an app were Stanford’s primary markers of success. Her coursework was largely technical, focusing on the process of coding and not so much on the outcomes. And in the rare instances when Niu heard ethics discussed in class, it was often framed around the concerns of tech’s super-elite, like killer robots destroying humanity in the future. “In my computer science classes and just talking to other people who were interested in technology, it didn’t seem like anybody really cared about social impact,” she says. “Or if they did, they weren’t talking about it.”

In the spring of her freshman year, Niu and two other students hosted a meeting to gauge interest in a new group focused on socially beneficial uses of technology. The computer science department provided funding for red curry and pad thai. Niu was shocked when the food ran out, as more than 100 students showed up for the event. “Everybody had the same experience: ‘I’m a computer science student. I’m doing this because I want to create an impact. I feel like I’m alone.’”

From this meeting sprang the student organization CS + Social Good. It aimed to expose students to professional opportunities that existed outside the tech giants and the hyperaggressive startups that aspired to their stature. In its first year, the group developed new courses about social-impact work, brought in speakers to discuss positive uses of technology, and offered summer fellowships to get students interning at nonprofits instead of Apple or Google. Hundreds of students and faculty engaged with the organization’s programming.

In Niu’s mind, “social good” referred mainly to the positive applications of technology. But stopping bad uses of tech is just as important as promoting good ones. That’s a lesson the entire Valley has been forced to reckon with as its benevolent reputation has unraveled. “Most of our programming had been, ‘Look at these great ways you can use technology to help kids learn math,’” Niu says. “There was this real need to not only talk about that, but to also be like, ‘It’s not just that technology is neutral. It can actually be really harmful.’”

Many students find it difficult to pinpoint a specific transgression that flipped their perception of Silicon Valley, simply because there have been so many.

The torrid pace of bad news has been jarring for students who entered school with optimistic views of tech. Nichelle Hall, a senior majoring in computer science, viewed Google as the ideal landing spot for an aspiring software engineer when she started college. “I associated it so much with success,” she says. “It’s the first thing I thought about when I thought about technology.” But when she was offered an on-site interview for a potential job at the search giant in the fall, she declined. Project Dragonfly, Google’s (reportedly abandoned) effort to bring a censored search engine to China, gave her pause. It wasn’t just that she objected to the idea on principle. She sensed that working for such a large corporation would likely put her personal morals and corporate directives in conflict. “They say don’t do evil and then they do things like that,” she says. “I wasn’t really into the big-company idea for that reason. … You don’t necessarily know what the intentions of your executives are.”

..Google has hardly been the most damaged brand during the techlash. (The company says it has not seen a year-over-year decline in Stanford recruits to this point.) Students repeatedly bring up Facebook as a company that’s fallen out of favor. Uber, with its cascade of controversies, now has to “fight to try and get people in,” according to junior Jose Giron. And Palantir, the secretive data-mining company started by Stanford alum Peter Thiel, has also lost traction due to Thiel’s ties to Trump and worries that the company could help the president develop tech to advance his draconian immigration policies. “There’s a growing concern over your personal decision where to work after graduation,” Sun says.

“There’s a lot of personal guilt around pursuing CS. If you do that, people call you a sellout or you might view yourself as a sellout. If you take a high-paying job, people might say, ‘Oh, you’re just going to work for a big tech company. All you care about is yourself.’”

Landing a job at a major tech firm is often as much about prestige as passion, which is one reason the CS major has expanded so dramatically. But a company’s tarnished reputation can transfer to its employees. Students debate whether fewer of their peers are actually taking gigs at Facebook, or whether they’re just less vocal in bragging about it. At lunch at a Burmese restaurant on campus, Hall and Sun summed up the transition succinctly. “No one’s like, ‘I got an internship at Uber!’” Sun says. Hall follows up: “They’re like, ‘I got an internship … at Uber …’”

The concerns are bigger than which companies rise or fall in the estimation of up-and-coming engineers. Stanford and computer science programs across the country may not be adequately equipped to wade through the ethical minefield that is expanding along with tech’s influence. Sahami acknowledges that many computer science classes are designed to teach students how to solve technical problems rather than to think about the real-world issues that a solution might create. Part of the challenge comes from computer science being a young discipline compared to other engineering fields, meaning that practical examples of malpractice are emerging in real time from today’s headlines.

Vik Pattabi, a senior majoring in computer science, originally studied mechanical engineering. In those classes, students are constantly reminded of the 1940 collapse of the Tacoma Narrows Bridge: A modern marvel was destroyed because its highly educated engineers did not foresee all the possible threats to their creation (in that case, the wind). Pattabi’s CS coursework hasn’t yet included a comparable example. “A lot of the second- and third-order effects that we see [in] Silicon Valley have happened in the last two or three years,” Pattabi says. “The department is trying to react as fast as it can, but they don’t have 30 years of case studies to work with.”

Another issue is the longstanding divide on campus between the engineering types—known as “techies”—and the humanities or social sciences majors, known as “fuzzies.” Though the school has focused more on interdisciplinary studies in recent years, there remains a gap in understanding that’s often filled in by stereotype. This sort of divide is a common aspect of college life, but the stakes feel higher when some of the students will one day be programming the algorithms that govern the digital world. “There’s things [said] like, ‘You can’t spell fascist without CS.’ People will tell you things like that,” Hall says. “I think people may feel antagonized.”

The school’s deep ties to the Bay Area’s corporate giants, long a much-touted recruitment tool, suddenly look different in light of the problems that the industry has created. At the January career fair, members of Students for the Liberation of All Peoples (SLAP), an activist group on campus that aims to disrupt Stanford’s “culture of apathy,” handed out flyers that urged students not to work at Amazon and Salesforce because of their commercial ties to ICE and the United States Border Patrol. (Employees at the companies have raised similar concerns.) “REFUSE to be part of the Stanford → racist tech pipeline,” the flyer reads, in part.

Two students in the group said they were asked to leave the career fair by Computer Forum officials. When the students refused to comply, they say they were escorted out by campus police under threat of arrest for disrupting a private event. A Stanford spokesperson confirmed the incident. “The protesting students were disruptive and asked by police to leave,” the spokesperson said in an emailed statement. “The students were given the option to protest outside the event or in White Plaza. They chose to leave.”

For members of SLAP, the exchange reinforced the ways in which Stanford institutionally and culturally cuts itself off from the issues occurring in the real world. “You might hear this idea of the ‘Stanford bubble,’ where Stanford students kind of just stay on campus and they just do what they need to do for their classes and their jobs,” says Kimiko Hirota, a SLAP member and junior majoring in sociology and comparative studies in race and ethnicity, who participated in the career fair protest. She said many of the students she talked to had no idea about the tech firms’ government contracts. “To me the amount of students on campus that are politically engaged and are actively using their Stanford privilege for a greater good is extremely small.”

The computer science major includes a “technology in society” course requirement that can be fulfilled by a number of ethics classes, and teaching students about their ethical responsibilities is a component of the department’s accreditation process. CS + Social Good has expanded its footprint on campus, teaching more classes and organizing more events like the Theranos talk starring the whistleblowers. Yet the flexibility of the CS major cuts both ways. It means that students who care to take a holistic approach to the discipline can combine rigorous training in code with an education in ethics; it also means that it remains all too easy for some students to avoid engaging with the practical ramifications of their work. “You can very much come to Stanford feeling very apathetic about the impact of the technology and leave just that way without any effort,” Hall says. “I don’t feel as though we are forced to encounter the impact.”

On a Wednesday afternoon, students spill into a lobby in front of a standing-room-only auditorium in the School of Education, where Jeremy Weinstein is talking about the promise and perils of using algorithms in criminal justice. Next year, Californians will vote on a bill that would replace cash bail with a computerized risk-assessment system that calculates an arrested person’s likelihood of returning for a court appearance. The idea is to give people who can’t afford to make bail another way to get out of jail through a fairer policy. But such algorithms have been found to reinforce racial biases in the criminal justice system, according to a ProPublica investigation. Instead of being a solution to an unfair process, poorly implemented software could create an entirely new form of systemic discrimination. Students were asked to vote on whether they supported cash bail or the algorithm. The class was evenly split. Unlike in most CS classes, Weinstein could not offer students the comfort of a “correct” answer. “We need to deconstruct these algorithms in order to help people see that technology is not just something to be trusted,” he says. “It’s not just something that’s objective and fair because it’s numerical, but it actually reflects a set of choices that people make.”

Though Weinstein is a political science professor, he’s one of three educators leading the new version of the CS department’s flagship ethics course, CS181. Teaching with him are Sahami, the computer science professor, and Rob Reich, a political science professor and philosopher. The trio devised the course structure over a series of coffee-fueled meetings as the tech backlash unfolded during the past year and a half. After discussing algorithmic bias, the class will explore privacy in the age of facial recognition, the social impacts of autonomous technology, and the responsibilities of private platforms in regard to free speech. The coursework is meant to be hands-on. During the current unit, students must build their own risk-assessment algorithm using an actual criminal history data set, then assess it for fairness. “We run it like a talk show,” Weinstein says. “There’s a lot of call-and-response, asking questions, getting people to talk in small groups.”
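The assignment itself isn’t reproduced here, but as a rough sketch of what “build a risk-assessment algorithm, then assess it for fairness” can look like in practice, the toy example below fits a simple model to invented data and compares false-positive rates across two groups; every number, feature, and threshold is made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented stand-in for a criminal-history data set: each row is
# (prior_arrests, age, failed_to_appear_before); y marks whether the person
# later failed to appear in court. The "group" column is a protected attribute.
X = np.array([[0, 34, 0], [3, 22, 1], [1, 45, 0], [5, 19, 1],
              [0, 51, 0], [2, 27, 1], [4, 31, 0], [1, 38, 0]])
y = np.array([0, 1, 0, 1, 0, 1, 1, 0])
group = np.array(["A", "B", "A", "B", "A", "B", "A", "B"])

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]   # predicted risk of non-appearance
flagged = risk >= 0.5                 # the algorithm's "detain" recommendation

# One common fairness check: compare false-positive rates across groups,
# i.e. how often people who would have appeared are nonetheless flagged.
for g in ("A", "B"):
    mask = (group == g) & (y == 0)
    fpr = flagged[mask].mean() if mask.any() else float("nan")
    print(f"group {g}: false-positive rate = {fpr:.2f}")
```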

While Stanford’s computer science program has had an ethics component for decades, this course marks the first time that experts in other fields are so directly involved in the curriculum. About 300 students have enrolled in the course, including majors in history, philosophy, and biology. It provides an opportunity for the techies and fuzzies to learn from one another, and for professors removed from the Valley’s tech culture to contextualize the industry’s societal impacts. In the course overview materials, the moral reckoning occurring in the tech sector today is compared to the advent of the nuclear bomb.

The course’s popularity is a sign that the gravity of the moment is weighing on many Stanford minds. Antigone Xenopoulos, a junior majoring in symbolic systems (a techie-fuzzie hybrid major that incorporates computer science, linguistics, and philosophy), is a research assistant for CS181. She wasn’t the only student who quoted a line from Spider-Man to me, “with great power comes great responsibility,” when referencing the current landscape. “If they’re going to give students the tools to have such immense influence and capabilities, [Stanford] should also guide those students in developing ethical compasses,” she says.

..While the early years of the decade saw prominent tech executives like Holmes and Zuckerberg teaching students how to lifehack their way to success, the new ethics course will bring in guest speakers from WhatsApp, Facebook, and the NSA to answer “hard questions,” Sahami says. “I wouldn’t say industry is influencing Stanford,” he says. “I would say the relationship with industry allows us to have more authentic conversations where we’re really bringing in people who are decision-makers in these areas.”
.. Some of the more critical voices from within the industry are also taking on more permanent roles at Stanford. Alex Stamos, the former chief security officer at Facebook, taught a “hack lab” for non-CS majors last fall, helping them understand cybersecurity threats. He’s now developing a more advanced computer science course, to be piloted later this year, that explores trust and safety issues in the era of misinformation and widespread online harassment. Stamos led Facebook’s internal investigation into Russian political interference on the platform and clashed with top executives over how much of that information should be made public. He left the company in August to join Stanford, where he hopes to impart lessons from his time battling a digital attack that was waged not through hacking, but through ad purchases, incendiary memes, and politically charged Facebook events. “One of the things we don’t teach computer science students is all of the non-technically advanced abuses of technology that cause real harm,” Stamos says. “I want to expose students to [things like], ‘These are the mistakes that were made before, these are the kinds of problems that existed, and these are the company’s reactions to those mistakes.’”

Stamos rejects the idea that ethics is the correct framework to think about addressing tech’s most pressing issues. “The problem here is not that people are making decisions that are straight-up evil,” he says. “The problem is that people are not foreseeing the outcomes of their actions. Part of that is a lack of paranoia. One of our problems in Silicon Valley is we build products to be used the right way. … It’s hard to envision all the misuses unless you understand all the things that have come before.”

While he says that Stanford bears some responsibility for the Valley’s tunnel vision, he praises the school for welcoming tech leaders with recent, relevant experiences to help students prepare for emerging threats. “When I was going to school, computers were important, but we weren’t talking about building companies that might change history,” Stamos says. “The students who come to me are really interested in the impact of what they do on society.”

Stamos regularly fields questions from students about whether to work at Facebook or Google. He tells them that they should, not in spite of the companies’ mounting issues, but because of them. “If you actually care about making communication technologies compatible with democracy, then the place to be is at one of the companies that actually has the problems,” he says. “Not working at the big places that could actually solve it does not make things better.”

The tech giants continue to consolidate power even as they face withering criticism. Facebook’s user base growth accelerated last quarter despite its scandals. Uber will go public this year at a valuation as high as $120 billion. Apple, Amazon, and Google are all planning to open large new offices around the country in the near future. And for all the optimistic talk of working at ethically minded startups among students, creation of nascent businesses is at roughly a 40-year low in the United States. Small firms that enter the terrain of the Frightful Five are typically acquired or destroyed.

It is hard to find a Stanford computer science student, even among the ethically minded set CS + Social Good has helped cultivate, who will publicly proclaim that they’ll never work for one of these dominant companies, as all of them offer opportunities for high pay, engaging individual work, and comforting job security. International students have to worry about securing work visas however they can; students on financial aid may need to make enough money to support other family members. And for many others, it’s not clear that anything that’s happened in the Valley is truly beyond the pale. In that sense, the engineers are just like us, aghast at the headlines but still clicking away inside a system that’s come to feel inescapable. “These events feel too big for most students to take into account,” says Jason He, a master’s student in electrical engineering. “At the end of the day, I think for a lot of students who have been paying a lot of money for their education, if the six-figure salary is offered, it’s pretty hard for students to turn down.”

There is still an opportunity, the thinking at Stanford goes, for every company to do good. Nichelle Hall, the senior who declined the Google interview, landed a job working on Medium’s trust and safety team. But she recognizes that she may have set her qualms aside if Google had been her only employment option. “Some of the feedback that CS + Social Good gets is, ‘Oh, the members end up working for Facebook, they end up working for Google,’” says Hall, who’s been involved with the organization since 2017. “People who care about this intersection of social impact and computer science will go to these companies and do a better job than if they weren’t interested in this stuff.”

Impact is the word that I heard more than any other while on campus. It’s how students framed their decision to go to Stanford, to pursue a career in computer science, to do good in the world after graduating. It’s a word that Hoffman used to describe his Blitzscaling class, and one Holmes used to explain to students why she dropped out of school. “I had the tools that I needed to be able to go out and begin making this impact,” she said. It’s the currency of Silicon Valley, one that people spend for good and for ill.

The ability to create impact with a few lines of code has long been what separated software engineers from the rest of us, and turned the Valley into a self-proclaimed utopia of young rebels using technology to save the world from its older, antiquated self. But that’s not the image anymore. Now aspiring engineers draw a comparison between their chosen profession and investment banking. The finance industry wrecked the world a decade ago because of its misunderstanding of complex, automated systems that spun out of control—and its confidence that someone else would ultimately pay the price if things went wrong. You see this confidence in Zuckerberg’s incredulous response when anyone suggests that he resign, and in Google CEO Sundar Pichai’s initial refusal to testify before Congress. And you can see it at Stanford, where the endowment has never been higher, the fundraising has never been easier, and the career fair is still filled with slogans vowing to make the world a better place.

Perhaps this entire strip of land known as the Valley will fully calcify into a West Coast Wall Street, where the people with all the insider knowledge profit off the muppets who can’t stop using their products. If today’s young tech skeptics turn into cynics when they enter the working world, such a future is easy to imagine. But—and this is the hopeful, intoxicating, dangerous thing about technology—there are always bright minds out there who think they can build a solution that just might fix this mess we’ve made. And people, especially young people, will always be enthralled by the romance of a new idea. “We’re creating things that haven’t necessarily existed before, and so we won’t be able to anticipate all the challenges that we have,” says Hall, who graduates in just four months. “But once we do, it’s important that we can reconcile them with grace and humility. I’m sure it will be a hard job, but it’s important that it’s hard. I’m up for the challenge.”