No More Apologies: Inside Facebook’s Push to Defend Its Image

Mark Zuckerberg, the chief executive, has signed off on an effort to show users pro-Facebook stories and to distance himself from scandals.

Mark Zuckerberg, Facebook’s chief executive, signed off last month on a new initiative code-named Project Amplify.

The effort, which was hatched at an internal meeting in January, had a specific purpose: to use Facebook’s News Feed, the site’s most important digital real estate, to show people positive stories about the social network.

The idea was that pushing pro-Facebook news items — some of them written by the company — would improve its image in the eyes of its users, three people with knowledge of the effort said. But the move was sensitive because Facebook had not previously positioned the News Feed as a place where it burnished its own reputation. Several executives at the meeting were shocked by the proposal, one attendee said.

Project Amplify punctuated a series of decisions that Facebook has made this year to aggressively reshape its image. Since that January meeting, the company has begun a multipronged effort to change its narrative by distancing Mr. Zuckerberg from scandals, reducing outsiders’ access to internal data, burying a potentially negative report about its content and increasing its own advertising to showcase its brand.

The moves amount to a broad shift in strategy. For years, Facebook confronted crisis after crisis over privacy, misinformation and hate speech on its platform by publicly apologizing. Mr. Zuckerberg personally took responsibility for Russian interference on the site during the 2016 presidential election and has loudly stood up for free speech online. Facebook also promised transparency into the way that it operated.

But the drumbeat of criticism on issues as varied as racist speech and vaccine misinformation has not relented. Disgruntled Facebook employees have added to the furor by speaking out against their employer and leaking internal documents. Last week, The Wall Street Journal published articles based on such documents that showed Facebook knew about many of the harms it was causing.

So Facebook executives, concluding that their methods had done little to quell criticism or win supporters, decided early this year to go on the offensive, said six current and former employees, who declined to be identified for fear of reprisal.

“They’re realizing that no one else is going to come to their defense, so they need to do it and say it themselves,” said Katie Harbath, a former Facebook public policy director.

The changes have involved Facebook executives from its marketing, communications, policy and integrity teams. Alex Schultz, a 14-year company veteran who was named chief marketing officer last year, has also been influential in the image reshaping effort, said five people who worked with him. But at least one of the decisions was driven by Mr. Zuckerberg, and all were approved by him, three of the people said.

Alex Schultz, Facebook’s chief marketing officer, has been influential in reshaping the company’s image.
Credit…Tommaso Boddi/Getty Images

Joe Osborne, a Facebook spokesman, denied that the company had changed its approach.

“People deserve to know the steps we’re taking to address the different issues facing our company — and we’re going to share those steps widely,” he said in a statement.

For years, Facebook executives have chafed at how their company appeared to receive more scrutiny than Google and Twitter, said current and former employees. They attributed that attention to Facebook’s leaving itself more exposed with its apologies and providing access to internal data, the people said.

So in January, executives held a virtual meeting and broached the idea of a more aggressive defense, one attendee said. The group discussed using the News Feed to promote positive news about the company, as well as running ads that linked to favorable articles about Facebook. They also debated how to define a pro-Facebook story, two participants said.

That same month, the communications team discussed ways for executives to be less conciliatory when responding to crises and decided there would be less apologizing, said two people with knowledge of the plan.

Mr. Zuckerberg, who had become intertwined with policy issues including the 2020 election, also wanted to recast himself as an innovator, the people said. In January, the communications team circulated a document with a strategy for distancing Mr. Zuckerberg from scandals, partly by focusing his Facebook posts and media appearances on new products, they said.

The Information, a tech news site, previously reported on the document.

The impact was immediate. On Jan. 11, Sheryl Sandberg, Facebook’s chief operating officer — and not Mr. Zuckerberg — told Reuters that the storming of the U.S. Capitol a week earlier had little to do with Facebook. In July, when President Biden said the social network was “killing people” by spreading Covid-19 misinformation, Guy Rosen, Facebook’s vice president for integrity, disputed the characterization in a blog post and pointed out that the White House had missed its coronavirus vaccination goals.

“Facebook is not the reason this goal was missed,” Mr. Rosen wrote.

A mob climbed the walls of the U.S. Capitol on Jan. 6. Sheryl Sandberg, Facebook’s chief operating officer, later said the insurrection was not organized on the social network.

Credit…Jason Andrew for The New York Times

Mr. Zuckerberg’s personal Facebook and Instagram accounts soon changed. Rather than addressing corporate controversies, Mr. Zuckerberg’s posts have recently featured a video of himself riding an electric surfboard with an American flag and messages about new virtual reality and hardware devices.

Facebook also started cutting back the availability of data that allowed academics and journalists to study how the platform worked. In April, the company told its team behind CrowdTangle, a tool that provides data on the engagement and popularity of Facebook posts, that it was being broken up. While the tool still exists, the people who worked on it were moved to other teams.

Part of the impetus came from Mr. Schultz, who had grown frustrated with news coverage that used CrowdTangle data to show that Facebook was spreading misinformation, said two people involved in the discussions.

For academics who relied on CrowdTangle, it was a blow. Cameron Hickey, a misinformation researcher at the National Conference on Citizenship, a nonprofit focused on civic engagement, said he was “particularly angry” because he felt the CrowdTangle team was being punished for giving an unfiltered view of engagement on Facebook.

Mr. Schultz argued that Facebook should publish its own information about the site’s most popular content rather than supply access to tools like CrowdTangle, two people said. So in June, the company compiled a report on Facebook’s most-viewed posts for the first three months of 2021.

But Facebook did not release the report. After the policy communications team discovered that the top-viewed link for the period was a news story with a headline that suggested a doctor had died after receiving the Covid-19 vaccine, they feared the company would be chastised for contributing to vaccine hesitancy, according to internal emails reviewed by The New York Times.

A day before the report was supposed to be published, Mr. Schultz was part of a group that voted to shelve the document, according to the emails. He later posted an internal message about his role at Facebook, which was reviewed by The Times, saying, “I do care about protecting the company’s reputation, but I also care deeply about rigor and transparency.”

Facebook also worked to stamp out employee leaks. In July, the communications team shuttered comments on an internal forum that was used for companywide announcements. “OUR ONE REQUEST: PLEASE DON’T LEAK,” read a post about the change.

At the same time, Facebook ramped up its marketing. During the Olympics this summer, the company paid for television spots with the tagline “We change the game when we find each other,” to promote how it fostered communities. In the first half of this year, Facebook spent a record $6.1 billion on marketing and sales, up more than 8 percent from a year earlier, according to a recent earnings report.

Weeks later, the company further reduced the ability of academics to conduct research on it when it disabled the Facebook accounts and pages of a group of New York University researchers. The researchers had created a feature for web browsers that allowed them to see users’ Facebook activity, which 16,000 people had consented to use. The resulting data had led to studies showing that misleading political ads had thrived on Facebook during the 2020 election and that users engaged more with right-wing misinformation than many other types of content.

In a blog post, Facebook said the N.Y.U. researchers had violated rules around collecting user data, citing a privacy agreement it had originally struck with the Federal Trade Commission in 2012. The F.T.C. later admonished Facebook for invoking its agreement, saying it allowed for good-faith research in the public interest.

Laura Edelson, the lead N.Y.U. researcher, said Facebook cut her off because of the negative attention her work brought. “Some people at Facebook look at the effect of these transparency efforts and all they see is bad P.R.,” she said.

The episode was compounded this month when Facebook told misinformation researchers that it had mistakenly provided incomplete data on user interactions and engagement for two years for their work.

“It is inconceivable that most of modern life, as it exists on Facebook, isn’t analyzable by researchers,” said Nathaniel Persily, a Stanford University law professor, who is working on federal legislation to force the company to share data with academics.

In August, after Mr. Zuckerberg approved Project Amplify, the company tested the change in three U.S. cities, two people with knowledge of the effort said. While the company had previously used the News Feed to promote its own products and social causes, it had not turned to it to openly push positive press about itself, they said.

Once the tests began, Facebook used a system known as Quick Promotes to place stories about people and organizations that used the social network into users’ News Feeds, they said. People essentially see posts with a Facebook logo that link to stories and websites published by the company and from third-party local news sites. One story pushed “Facebook’s Latest Innovations for 2021” and discussed how it was achieving “100 percent renewable energy for our global operations.”

“This is a test for an informational unit clearly marked as coming from Facebook,” Mr. Osborne said, adding that Project Amplify was “similar to corporate responsibility initiatives people see in other technology and consumer products.”

Nick Clegg, Facebook’s vice president for global affairs and communications, pushed back over the weekend against Wall Street Journal articles about the company.
Credit…Marlene Awaad/Bloomberg

Facebook’s defiance against unflattering revelations has also not let up, even without Mr. Zuckerberg. On Saturday, Nick Clegg, the company’s vice president for global affairs, wrote a blog post denouncing the premise of The Journal’s investigation. He said the idea that Facebook executives had repeatedly ignored warnings about problems was “just plain false.”

“These stories have contained deliberate mischaracterizations of what we are trying to do,” Mr. Clegg said. He did not detail what the mischaracterizations were.

If Private Platforms Use Government Guidelines to Police Content, Is That State Censorship?

YouTube’s decision to demonetize podcaster Bret Weinstein raises serious questions, both about the First Amendment and regulatory capture

 

Just under three years ago, Infowars anchor Alex Jones was tossed off Facebook, Apple, YouTube, and Spotify, marking the unofficial launch of the “content moderation” era. The censorship envelope has since widened dramatically via a series of high-profile incidents involving Facebook and Twitter.

This week’s decision by YouTube to demonetize podcaster Bret Weinstein belongs on that list, and arguably at or near the top, representing a different and perhaps more unnerving speech conundrum than those other episodes.

Profiled in this space two weeks ago, Weinstein and his wife Heather Heying — both biologists — host the podcast DarkHorse, which by any measure is among the more successful independent media operations in the country. They have two YouTube channels, a main channel featuring whole episodes and livestreams, and a “clips” channel featuring excerpts from those shows.

Between the two channels, they’ve been flagged 11 times in the last month or so. Specifically, YouTube has homed in on two areas of discussion it believes promote “medical misinformation.” The first is the potential efficacy of the repurposed drug ivermectin as a Covid-19 treatment. The second is the third rail of third rails, i.e. the possible shortcomings of the mRNA vaccines produced by companies like Moderna and Pfizer.

Weinstein, who was also criticized for arguing the lab-leak theory before conventional wisdom shifted on that topic, says YouTube’s decision will result in the loss of “half” of his and Heying’s income. However, he says, YouTube told him he can reapply after a month.

YouTube’s notice put it as follows: “Edit your channel and reapply for monetization… Make changes to your channel based on our feedback. Changes can include editing or deleting videos and updating video details.”

“They want me to self-censor,” he says. “Unless I stop broadcasting information that runs afoul of their CDC-approved talking points, I’ll remain demonetized.”

Weinstein’s travails with YouTube sound like something out of a Star Trek episode, in which the Enterprise crew tries and fails to communicate with a malevolent AI attacking the ship. In the last two weeks, he emailed back and forth with the firm, at one point receiving an email from someone who identified himself only as “Christopher,” indicating a desire to set up a discussion between Weinstein and various parties at YouTube.

Over the course of these communications, Weinstein asked if he could nail down the name and contact number of the person with whom he was interacting. “I said, ‘Look, I need to know who you are first, whether you’re real, what your real first and last names are, what your phone number is, and so on,’” Weinstein recounts. “But on asking what ‘Christopher’s’ real name and email was, they wouldn’t even go that far.” After this demand, instead of giving him an actual contact, YouTube sent him a pair of less personalized demonetization notices.

As has been noted in this space multiple times, this is a common theme in nearly all of these stories, but Weinstein’s tale is at once weirder and more involved, as most people in these dilemmas never get past the form-letter response stage. YouTube has responded throughout to media queries about Weinstein’s case, suggesting they take it seriously.

YouTube’s decision with regard to Weinstein and Heying seems part of an overall butterfly effect, as numerous other figures either connected to the topic or to DarkHorse have been censured by various platforms. Weinstein guest Dr. Robert Malone, a former Salk Institute researcher often credited with helping develop mRNA vaccine technology, has been suspended from LinkedIn, and Weinstein guest Dr. Pierre Kory of the Front Line COVID-19 Critical Care Alliance (FLCCC) has had his appearances removed by YouTube. Even Satoshi Ōmura, who won the Nobel Prize in 2015 for his work on ivermectin, reportedly had a video removed by YouTube this week.

There are several factors that make the DarkHorse incident different from other major Silicon Valley moderation decisions, including the fact that the content in question doesn’t involve electoral politics, foreign intervention, or incitement. The main issue is the possible blurring of lines between public and private censorship.

When I contacted YouTube about Weinstein two weeks ago, I was told, “In general, we rely on guidance from local and global health authorities (FDA, CDC, WHO, NHS, etc) in developing our COVID-19 misinformation policies.”

The question is, how active is that “guidance”? Is YouTube acting in consultation with those bodies in developing those moderation policies? As Weinstein notes, an answer in the affirmative would likely make theirs a true First Amendment problem, with an agency like the CDC not only setting public health policy but also effectively setting guidelines for private discussion about those policies. “If it is in consultation with the government,” he says, “it’s an entirely different issue.”

Asked specifically after Weinstein’s demonetization if the “guidance” included consultation with authorities, YouTube essentially said yes, pointing to previous announcements that they consult other authorities, and adding, “When we develop our policies we consult outside experts and YouTube creators. In the case of our COVID-19 misinformation policies, it would be guidance from local and global health authorities.”

Weinstein and Heying might be the most prominent non-conservative media operation to fall this far afoul of a platform like YouTube. Unlike the case of, say, Alex Jones, the moves against the show’s content have not been roundly cheered. In fact, they’ve inspired blowback from across the media spectrum, with everyone from Bill Maher to Joe Rogan to Tucker Carlson taking notice.

“They threw Bret Weinstein off YouTube, or almost,” Maher said on Real Time last week. “YouTube should not be telling me what I can see about ivermectin. Ivermectin isn’t a registered Republican. It’s a drug!”

From YouTube’s perspective, the argument for “medical misinformation” in the DarkHorse videos probably comes down to a few themes in Weinstein’s shows. Take, for example, an exchange between Weinstein and Malone in a video about the mRNA vaccines produced by companies like Moderna and Pfizer:

Weinstein: The other problem is that what these vaccines do is they encode spike protein… but the spike protein itself we now know is very dangerous, it’s cytotoxic, is that a fair description?

Malone: More than fair, and I alerted the FDA about this risk months and months and months ago.

In another moment, entrepreneur and funder of fluvoxamine studies Steve Kirsch mentioned that his carpet cleaner had a heart attack minutes after taking the Pfizer vaccine, and cited Canadian viral immunologist Byram Bridle in saying that the COVID-19 vaccine doesn’t stay localized at the point of injection, but “goes throughout your entire body, it goes to your brain to your heart.”

Politifact rated the claim that spike protein is cytotoxic “false,” citing the CDC to describe the spike protein as “harmless.” As to the idea that the protein does damage to other parts of the body, including the heart, they quoted an FDA spokesperson who said there’s no evidence the spike protein “lingers at any toxic level in the body.”

Would many doctors argue that the 226 identified cases of myocarditis so far are a tiny number in the context of 130 million vaccine doses administered, and that overall the danger of myocarditis associated with the vaccine is far lower than the danger of myocarditis in Covid-19 patients?

Absolutely. It’s also true that the CDC itself had a meeting on June 18th to discuss cases of heart inflammation reported among people who’d received the vaccine. The CDC, in other words, is simultaneously telling news outlets like Politifact that spike protein is “harmless,” and also having ad-hoc meetings to discuss the possibility, however remote from their point of view, that it is not harmless. Are only CDC officials allowed to discuss these matters?

The larger problem with YouTube’s action is that it relies upon those government guidelines, which in turn are significantly dependent upon information provided to them by pharmaceutical companies, which have long track records of being less than forthright with the public.

In the last decade, for instance, the U.S. government spent over $1.5 billion to stockpile Tamiflu, a drug produced by the Swiss pharma firm Roche. It later came out — thanks to the efforts of a Japanese pediatrician who left a comment on an online forum — that Roche had withheld crucial testing information from British and American buyers, leading to a massive fraud suit. Similar controversies involving the arthritis drug Vioxx and the diabetes drug Avandia were prompted by investigations by independent doctors and academics.

As with financial services, military contracting, environmental protection, and other fields, the phenomenon of regulatory capture is demonstrably real in the pharmaceutical world. This makes basing any moderation policy on official guidelines problematic. If the proper vaccine policy is X, but the actual policy ends up being X plus unknown commercial consideration Y, a policy like YouTube’s more or less automatically preempts discussion of Y.

Some of Weinstein’s broadcasts involve exactly such questions about whether or not it’s necessary to give Covid-19 vaccines to children, to pregnant women, and to people who’ve already had Covid-19, and whether or not the official stance on those matters is colored by profit considerations. Other issues, like whether or not boosters are going to be necessary, need a hard look in light of the commercial incentives.

These are legitimate discussions, as the WHO’s own behavior shows. On April 8th, the WHO website said flatly: “Children should not be vaccinated for the moment.” A month and a half later, the WHO issued new guidance, saying the Pfizer vaccine was “suitable for use by people aged 12 years and above.”

The WHO was clear that its early recommendation was based on a lack of data, and on uncertainty about whether or not children with a low likelihood of infection should be a “priority,” and not on any definite conviction that the vaccine was unsafe. And, again, a Politifact check on the notion that the WHO “reversed its stance” on children rated the claim false, saying that the WHO merely “updated” its guidance on children. Still, the whole drama over the WHO recommendation suggested it should at least be an allowable topic of discussion.

Certainly there are critics of Weinstein’s who blanch at his use of sci-fi terms like “red pill” (derived from the worldview-altering truth pill in The Matrix), at language like “very dangerous” to describe the mRNA vaccines, and at descriptions of ivermectin as a drug that would “almost certainly make you better.”

Even to those critics, however, the larger issue Weinstein’s case highlights should be clear. If platforms like YouTube are basing speech regulation policies on government guidelines, and government agencies demonstrably can be captured by industry, the potential exists for a new brand of capture — intellectual capture, where corporate money can theoretically buy not just regulatory relief but the broader preemption of public criticism. It’s vaccines today, and that issue is important enough, but what if in the future the questions involve the performance of an expensive weapons program, or a finance company contracted to administer bailout funds, or health risks posed by a private polluter?

Weinstein believes capture plays a role in his case at some level. “It’s the only thing that makes sense,” he says. He hopes pressure from the public and the media will push platforms like YouTube to reveal exactly how, and with whom, they settle upon their speech guidelines. “There’s something industrial strength about the censorship,” he says, adding, “There needs to be a public campaign to reject it.”

I watched Weinstein’s YouTube discussion of the mRNA vaccine with Robert Malone. As a physician, I didn’t find his discussion particularly convincing, nor that of Dr. Malone. Three or four hundred million people have now been vaccinated and we are not seeing a lot of serious side effects, which we would almost certainly have seen by now if there really was a problem. The issue, as I see it, is that Weinstein is making a living with his YouTube channel and obviously, he is motivated to increase his income by generating controversy. There’s a heck of a lot of content on YouTube and careful, well-reasoned discussion probably would generate less income than outlandish claims. As a physician, I’m used to reading medical journals and I have enough statistical training to evaluate the evidence. That’s not true for the majority of people exposed to this kind of programming. I’d have found Weinstein’s program a lot more interesting if he had brought on an active mRNA researcher to debate Dr. Malone. (I don’t think Dr. Malone is “in his dotage” at age 60, but he’s clearly not involved with this kind of work anymore.) Weinstein is a smart guy, but he’s not a physician, and not a virologist. His show needs to be a little more balanced if he wants to be taken seriously.

Check out Dr. John Campbell, https://youtube.com/c/Campbellteaching. He has over 1 million subs, talks about ivermectin all the time and is not demonetized. Why? Because of how he frames it; he is also a believer in vaccines.

Bret on the other hand has gone full Alex Jones with a messiah complex to boot! He has lost the fucking plot completely. Nothing he says makes sense anymore, it’s full-on global conspiracy shit. He takes ivermectin live on air… says he is not getting vaccinated but using ivermectin prophylactically?… It’s just totally over the top for a public channel and asking to be demonetized.

I think the reason there’s very little effort going into figuring out if ivermectin works is because we have vaccines that work so well, and the fact that so many people got burned promoting early alternative treatments that turned out to be bullshit, like hydroxychloroquine… But I’m sure I’m wrong and Bret is the savior of humanity battling against big tech and the globalists behind the great reset! Maybe he should try and build back betterer his channel.
