THANKS TO GLOBE-SPANNING SOCIAL PLATFORMS like Facebook, YouTube, and Twitter, misinformation (any wrong information) and disinformation (intentional misinformation, like propaganda) have never been able to spread so rapidly or so far, powered by algorithms and automated filters. But misinformation expert Joan Donovan, who runs the Technology and Social Change Research Project at Harvard’s Shorenstein Center, says social media platforms are not the only actors perpetuating the misinformation problem. Journalists and media companies play a critical role too, Donovan says, because in covering misinformation and the bad actors who create it, they often amplify it without considering the impact of their coverage.
There is clearly more misinformation around than in previous eras, Donovan tells CJR in a recent interview on our Galley discussion platform, because there’s just a lot more media, and therefore a lot more opportunity to distribute it. “But quantity never really matters unless there is significant attention to the issue being manipulated,” she says. “So this is where my research is fundamentally about journalism and not about audiences. Trusted information brokers, like journalists and news organizations, are important targets for piggybacking misinformation campaigns into the public sphere.”
Donovan’s research looks at how trolls and others—whether they are government-backed or freelance—can use techniques including “social engineering” (lying to or manipulating someone to achieve a specific outcome) and low-level hacking to persuade journalists and news outlets of the newsworthiness of a specific campaign. “Once that story gets picked up by a reputable outlet, it’s game time,” she says. Donovan and other misinformation experts warned that the lengthy essay the Christchurch shooter posted to justify his March attack was clearly designed to get as much media attention as possible, by playing on certain themes and popular topics, and they advised media outlets not to play into this strategy by quoting from it.
Before she joined the Shorenstein Center at Harvard last year, Donovan was a member of the research group Data & Society, where she led the Media Manipulation Initiative, mapping how interest groups, governments, and political operatives use the internet and the media to intentionally manipulate messages. Data & Society published an extensive report on the problem last year, written by Syracuse University media studies professor Whitney Phillips, entitled “The Oxygen of Amplification,” with advice on how to cover topics like white supremacy and the alt-right without giving them more credibility in the process.
“Sometimes, I want to throw my hands in the air and grumble, ‘We know what we know from history!’ Journalists are not outside of society. In fact, they are the most crucial way the public makes sense of the world,” Donovan writes in her Galley interview. “When journalists pay attention to a particular person or issue, we all do… and that has reverberating effects.” As part of her postdoctoral research, Donovan looked at racial violence and media coverage in the 1960s and 1970s, when the Ku Klux Klan was active. “The Klan had a specific media strategy to cultivate journalists for positive coverage of their events,” Donovan says. “As journalists pivoted slowly to covering the civil rights movement with a sympathetic tone, Klan violence rose—but so did public spectacles, torch marches, and cross burnings. These acts are often done with the potential for media coverage in mind.”
While mass shootings are clearly newsworthy, Donovan says, the internet introduces a new dynamic where all stories on a topic are instantly available to virtually anyone anywhere around the globe. And the fact that they are shared and re-shared and commented on via half a dozen different social networks means that “journalists quickly lose control over the reception of their work,” she says. “This is why it is even more crucial that journalists frame stories clearly and avoid embedding and hyperlinking to known online spaces of radicalization.” Despite this kind of advice from Donovan and others, including sociologist Zeynep Tufekci, a number of media outlets linked to the Christchurch shooter’s writings, and at least one even included a clip from the live-streamed video of his attack.
When it comes to what the platforms themselves should do about mitigating the spread of misinformation and the amplification of extremists, Donovan says the obvious thing is that they should remove accounts that harass and use hate speech to silence others. This “would go a long way to stamping out the influencers who are providing organizing spaces for their fans to participate in networked harassment and bullying,” she says. On YouTube, some would-be “influencers” use hate speech as a way to attract new audiences and solicit donations, Donovan says, and these attempts are aided by the algorithms and the ad-driven model of the platforms. “These influencers would not have grown this popular without the platform’s consent,” she says. “Something can be done and the means to do it are already available.”
On the topic of the recent Christchurch Call—a commitment to take action on extremism signed by the governments of New Zealand, France, Canada, and a number of other nations, along with tech platforms like Google, Facebook, and Twitter—Donovan says that until there are tangible results, the agreement looks like just another pledge to do better. “These companies apologize and make no specific commitments to change. There are no benchmarks to track progress, no data trails to audit, no human rights abuses accounted for.” Something the Christchurch Call also fails to address, Donovan says, is the fundamental incentive structure behind how hate groups are financed and resourced online, “thanks to access to payment processing and broadcast technologies at will.”
His campaign is testing the boundaries of what platforms like Twitter and Facebook allow in politics. They’re having trouble coming up with an answer.
SAN FRANCISCO — In the first few months of his presidential campaign, Michael R. Bloomberg has been as aggressive on social media as President Trump was four years ago. But with a lot more money to spend.
Mr. Bloomberg has hired popular online personalities to create videos and images promoting his candidacy on social media. He is hiring 500 people — at $2,500 a month — to spend 20 to 30 hours a week recruiting their friends and family to write supportive posts. And his campaign has posted on Twitter and Instagram a flattering, digitally altered video of his debate performance last week in Las Vegas.
Through his money and his willingness to experiment, the billionaire former mayor of New York has poked holes in the already slapdash rules for political campaigns on social media. His digitally savvy campaign for the Democratic nomination has shown that if a candidate is willing to push against the boundaries of what social media companies will and won’t allow, the companies won’t be quick to push back.
“The Bloomberg campaign is destroying norms that we will never get back,” said Emerson Brooking, a resident fellow at the Atlantic Council’s Digital Forensic Research Lab, which studies disinformation. The campaign, he said, has “revealed the vulnerabilities that still exist in our social media platforms even after major reforms.”
On Friday, Twitter announced that it was suspending 70 pro-Bloomberg accounts for violating its policies on “platform manipulation and spam.” The accounts were part of a coordinated effort by people paid by the Bloomberg campaign to post tweets in his favor.
Twitter’s rules state, in part, “You can’t artificially amplify or disrupt conversations through the use of multiple accounts,” including “coordinating with or compensating others” to tweet a certain message.
In response to Twitter’s move, the Bloomberg campaign issued a statement on Friday evening. “We ask that all of our deputy field organizers identify themselves as working on behalf of the Mike Bloomberg 2020 campaign on their social media accounts,” it said. The statement added that the tweets shared by its staff and volunteers with their networks went through Outvote, a voter engagement app, and were “not intended to mislead anyone.”
Social media companies have been under pressure since the 2016 presidential election. Over the last year or so, they have publicized a stream of new rules aimed at disinformation and manipulation. Facebook, Google and Twitter have created teams that look for and remove disinformation. They have started working with fact checkers to distinguish and label false content. And they have created policies explaining what they will allow in political advertisements.
Most social media companies have special rules that place elected officials and political candidates in a protected category of speech. Politicians are allowed much more flexibility to say whatever they want online. But the companies have had a hard time defining what is a political statement and what crosses the line into deception.
When Mr. Trump posted an altered video of Speaker Nancy Pelosi, Facebook and Twitter refused to take it down. In October, a 30-second video ad on Facebook falsely accused former Vice President Joseph R. Biden Jr. of blackmailing Ukrainian officials to stop an investigation of his son.
Mr. Bloomberg, a latecomer to the race, has poured hundreds of millions of dollars into it. As the owner of Bloomberg L.P., he has the money and the resources to vastly outspend his rivals.
Mr. Bloomberg has reassigned his employees and recruited other workers from Silicon Valley with salaries nearly double what other campaigns have offered their staffs. The roughly $400 million he has spent has made him omnipresent in ads across Facebook and Instagram, as well as on more traditional forms of media such as television and radio.
His campaign’s sophisticated understanding of how to generate online buzz has shown how uneven social media’s new political speech rules can be.
Mr. Bloomberg’s lackluster performance in the Las Vegas debate — three days before Saturday’s Democratic caucuses in Nevada — was startling even to his supporters. But soon after, his campaign’s digital team edited the debate into digestible bites for social media that made Mr. Bloomberg appear to have done better than he did. On Thursday morning, a video was posted to his Twitter account.
“I’m the only one here, I think, that’s ever started a business. Is that fair?” Mr. Bloomberg said in the clip, showing him up on the debate stage. The video then cut to reactions from the other candidates, who appeared speechless. Crickets chirped in the background as the silence stretched on for 20 seconds.
In reality, Mr. Bloomberg had paused for about a second before moving on.
“It’s tongue in cheek,” Galia Slayen, a Bloomberg campaign spokeswoman, said of the video, which was viewed nearly two million times within hours. “There were obviously no crickets on the stage.”
Was the video against the rules?
Twitter said that, under its new guidance on manipulated media, it would most likely label the video as misleading. But that rule does not go into effect until March, and the company said it would not label Mr. Bloomberg’s video retroactively.
Facebook, which owns Instagram, said it would not remove the video. The company has recently altered its policy on manipulated media to state that Facebook will remove videos that have been edited “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.”
The companies are less certain of how they will handle Mr. Bloomberg’s hiring of 500 “deputy digital organizers” to recruit and train their friends. (Not all 500 have been hired yet.) His campaign has said it is paying people to use their own social media accounts to publish content of their choosing to mobilize voters for Mr. Bloomberg.
“We are meeting voters everywhere on any platform that they consume their news. One of the most effective ways of reaching voters is by activating their friends and network to encourage them to support Mike for president,” said Sabrina Singh, a spokeswoman for the Bloomberg campaign.
The Bloomberg team said the people it hired were ordinary Americans and would not include so-called social media influencers, individuals with large social media followings. The campaign said the digital organizers would not add disclosures to every post but would be directed to identify clearly in their social media profiles that they were affiliated with the Bloomberg campaign.
“We recommend campaign employees make the relationship clear on their accounts,” said Liz Bourgeois, a spokeswoman for Facebook. But if Mr. Bloomberg’s employees do not make clear on their accounts that the campaign paid them, Facebook has no easy way to identify them, she said.
Facebook has also made it clear that influencers who post content in support of Mr. Bloomberg’s campaign must clearly label it as sponsored. The company is also exploring ways to identify and catalog sponsored political content.
Google, which owns YouTube, did not respond to a request for comment on how it plans to handle paid influencers as well as digital organizers working for the Bloomberg campaign.
Mr. Brooking and other social media experts said they believed that until the companies saw themselves as media organizations — not neutral internet platforms — they would continue to struggle with how to police their platforms.
“We would not tolerate a falsified, unattributed political ad on CNN. We would not tolerate a paid campaign staffer masquerading as an objective analyst on NBC,” Mr. Brooking said. “We should not tolerate these behaviors on Twitter and Facebook today.”