The Washington Post reported earlier this month that moderators for YouTube are trained to treat the most popular video producers differently from other creators by, for instance, allowing hateful speech to remain on the site while enforcing the site’s policies more stringently against creators with fewer followers. YouTube denied the claims.
YouTube was buffeted by allegations in June that it failed to act against a popular video creator who repeatedly mocked a journalist for being openly gay and of Mexican descent.
Bria Kam and Chrissy Chambers, whose BriaAndChrissy channel has about 850,000 subscribers, allege that YouTube’s enforcement against their channel reduced their monthly revenue to around $500 from $3,500.
Yet according to the lawsuit, YouTube routinely restricts content that its own rules allow by, among other things, labeling videos aimed at LGBT communities as suitable for restricted audiences only, or altering the thumbnail previews that serve as enticements for potential viewers.
The lawsuit mentions a BriaAndChrissy music video titled “Face Your Fears,” which features the couple standing in front of anti-gay protesters, kissing. The song lyrics encourage people in the LGBT community to be themselves. “No more hate, no more shame,” the song goes. For reasons that are unclear to the creators, the video has been placed in “restricted mode,” making it invisible to viewers at many schools and libraries, and to anyone who has activated the mode, which is meant to limit offensive content.
The couple say these restrictions have “stigmatized” their videos and limited their audience, causing their earnings to dwindle. The lawsuit also says YouTube has allowed anti-gay groups to place obscene advertisements before the BriaAndChrissy videos.
Bret Somers, whose Watts the Safeword channel has about 200,000 subscribers, claims in the suit that his average monthly sales of $6,500 fell to around $300 as a result of YouTube’s actions against him, including restricting most of his videos to small audiences. Somers, in the suit, said that videos such as those describing his experience traveling to events, festivals or conventions do not appear for many viewers. His channel also features more adult content, such as people watching virtual-reality pornography and discussions of sex toys.
Lindsay Amer, another plaintiff and the creator of “Queer Kid Stuff,” says the channel’s videos, meant for kids aged 3 to 17, initially gained traction. But after a neo-Nazi website accused her of encouraging homosexuality, the comments sections underneath the videos were bombarded with hate speech that referred to Amer as a pedophile and attacked the LGBT community. Amer says parents wrote in to say that while they supported the content of Amer’s videos, they would not allow their children to watch them because of the comments.
What seems like a sensible decision to an algorithm can be a terrible misstep to a human.
Last week, The New York Times reported that YouTube’s algorithm was encouraging pedophiles to watch videos of partially clothed children, often after they watched sexual content. To most of the population, these videos are innocent home movies capturing playtime at the pool or children toddling through water fountains on vacation. But to the pedophiles who were watching them thanks to YouTube’s algorithm, they were something more.
Human intuition can recognize the motives behind people’s viewing decisions, and can step in to discourage them — which most likely would have happened if videos were being recommended by humans, and not a computer. But to YouTube’s nuance-blind algorithm — trained to think with simple logic — serving up more videos to sate such an appetite is a job well done.
The result? The algorithm — and, consequently, YouTube — incentivizes bad behavior in viewers.
That dynamic works both ways. Many creators have recognized the flaws in YouTube’s algorithm, and have taken advantage of them, realizing that the algorithm relies on snapshots of visual content, rather than actions. If you (or your child) watch one Peppa Pig video, you’ll likely want another. And as long as it’s Peppa Pig in the frame, it doesn’t matter what the character does in the skit.
In February 2015, YouTube launched a child-centric version, YouTube Kids, in the United States. The app’s group manager Shimrit Ben-Yair described it as “the first Google product built from the ground up with the little ones in mind.” Ostensibly, YouTube Kids was meant to ensure that children could find videos without being able to accidentally click onto other, less family-friendly content — but it didn’t take long for inappropriate videos to show up in YouTube Kids’ ‘Now playing’ feeds.
Using cheap, widely available technology, animators created original video content featuring some of Hollywood’s best-loved characters. While an official Disney Mickey Mouse would never swear or act violently, in these videos Mickey and other children’s characters were sexual or violent. Generally speaking, the people creating these videos weren’t trying to wean children off the official, sanitized, friendly content: They were making content they found funny for fellow adults. They don’t necessarily want their adult content served to children. But unlike adults, who can distinguish parody and mischief from the real thing, the algorithm can’t yet tell the difference.
The problem hasn’t been fixed. Nearly two years after “Elsagate,” as the scandal over unsuitable content was dubbed, inappropriate videos still slip through. In 2019, researchers analyzed thousands of videos targeted at toddlers aged 1 to 5 and featuring characters popular with kids. Tracking how YouTube recommended subsequent videos, they found a 3.5 percent chance of a child coming across inappropriate footage within 10 clicks of a child-friendly video.
That’s particularly concerning in light of data analyzed for my book, “YouTubers,” published this week, by The Insights People, who survey 20,000 children and their parents about their media usage. Just four in 10 parents always monitor their child’s YouTube usage — and one in 20 children aged 4 to 12 say their parents never check what they’re watching.
Unsavory content is a problem that YouTube has been slow to acknowledge — and even slower to deal with, as we learned in early June with YouTube personality Steven Crowder’s baiting of Carlos Maza, a queer journalist for the news outlet Vox. The incident coincided with a long-planned shift in YouTube’s policy toward hate speech, but showed just how inconsistently the platform polices its rules: After initially saying Mr. Crowder’s videos weren’t inciting harassment, the company then said they did, then said it was down to a pattern of behavior that included selling a homophobic T-shirt off-site. At the height of the panic around Mr. Crowder’s videos, YouTube’s public policy on hate speech and harassment appeared to shift four times in a 24-hour period as the company sought to clarify what the new normal was. Susan Wojcicki, YouTube’s C.E.O., apologized earlier this week for how “hurtful” the company’s decision was to the LGBTQ+ community.
One possible solution that would address both problems would be to strip out YouTube’s recommendation algorithm altogether. But it is highly unlikely that YouTube would ever do such a thing: that algorithm drives vast swaths of YouTube’s views, and taking it away would reduce the time viewers spend watching its videos, as well as Google’s ad revenue.
If YouTube won’t remove the algorithm, it must, at the very least, make significant changes, and have greater human involvement in the recommendation process. The platform has some human moderators looking at so-called “borderline” content to train its algorithms, but more humanity is needed in the entire process. Currently, the recommendation engine cannot understand why it shouldn’t recommend videos of children to pedophiles, and it cannot understand why it shouldn’t suggest sexually explicit videos to children. It cannot understand, because the incentives are twisted: every new video view, regardless of who the viewer is and what the viewer’s motives may be, is considered a success.
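The twisted incentive can be made concrete with a toy sketch (illustrative only — YouTube’s actual system is proprietary and vastly more complex; all names here are hypothetical): a recommender whose only reward signal is “a view happened” ranks videos purely by how often they were watched next, with no term for who is watching or why.

```python
# Toy illustration of a view-maximizing recommender (NOT YouTube's real
# system). The only "success" signal is a logged view: the ranker has no
# notion of who the viewer is or what their motive might be.

from collections import defaultdict

class NaiveRecommender:
    def __init__(self):
        # co_views[a][b] = how often video b was watched right after video a
        self.co_views = defaultdict(lambda: defaultdict(int))

    def log_view(self, previous_video, next_video):
        # Every view counts as a success, regardless of viewer or context.
        self.co_views[previous_video][next_video] += 1

    def recommend(self, current_video, k=3):
        # Rank candidates purely by historical co-view counts.
        candidates = self.co_views[current_video]
        return sorted(candidates, key=candidates.get, reverse=True)[:k]

rec = NaiveRecommender()
for nxt in ["pool_video", "pool_video", "cartoon"]:
    rec.log_view("beach_video", nxt)

# The recommender amplifies whatever was watched most, with no judgment
# about whether the viewing pattern itself is harmful.
print(rec.recommend("beach_video"))  # ['pool_video', 'cartoon']
```

A human curator could look at the same co-view data and ask why a pattern exists before amplifying it; this sketch, like the engine it caricatures, cannot.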
YouTube’s proposed solution is a uniquely technological one: Throw another algorithm at the problem. But when algorithms got us into this mess, is trusting them to get us out really the answer?
Social media sites like Facebook, Twitter, and YouTube exploit our tribalism to keep us watching ads. That makes them a perfect target for trolls, conspiracy theorists, and con artists.