Algorithms Won’t Fix What’s Wrong With YouTube

What seems like a sensible decision to an algorithm can be a terrible misstep to a human.

Last week, The New York Times reported that YouTube’s algorithm was encouraging pedophiles to watch videos of partially-clothed children, often after they watched sexual content. To most of the population, these videos are innocent home movies capturing playtime at the pool or children toddling through water fountains on vacation. But to the pedophiles who were watching them thanks to YouTube’s algorithm, they were something more.

Human intuition can recognize the motives behind people's viewing decisions and step in to discourage them — which most likely would have happened if videos were being recommended by humans rather than by a computer. But to YouTube's nuance-blind algorithm, trained to think in simple logic, serving up more videos to sate such an appetite is a job well done.

The result? The algorithm — and, consequently, YouTube — incentivizes bad behavior in viewers.

That dynamic works both ways. Many creators have recognized the flaws in YouTube's algorithm and taken advantage of them, realizing that it judges videos by what appears on screen rather than by what happens there. If you (or your child) watch one Peppa Pig video, you'll likely want another. And as long as it's Peppa Pig in the frame, it doesn't matter to the algorithm what the character does in the skit.

In February 2015, YouTube launched a child-centric version, YouTube Kids, in the United States. The app's group manager, Shimrit Ben-Yair, described it as “the first Google product built from the ground up with the little ones in mind.” Ostensibly, YouTube Kids was meant to ensure that children could find videos without accidentally clicking through to other, less family-friendly content — but it didn't take long for inappropriate videos to show up in YouTube Kids' ‘Now playing’ feeds.

Using cheap, widely available technology, animators created original video content featuring some of Hollywood's best-loved characters. While an official Disney Mickey Mouse would never swear or act violently, in these videos Mickey and other children's characters were sexual or violent. Generally speaking, the people creating these videos weren't trying to wean children off the official, sanitized, friendly content: They were making content they found funny for fellow adults, and they didn't necessarily want it served to children. But unlike adults, who can distinguish parody and mischief from the real thing, the algorithm can't yet tell the difference.

The problem hasn't been fixed. Nearly two years after “Elsagate,” as the scandal over unsuitable content was dubbed, it still persists. In 2019, researchers analyzed thousands of videos featuring characters popular with kids and targeted at toddlers aged 1 to 5. They also tracked how YouTube recommended subsequent videos, and found a 3.5 percent chance that a child would come across inappropriate footage within 10 clicks of a child-friendly video.

That's particularly concerning in light of data analyzed for my book, “YouTubers,” published this week: The Insights People, which surveys 20,000 children and their parents about their media usage, found that just four in 10 parents always monitor their child's YouTube use — and that one in 20 children aged 4 to 12 say their parents never check what they're watching.

Unsavory content is a problem that YouTube has been slow to acknowledge — and even slower to deal with, as we learned in early June with YouTube personality Steven Crowder's baiting of Carlos Maza, a queer journalist for the news outlet Vox. The incident coincided with a long-planned shift in YouTube's policy toward hate speech, but showed just how inconsistently the platform polices its rules: After initially saying Mr. Crowder's videos weren't inciting harassment, the company then said they did, before attributing its decision to a pattern of behavior that included selling a homophobic T-shirt off-site. At the height of the panic around Mr. Crowder's videos, YouTube's public policy on hate speech and harassment appeared to shift four times in a 24-hour period as the company sought to clarify what the new normal was. Susan Wojcicki, YouTube's C.E.O., apologized earlier this week for how “hurtful” the company's decision was to the LGBTQ+ community.

One possible solution that would address both problems would be to strip out YouTube's recommendation algorithm altogether. But it is highly unlikely that YouTube would ever do such a thing: The algorithm drives vast swaths of YouTube's views, and taking it away would reduce the time viewers spend watching videos, as well as Google's ad revenue.

If YouTube won't remove the algorithm, it must, at the very least, make significant changes and bring greater human involvement into the recommendation process. The platform has some human moderators looking at so-called “borderline” content to train its algorithms, but more humanity is needed throughout. Currently, the recommendation engine cannot understand why it shouldn't recommend videos of children to pedophiles, and it cannot understand why it shouldn't suggest sexually explicit videos to children. It cannot understand, because the incentives are twisted: Every new video view, regardless of who the viewer is or what the viewer's motives may be, is counted as a success.

YouTube’s proposed solution is a uniquely technological one: Throw another algorithm at the problem. But when algorithms got us into this mess, is trusting them to get us out really the answer?

What Is Up with Facebook’s Algorithm Lately?

This morning, Matt Lewis offers an observation about Facebook that I’ll bet will leave a lot of writers and publications nodding in agreement: “My beef w/ Facebook is that it’s a waste of time for me as a content creator. Here’s an example: I have about 13.4K total page ‘likes.’ Yet, this post (which is substantive, but/and not provocative or flashy) reached just 328 people.”

Jonathan Franks offers a cynical thought: “I think the real problem is you don’t write the kind of content Facebook’s algorithm wants — if you were to write something irresponsible like ‘Pelosi operates heroin den in pizza parlor’ it would go nuts, and therein lies the problem!”

Author Christi Daugherty offers a perhaps-exaggerated example that suggests the social-media site is algorithm-ing itself into uselessness: “All authors are having this right now. Facebook won’t even show our content to people who have specifically followed us and asked to see this content. Followers are saying ‘Show me things from this author’ and Facebook is like ‘How about a Nazi meme from your cousin instead?’”