Last year, Wikipedia co-founder Jimmy Wales told NPR that Wikipedia has largely avoided the “fake news” problem, raising the question of what the encyclopedia does differently from other popular websites. As Brian Feldman suggested in New York magazine, perhaps it’s simply the willingness within the Wikipedia community to delete. If a user posts bad information on Wikipedia, other users are authorized and empowered to remove that unencyclopedic content. It’s a striking contrast to Twitter, which allows lies and inflammatory statements to remain on its platform for years.
The Wikipedia community has also embraced automated technologies to protect the integrity of the encyclopedia. While YouTube scans videos for potential content violations using its Content ID database, the community of Wikipedia editors has created editing bots that go further by making determinations about content quality. For example, ClueBot NG quickly reverts probable vandalism based on its machine-learning algorithm and a database of common indicators such as expletives and poor punctuation. In 2016, YouTube courted controversy for attempting to enforce its policy against inappropriate language, with many vloggers alleging censorship. But a civility requirement makes sense for Wikipedians because the community shares a vision: to build a better encyclopedia.
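To make the idea concrete, here is a minimal sketch of how a bot might score an edit using surface indicators like the ones mentioned above. This is purely illustrative: ClueBot NG actually uses a trained artificial neural network, and none of the function names, word lists, or thresholds below come from its real code.

```python
# Toy heuristic vandalism scorer, loosely inspired by the indicator-based
# approach described above. All names and thresholds here are hypothetical.

EXPLETIVES = {"stupid", "dumb", "sucks"}  # stand-in for a real indicator database


def vandalism_score(edit_text: str) -> float:
    """Return a score in [0, 1]; higher suggests the edit is likely vandalism."""
    words = edit_text.lower().split()
    if not words:
        return 0.0
    # Signal 1: words matching the expletive list (punctuation stripped).
    expletive_hits = sum(1 for w in words if w.strip(".,!?") in EXPLETIVES)
    # Signal 2: poor punctuation, e.g. runs of repeated '!' or '?'.
    shouting = edit_text.count("!!") + edit_text.count("??")
    # Signal 3: all-caps words longer than three characters.
    all_caps = sum(1 for w in edit_text.split() if len(w) > 3 and w.isupper())
    raw = expletive_hits + shouting + all_caps
    return min(1.0, raw / len(words) * 3)


def should_revert(edit_text: str, threshold: float = 0.5) -> bool:
    """Decide whether a bot would revert this edit under the toy heuristic."""
    return vandalism_score(edit_text) >= threshold
```

A real system would combine many more features and learn their weights from editor-labeled examples rather than hand-tuning a threshold, which is precisely why ClueBot NG relies on machine learning instead of fixed rules like these.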