The Case Against Google (nytimes.com)

Content recommendation algorithms optimize for engagement metrics, and one of the things those metrics reward is briefly grabbing a user’s attention. In the real world, someone can get my attention by screaming that there is a fire. Neither belief that there is a fire nor any interest in fires is necessary for a warning of fire to grab my attention. All that is needed is a desire for self-preservation and a degree of trust in the source of the warning.

Compounding the problem, attention-grabbing content boosts engagement and people make money off of videos, so there is a standing incentive encouraging the proliferation of attention-grabbing false information.

In a better world, this behavior would not be incentivized. In a better world, reputation metrics would let a person realize that the boy who cried wolf was the one who posted the attention-grabbing video. Humanity has long known that there are consequences for repeated lying. We have fables about that, warning liars away from lying.

I don’t think making those consequences explicit, as they are in many real-world cases of lying publicly in the most attention-grabbing way possible, would be unreasonable.
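
To make the idea concrete, here is a minimal sketch of what an explicit “boy who cried wolf” penalty could look like inside a ranking function. Everything here (the `Video` fields, `uploader_strikes`, the halving decay) is my own illustrative assumption, not anything Google actually exposes:

```python
# Hypothetical sketch: discount predicted engagement by source reputation,
# so repeat liars need ever more engagement to surface in recommendations.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_engagement: float  # e.g. expected watch minutes or click probability
    uploader_strikes: int        # confirmed instances of "crying wolf"

def reputation(strikes: int, decay: float = 0.5) -> float:
    """Each confirmed false alarm halves the uploader's reputation (assumed rule)."""
    return decay ** strikes

def rank_candidates(videos: list[Video]) -> list[Video]:
    """Rank by engagement weighted by reputation instead of raw engagement."""
    return sorted(
        videos,
        key=lambda v: v.predicted_engagement * reputation(v.uploader_strikes),
        reverse=True,
    )

if __name__ == "__main__":
    candidates = [
        Video("FIRE!!! (totally real)", predicted_engagement=9.0, uploader_strikes=3),
        Video("How fires actually spread", predicted_engagement=4.0, uploader_strikes=0),
    ]
    for v in rank_candidates(candidates):
        print(v.title)  # the sober video now outranks the serial false alarm
```

Under raw engagement the false alarm wins 9.0 to 4.0; with the reputation discount it scores 1.125 and loses, which is the fable made explicit.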

.. Google recommends that stuff to me, and I don’t believe in it or watch it. Watch math videos, get flat earth recommendations. Watch a few videos about the migration of Germanic tribes in Europe during the decline of the Roman Empire, get white supremacist recommendations.
My best guess? They want you to sit and watch YouTube for hours, so they recommend stuff watched by people who sit and watch YouTube for hours.
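
A toy sketch of that guess: if candidate videos are scored purely by how many minutes co-watchers went on to spend on them, the heaviest watchers’ habits dominate the sum. The watch logs and scoring below are invented purely for illustration, not a description of YouTube’s actual system:

```python
# Hypothetical sketch of "recommend what heavy watchers watch".
from collections import defaultdict

# user -> list of (video, minutes watched); made-up data
watch_logs = {
    "casual_viewer":   [("math lecture", 12), ("cooking", 8)],
    "binge_watcher_1": [("math lecture", 15), ("flat earth exposed?!", 240)],
    "binge_watcher_2": [("flat earth exposed?!", 300), ("conspiracy marathon", 500)],
}

def recommend(seed_video: str, top_n: int = 2) -> list[str]:
    """Score co-watched videos by total minutes spent on them by users who
    also watched the seed video. Binge watchers dominate the totals, so their
    favorites rise to the top regardless of topical relevance."""
    scores = defaultdict(float)
    for user, log in watch_logs.items():
        if any(video == seed_video for video, _ in log):
            for video, minutes in log:
                if video != seed_video:
                    scores[video] += minutes
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("math lecture"))  # ['flat earth exposed?!', 'cooking']
```

Watch a math lecture, and the 240 binge-watched minutes of flat-earth content outweigh the casual viewer’s 8 minutes of cooking, which matches the pattern described above.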
This stuff reminds me of the word “excitotoxins,” which is based on a silly idea yet seems to capture the addictive effect of stimulation. People are stimulated by things that seem novel, controversial, and dangerous. People craving stimulation will prefer provocative junk over unsurprising truth.