Science & Technology
The truth about YouTube’s recommendations
Think twice before clicking on any recommended YouTube videos. You’ll thank yourself later.
Johnson Shrestha
While casually scrolling through YouTube one day, Niranjan Joshi, a 24-year-old student, came across videos on theories and opinions about Nepali government and politics. He already spent a lot of time on YouTube, was quickly hooked, and binge-watched these videos. This went on for a few weeks, with him watching the videos one after another and even sharing them on his Facebook page.
But now, he has stopped visiting YouTube altogether. The reason: his YouTube recommendations are filled with nothing but political and conspiracy videos, and he doesn’t know how to change his feed.
The same happened to a friend of his, who requested anonymity, whose YouTube recommendations were filled with provocative Spanish music videos. “I may have watched some sexy videos once or twice, but now they’re there every time I open YouTube and I’m embarrassed to open the site in front of others,” he says.
YouTube recommends videos in the ‘up next’ list on the right side of the screen whenever a user is watching something. If the user enables the ‘autoplay’ button, these videos play automatically, one after another. According to Guillaume Chaslot, a former YouTube employee and founder of AlgoTransparency, a website that attempts to uncover which videos YouTube’s recommendation algorithm promotes, users should be wary of how YouTube works. Even innocent-looking features such as ‘up next’ are YouTube’s artificial intelligence (AI) at work, designed not only to serve users the type of content they want but to keep them watching for as long as possible.
YouTube’s AI thus generates recommendations not from users’ stated choices and interests, but from their activity and watch time. While this is great for a company trying to sell ads, it is not always what users want.
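In simplified terms, a watch-time-driven ranker of the kind Chaslot describes could be sketched as follows. This is a hypothetical illustration, not YouTube’s actual system; the function, field names, and numbers are all invented.

```python
# Illustrative sketch only: a toy ranker that orders candidate videos by
# predicted watch time, the signal the article says the platform optimises
# for. Nothing here reflects YouTube's real code or data.

def rank_by_watch_time(candidates):
    """Sort candidate videos by predicted watch time, descending."""
    return sorted(candidates,
                  key=lambda v: v["predicted_watch_minutes"],
                  reverse=True)

candidates = [
    {"title": "Calm documentary", "predicted_watch_minutes": 4.0},
    {"title": "Outrage compilation", "predicted_watch_minutes": 11.5},
    {"title": "Cat video", "predicted_watch_minutes": 2.5},
]

# The video expected to hold attention longest tops the 'up next' list,
# regardless of its quality or accuracy.
up_next = rank_by_watch_time(candidates)
```

Note that nothing in this toy ranking rewards accuracy or quality; whatever holds attention longest wins the slot.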
Once you watch a certain type of content, the algorithm keeps recommending similar content from that category. For cat videos, this works quite well. But many recommendations are borderline content, like fake news, conspiracy theories, and clickbait-titled videos, which sits closest to the edge of YouTube’s policies and thus gets more engagement.
This can be attributed to people’s tendency to find sensational content exciting; the more engagement such content gets, the more the AI promotes it. This is known as a feedback loop.
Users engage more with such borderline content, which means more views. The number of views is, in turn, one of the signals YouTube’s AI uses to recommend a category to other viewers. But more views also give content creators the impression that they should make this type of content to become popular. Basically, the more grotesque the content you make, the more likely it is to keep people on YouTube, and for creators and YouTube alike, that is a revenue opportunity. Hence, the cycle, or loop, goes on.
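The loop described above can be illustrated with a toy simulation. The assumptions here are purely for illustration: recommendations are handed out in proportion to existing views, and the sensational video converts them into new views at a higher rate.

```python
# Hypothetical simulation of the feedback loop: videos with more views get
# more recommendations, which earn them still more views. All numbers and
# titles are invented for illustration.

def run_feedback_loop(views, engagement_rate, rounds=5):
    """Each round, recommendations are allocated in proportion to current
    views, and each video converts its share into new views at its own
    engagement rate."""
    total_recs = 1000  # recommendations handed out per round
    for _ in range(rounds):
        total_views = sum(views.values())
        for title in views:
            share = views[title] / total_views
            views[title] += int(total_recs * share * engagement_rate[title])
    return views

views = {"measured report": 100, "conspiracy clip": 100}
# Assumed: sensational content engages viewers at a higher rate.
rate = {"measured report": 0.2, "conspiracy clip": 0.6}

# Both start equal, but the higher-engagement video pulls steadily ahead,
# and its growing view count wins it an ever larger share of recommendations.
final = run_feedback_loop(views, rate)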
According to Chaslot, the AI does not intend to promote such toxic content. Its objective is simply to keep users engaged, and quality is something it does not acknowledge. Roughly two out of every 10 times, the recommendations are filled with edgy content. This is where YouTube’s AI gets it wrong: watch time does not mean quality.
But at a certain point, the recommendations get even stronger. It starts with YouTube pushing certain types of content to those most likely to engage with it. As the AI is trained on more data, it becomes more efficient at recommending content targeted at specific users. And as it improves, it can more precisely predict who is interested in this content, which also makes it less likely to recommend it to those who aren’t. This means that, at this stage, such content rarely reaches the very users who would flag or report it.
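A rough, hypothetical calculation shows why more precise targeting means fewer flags: the share of impressions reaching unreceptive viewers, the ones likely to report the content, shrinks as the recommender’s precision grows. The names and numbers below are invented for illustration only.

```python
# Illustrative sketch of the targeting effect described above. 'precision'
# is a made-up dial: 0 means recommendations go out at random, 1 means they
# reach only receptive users. None of this reflects YouTube's real metrics.

def expected_flag_rate(audience, precision):
    """Fraction of impressions likely to be flagged, given how precisely
    the recommender targets receptive users."""
    total = audience["receptive"] + audience["unreceptive"]
    # With higher precision, a smaller share of impressions reaches
    # unreceptive viewers, who are the ones who would flag the video.
    share_to_unreceptive = (1 - precision) * (audience["unreceptive"] / total)
    return share_to_unreceptive * audience["flag_rate"]

audience = {"receptive": 200, "unreceptive": 800, "flag_rate": 0.3}

early = expected_flag_rate(audience, precision=0.1)   # imprecise young model
mature = expected_flag_rate(audience, precision=0.9)  # well-trained model
# The mature model exposes far fewer unreceptive viewers, so far fewer flags.
```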
But here’s where it gets even more toxic. In February, Matt Watson, a former YouTube creator, found that the algorithm was making it easier for pedophiles to connect and share child pornography in the comments of certain videos. Worse still, YouTube was monetising these videos and actively pushing thousands of users towards videos of children.
And this was not the platform’s first scandal. In recent years, YouTube has promoted terrorist content, foreign state-sponsored propaganda, hate-crime videos, softcore zoophilia, and countless conspiracy theories. This not only fosters animosity on YouTube, but also undermines the ‘informed society’ we seek.
To break out of this feedback loop, even partially, users can disable their YouTube recommendations altogether, but this is only a short-term solution. The long-term solution lies with Google itself, and it is doubtful whether the company will change an algorithm that is a money-making machine for it.
What users can do is think twice before clicking on any recommended video.