Algorithmic radicalization is the process by which personalized recommender algorithms on social media platforms steer users toward progressively more extreme content. Because these algorithms are designed to maximize user engagement, they can inadvertently construct echo chambers and filter bubbles that reinforce users' pre-existing beliefs, feeding confirmation bias and group polarization. The phenomenon is particularly widespread on platforms such as Facebook[2], YouTube, and TikTok, which have come under fire for facilitating the spread of misinformation, hate speech, and extremist ideologies, sparking legal and regulatory debate. These algorithms also accelerate the dissemination of false news and extremist content, which can outpace the truth. Algorithmic radicalization has been the subject of extensive research, with scholars raising concerns about its societal effects and advocating for new regulations to manage advanced artificial intelligence[1].
On popular social media sites such as YouTube and Facebook, recommender algorithms record user interactions, from likes and dislikes to the amount of time spent on a post, and use those signals to generate an endless feed designed to keep users engaged. Because the feed favors content similar to what the user has already consumed, it can funnel users into echo chamber channels in which media preferences and self-confirmation drive growing polarization, and over time the development of radicalized extremist political views.
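This feedback loop can be made concrete with a toy simulation. The sketch below is purely illustrative and rests on assumptions that are not drawn from any real platform's proprietary ranking system: content is reduced to a one-dimensional "ideology" score, users are assumed to engage only with items near their current views, and more provocative items within that window are assumed to earn more engagement.

```python
import random

random.seed(42)

# Hypothetical one-dimensional "ideology" scores in [-1, 1] for a content catalog.
catalog = [random.uniform(-1.0, 1.0) for _ in range(2000)]

def next_item(user_pref: float, radius: float = 0.2) -> float:
    """Pick the item an engagement-maximizing ranker would surface next.

    Two illustrative assumptions:
      * relevance window: the user only engages with content within
        `radius` of their current views;
      * engagement bonus: within that window, more provocative (extreme)
        items earn more watch time and likes, so they rank highest.
    """
    candidates = [x for x in catalog if abs(x - user_pref) <= radius]
    return max(candidates, key=abs)

user_pref = 0.05       # the simulated user starts near the political center
learning_rate = 0.5    # how strongly consumed content shifts inferred taste

for step in range(25):
    consumed = next_item(user_pref)
    # The platform updates its model of the user toward what was just
    # consumed, which shifts the next relevance window further outward:
    # the echo-chamber feedback loop.
    user_pref += learning_rate * (consumed - user_pref)
    print(f"step {step:2d}: inferred preference = {user_pref:+.3f}")
```

Run repeatedly, the inferred preference drifts steadily toward one extreme of the spectrum even though each individual recommendation is only slightly more extreme than the user's current position; this gradual drift is the dynamic the echo chamber critique describes.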
Algorithmic radicalization remains controversial, in part because removing echo chamber channels is often not in the commercial interest of social media companies. Though these companies have acknowledged that algorithmic radicalization exists, it remains unclear how each will manage this growing threat.