Clicks and Views

Summary
Whistleblowers from Meta Platforms and TikTok told the BBC that both companies tolerated harmful or “borderline” content on their platforms because outrage and controversy generate higher engagement. Internal research reportedly showed that recommendation algorithms tend to promote posts that provoke anger, bullying, conspiracy theories, or hate speech, since such content keeps users on the platform longer and increases advertising revenue. Former employees also said safety teams were often under-resourced while the companies prioritised product growth and competition; after TikTok’s rapid rise, Meta reportedly rushed to launch Instagram Reels without sufficient safeguards. Some whistleblowers further claimed that moderation systems sometimes prioritised politically sensitive cases over reports involving children or other harmful posts. Both companies deny these allegations and say they invest heavily in safety measures and moderation technology.
Application
This episode also illustrates how commercialisation can corrupt ethical decision-making when profit becomes the overriding objective. In highly competitive digital markets, social media companies depend heavily on advertising revenue, which grows the longer users remain on their platforms. Algorithms are therefore optimised primarily for engagement rather than for well-being. Because provocative or outrage-inducing content attracts more clicks, views, and comments, promoting such material becomes economically rewarding even when it harms users or spreads misinformation. Over time, this profit logic can crowd out ethical considerations.