Social Media Ban

Summary
An increasing number of governments worldwide are considering banning social media access for children under 16 as concerns grow over online harms such as bullying, exploitation, and mental health risks. Following Australia’s landmark ban, countries across Southeast Asia, Europe, and parts of the United States are exploring similar measures, reflecting strong public support for stricter regulation. Experts note that governments are increasingly willing to face criticism over feasibility, privacy, and civil liberties because they no longer trust technology companies to adequately self-regulate. While platforms such as Meta Platforms, TikTok, and Snap Inc. argue that bans may push young users toward unregulated platforms or encourage circumvention through tools like VPNs, regulators view stricter rules as necessary to shift responsibility back onto tech firms. Overall, the debate reflects a shift from relying on voluntary safety features to enforcing stronger government oversight of social media platforms.
Application
Government intervention can be a more reliable way than industry self-regulation of addressing the externalities created by digital platforms. In the case of social media, companies profit from engagement, but the social costs, such as cyberbullying, radicalisation, misinformation, and mental health harm among youths, are borne by society rather than by the firms themselves. Because these negative externalities are not reflected in the platforms’ profit calculations, companies have limited incentive to address them fully on their own. Regulation therefore helps realign incentives by setting enforceable boundaries that prevent harmful outcomes from being amplified. As technology disrupts social norms and introduces new risks faster than institutions can adapt, government oversight becomes an important mechanism through which society collectively adjusts to technological change.