TikTok Risks: Protecting Children from Harmful Challenges & Addiction
The European Union is intensifying its scrutiny of TikTok, accusing the popular video-sharing platform of employing design features intentionally engineered to be addictive, particularly for children. Preliminary charges suggest TikTok hasn’t adequately assessed the potential harm these features pose to the physical and mental well-being of its users, including minors and “vulnerable adults.”
The EU’s concerns center on features like autoplay and infinite scroll, which are common across many social media platforms but are now facing increased regulatory pressure. According to the European Commission, TikTok should fundamentally alter the “basic design” of its service to mitigate these risks. This action falls under the EU’s Digital Services Act (DSA), a comprehensive rulebook designed to hold social media companies accountable for platform safety and user protection, with the threat of substantial fines for non-compliance.
TikTok has vehemently denied the accusations, stating that the Commission’s findings are “categorically false and entirely meritless.” The company has pledged to challenge the findings “through every means available.” However, the EU’s investigation, spanning two years, suggests a pattern of behavior that regulators believe warrants intervention.
The core issue isn’t simply the existence of these features, but rather the lack of sufficient assessment regarding their impact. Autoplay, for example, automatically plays the next video in a sequence, removing the need for a user to actively choose to continue watching. Infinite scroll continuously loads new content as the user scrolls down, eliminating natural stopping points. These mechanisms are designed to maximize engagement, but critics argue they can lead to compulsive use, particularly among younger users who may lack the cognitive maturity to regulate their own behavior.
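To make those mechanics concrete, here is a minimal sketch of how the two features are typically built on the web, using the standard IntersectionObserver browser API. The `#feed` container and `loadMoreVideos()` are hypothetical stand-ins for illustration, not TikTok's actual implementation.

```typescript
declare function loadMoreVideos(): Promise<HTMLVideoElement[]>; // hypothetical fetch

const feed = document.querySelector<HTMLElement>("#feed")!;

// Autoplay: any video occupying most of the viewport starts playing with
// no deliberate user action, and pauses again once it scrolls away.
const autoplayer = new IntersectionObserver(
  (entries) => {
    for (const e of entries) {
      const video = e.target as HTMLVideoElement;
      if (e.intersectionRatio >= 0.75) void video.play();
      else video.pause();
    }
  },
  { threshold: [0.75] }
);

// Infinite scroll: a sentinel element sits at the bottom of the feed; each
// time it becomes visible, another batch is fetched and inserted above it,
// so the feed never reaches a natural stopping point.
const sentinel = document.createElement("div");
feed.append(sentinel);

const loader = new IntersectionObserver(async ([entry]) => {
  if (entry.isIntersecting) {
    for (const video of await loadMoreVideos()) {
      feed.insertBefore(video, sentinel);
      autoplayer.observe(video); // new videos join the autoplay observer
    }
  }
});
loader.observe(sentinel);
```

Note that neither mechanism waits for a user decision: the code alone determines when playback starts and when more content arrives, which is precisely the design property regulators are questioning.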
This isn’t the first time TikTok’s practices have come under fire. A recent report by Amnesty International, titled “Dragged into the Rabbit Hole,” highlighted how easily children can be steered towards harmful content on the platform. The study found that teenagers searching for content related to mental health were quickly exposed to videos promoting depressive themes and, within hours, were presented with content related to suicide. This demonstrates the potential for TikTok’s algorithm to create “rabbit holes” that exacerbate existing vulnerabilities.
Further concerns were raised in a report from Annahar, detailing lawsuits filed in the UK by mothers seeking justice for children who died attempting dangerous “challenges” popularized on TikTok. These challenges, often involving risky or harmful activities, underscore the platform’s potential to facilitate real-world harm.
The EU’s move follows similar actions by other countries. Australia has already introduced a national ban on social media for children under 16, and France is pursuing similar legislation. Denmark is considering a national age limit of 15, and the UK’s House of Lords has voted in favor of a ban until age 16, awaiting a decision from the House of Commons. Spain’s Prime Minister is also pushing for a ban, despite facing criticism from Elon Musk, owner of X (formerly Twitter).
The underlying problem extends beyond TikTok. The business model of most major social media platforms relies on maximizing user engagement to generate advertising revenue. Algorithms are designed to serve content that keeps users scrolling, clicking, and watching for as long as possible. While this isn’t inherently malicious, it creates a powerful incentive to prioritize engagement over user well-being. The result is a digital environment where extreme, shocking, or emotionally charged content often thrives, potentially at the expense of mental health and responsible behavior.
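To illustrate the incentive described above, here is a toy ranking function that orders a feed purely by predicted engagement. Every name in it (`VideoItem`, `predictedWatchSeconds`, `shareRate`, and the weights) is invented for this sketch; no major platform’s real scoring formula is public.

```typescript
interface VideoItem {
  id: string;
  predictedWatchSeconds: number; // model estimate of how long this user will watch
  shareRate: number;             // historical shares per impression
}

// A pure engagement objective: nothing in the score rewards user
// well-being, accuracy, or age-appropriateness.
function rankByEngagement(items: VideoItem[]): VideoItem[] {
  const score = (v: VideoItem) => v.predictedWatchSeconds + 100 * v.shareRate;
  return [...items].sort((a, b) => score(b) - score(a));
}
```

Even in this simplified form, the problem is visible: whatever content best holds attention rises to the top, regardless of why it holds attention.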
The EU’s accusations against TikTok represent a significant escalation in the regulatory pressure facing social media companies. The DSA gives the Commission broad powers to investigate and enforce rules related to content moderation, user safety, and algorithmic transparency. The outcome of this case could set a precedent for how other platforms are regulated in the future. It also raises fundamental questions about the responsibility of tech companies to protect their users, particularly children, from the potential harms of their products.
The debate isn’t simply about banning features or imposing age restrictions. It’s about fundamentally rethinking the design of social media platforms to prioritize user well-being over engagement metrics. This could involve implementing features that promote mindful usage, providing users with greater control over their feeds, and increasing transparency around algorithmic decision-making. The challenge lies in finding a balance between innovation, freedom of expression, and the need to protect vulnerable users from harm.
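As one concrete example of the kind of “mindful usage” feature imagined above, a platform could interrupt endless viewing after a set session length. The sketch below is purely illustrative; the function name and the 45-minute threshold are assumptions, not any platform’s actual behavior.

```typescript
// Illustrative sketch: a session timer that interrupts endless viewing.
// startBreakReminder() and the 45-minute limit are invented for this example.
function startBreakReminder(limitMinutes: number, onLimit: () => void): () => void {
  const timer = setTimeout(onLimit, limitMinutes * 60_000);
  return () => clearTimeout(timer); // caller cancels when the session ends
}

// After 45 minutes, pause every playing video and prompt the user.
const cancelReminder = startBreakReminder(45, () => {
  document.querySelectorAll<HTMLVideoElement>("video").forEach((v) => v.pause());
  alert("You have been watching for 45 minutes. Time for a break?");
});
// cancelReminder() would be called if the user ends the session earlier.
```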
