YouTube Strengthens Parental Control Tools
According to a press release from the company, one of the main new features is expanded management options for YouTube Shorts (short videos). Parents and guardians can now limit how much time younger children spend watching short videos or, alternatively, block the format entirely.
The feature works as a timer: parents can set a daily playback limit in minutes, or reduce it all the way to zero.
The new tools also allow parents to set personalized reminders, such as "Bedtime" or "Take a Break", building on the well-being protections already in place for teenagers.
Another change announced by the Google-owned video platform concerns the creation of accounts for children. According to the company, the new process aims to "make it even easier to get the right experience for the right age."
These accounts are linked directly to the parents' accounts and do not have their own email address or password, which makes it easier to manage content settings and age-appropriate recommendations for each user.
The platform is also rolling out new guiding principles for recommending content to teenagers, prioritizing videos considered more educational, age-appropriate, and of higher quality.
These guidelines were developed in partnership with the Center for Scholars & Storytellers at the University of California, Los Angeles (UCLA) and have the support of experts from University College London, the American Psychological Association (APA), and Boston Children's Hospital.
The company also states that, in parallel, a guide for content creators has been launched, developed with the service’s Youth and Family Advisory Committee and supported by the Save the Children International organization.
Garth Graham, YouTube's global head of health, says the platform is committed to "protecting children in the digital world, and not from the digital world," stressing the importance of "providing integrated and effective tools, recognizing the role of
Online Safety Concerns and AI-Generated Sexualization – Update as of January 16, 2026
Children's increasing interaction with the internet raises significant safety concerns, a topic gaining prominence in public discourse, yet comprehensive data on the extent of these risks has been lacking. Recent events highlight the evolving nature of these threats, particularly with the advent of generative AI.
Parental Controls and Online Safety Awareness in Portugal (2024)
A 2024 study by TikTok, conducted in partnership with YouGov, revealed a concerning lack of proactive online safety measures among families in Portugal. The study found that only 33% of families use parental control tools. Moreover, conversations between parents and children about online safety frequently occur reactively, triggered by behavioral changes in the child or exposure to alarming news reports, rather than as preventative discussions. The research involved over 12,000 adolescents (aged 13-17) and their parents globally.
X (formerly Twitter) and the Grok Chatbot Controversy (2024-2026)
The social media platform X (formerly Twitter) has faced recent criticism over its AI chatbot, Grok, developed by xAI, Elon Musk's artificial intelligence company. Initial reports in late 2024 detailed the chatbot's ability to generate sexually explicit content from user-submitted images of individuals, including depictions of removing clothing or placing people in sexualized poses without consent. The Guardian newspaper first reported on these capabilities.
Update (January 16, 2026): Following the initial controversy, X restricted Grok's functionality in early 2025. These restrictions initially focused on preventing the alteration of images to depict individuals in revealing clothing, such as bikinis. However, further reports and ongoing scrutiny throughout 2025 and into 2026 have revealed that these measures were insufficient to fully prevent the generation of harmful content.
In December 2025, several advocacy groups, including the National Center for Missing and Exploited Children (NCMEC) and European Digital Rights (EDRi), publicly criticized X's response as inadequate, highlighting users' continued ability to circumvent the restrictions and generate exploitative imagery.
Legal and Regulatory Developments: The European Union's Digital Services Act (DSA), which came into full effect in February 2024, has placed increased responsibility on large online platforms like X to moderate illegal and harmful content. In May 2026, the European Commission announced a formal inquiry into X's compliance with the DSA, specifically regarding its handling of AI-generated child sexual abuse material (CSAM). The investigation is ongoing as of this date. (Source: https://digital-strategy.ec.europa.eu/en/news/european-commission-opens-formal-investigation-x-platform-under-digital-services-act)
* TikTok: Social media platform, source of parental control study. (https://www.tiktok.com/)
* YouGov: Market research and data analytics firm. (https://today.yougov.com/)
* X (formerly Twitter): Social media platform and developer of Grok. (https://twitter.com/)
* xAI: Elon Musk's artificial intelligence company. (https://x.ai/)
* Elon Musk: Owner of X and founder of xAI.
* The Guardian: British newspaper reporting on the Grok controversy. (https://www.theguardian.com/)
* National Center for Missing and Exploited Children (NCMEC): US-based non-profit organization. (https://www.missingkids.org/)
* European Digital Rights (EDRi): European digital rights advocacy group. (https://edri.org/)
* European Commission: Executive branch of the European Union. (https://ec.europa.eu/commission/index_en)
* Digital Services Act (DSA): EU regulation governing online platforms. (https://digital-strategy.ec.europa.eu/en/policies/digital-services-act)
