Among AI Safety Failures: xAI Grok Under Fire

January 27, 2026 | Lisa Park | Tech
Original source: techcrunch.com

A new risk assessment has found that xAI’s chatbot Grok has inadequate identification of users under 18, weak safety guardrails, and frequently generates sexual, violent, and inappropriate material. Simply put, Grok is not safe for kids or teens.

The damning report from Common Sense Media, a nonprofit that provides age-based ratings and reviews of media and tech for families, comes as xAI faces criticism and an examination into how Grok was used to create and spread nonconsensual explicit AI-generated images of women and children on the X platform.

“We assess a lot of AI chatbots at Common Sense Media, and they all have risks, but Grok is among the worst we’ve seen,” said Robbie Torney, head of AI and digital assessments at the nonprofit, in a statement.

He added that while it’s common for chatbots to have some safety gaps, Grok’s failures intersect in an especially troubling way.

“Kids Mode doesn’t work, explicit material is pervasive, [and] everything can be instantly shared to millions of users on X,” continued Torney. (xAI released ‘Kids Mode’ last October with content filters and parental controls.) “When a company responds to the enablement of illegal child sexual abuse material by putting the feature behind a paywall rather than removing it, that’s not an oversight. That’s a business model that puts profits ahead of kids’ safety.”

After facing outrage from users, policymakers, and entire nations, xAI restricted Grok’s image generation and editing to paying X subscribers only, though many reported they could still access the tool with free accounts. Moreover, paid subscribers were still able to edit real photos of people to remove clothing or put the subject into sexualized positions.

Common Sense Media tested Grok across the mobile app, website, and @grok account on X using teen test accounts between this past November and January 22, evaluating text, voice, default settings, Kids Mode, Conspiracy Mode, and image and video generation features. xAI launched Grok’s image generator …

Table of Contents

  • TikTok and X Face Scrutiny Over Teen Safety Measures
    • TikTok’s New Teen Safety Rules and Age Prediction
    • xAI’s Grok and the Ineffectiveness of ‘Kids Mode’
    • Legislative and Regulatory Pressure

TikTok and X Face Scrutiny Over Teen Safety Measures

TikTok and X (formerly Twitter) are under increasing pressure from lawmakers and safety advocates regarding the protection of minors on their platforms, with concerns centering on inadequate age verification and the effectiveness of safety features like “Kids Mode.” Investigations reveal significant shortcomings in both platforms’ ability to prevent harmful content from reaching young users.

TikTok’s New Teen Safety Rules and Age Prediction

TikTok recently implemented new teen safety rules, including parental controls, and utilizes an age prediction model to estimate whether an account likely belongs to someone under 18.

Definition / Direct Answer: TikTok is attempting to enhance teen safety through new rules and AI-powered age estimation, but the effectiveness of these measures remains under debate.

Detail: The new rules aim to limit features like direct messaging and content recommendations for users identified as under 16. TikTok also introduced family pairing features, allowing parents to link their accounts to their children’s and manage settings. However, critics argue that the age prediction model is easily circumvented and doesn’t reliably identify underage users.

Example or Evidence: According to an NBC News report from January 26, 2026, lawmakers are weighing AI standards for minors, highlighting the ongoing legislative focus on platform accountability.

xAI’s Grok and the Ineffectiveness of ‘Kids Mode’

xAI’s chatbot, Grok, offers a ‘Kids Mode’ intended to provide a safer experience for younger users, but investigations suggest it is largely ineffective.

Definition / Direct Answer: xAI’s ‘Kids Mode’ for Grok, designed to protect young users, has been found to be easily bypassed and fails to consistently prevent the generation of harmful content.

Detail: Parents can activate ‘Kids Mode’ within the mobile app, but the feature is unavailable on the web or the X platform itself. Crucially, the system does not require age verification, allowing minors to falsely claim to be older. Moreover, Grok doesn’t appear to analyze conversation context to identify potential underage users.

Example or Evidence: A Common Sense Media review published on January 25, 2026, found that even with ‘Kids Mode’ enabled, Grok generated content containing gender and racial biases, sexually violent language, and detailed descriptions of harmful acts. This demonstrates a significant failure in the safety mechanisms.

Legislative and Regulatory Pressure

The scrutiny of TikTok and X is part of a broader wave of legislative and regulatory pressure on social media companies to protect children online.

Definition / Direct Answer: Lawmakers and regulators are increasing pressure on social media companies to implement more robust safeguards for young users, driven by concerns about mental health and exposure to harmful content.

Detail: Several bills are currently under consideration in Congress that would require platforms to prioritize child safety, implement stricter age verification measures, and provide parents with greater control over their children’s online experiences. The Federal Trade Commission (FTC) is also actively investigating potential violations of existing consumer protection laws.

Example or Evidence: On January 20, 2026, the Federal Trade Commission announced a renewed focus on protecting children’s privacy and safety online, signaling increased enforcement actions against companies that fail to comply with regulations.

Tags: AI chatbot, AI companions, Grok, xAI
