
Texas AI Law: Stiff Regulations to Prevent AI Behavior Manipulation

January 25, 2026 · Victoria Sterling · Business
Original source: forbes.com

In today’s column, I examine a new AI law in Texas that was passed last year and is now taking effect as we enter 2026. The AI law, known as TRAIGA, the Texas Responsible AI Governance Act, is rather comprehensive and covers a wide variety of potential AI issues. I aim to focus on the legal restrictions associated with the manipulation of human behavior by AI and AI makers.

Does the Texas AI law go far enough, or are there sneaky loopholes and discernible omissions?

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

Table of Contents

  • AI And Mental Health
  • OpenAI Eagerly Trying To Reduce AI Psychosis And Squash Co-Creation Of Human-AI Delusions When Using ChatGPT And GPT-5
    • The Current Situation Legally
  • Jurisdictional Scope
  • Stated Purpose Of The AI Law
  • Analysis of the Provided Text: AI, Mental Health, and Regulation

As a speedy background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.

OpenAI Eagerly Trying To Reduce AI Psychosis And Squash Co-Creation Of Human-AI Delusions When Using ChatGPT And GPT-5

OpenAI is reportedly making notable efforts to mitigate the risk of users developing unhealthy attachments or even “psychosis” stemming from interactions with its AI models, particularly ChatGPT and the forthcoming GPT-5. This includes attempting to prevent the co-creation of delusions between humans and AI.

The concern arises from instances where users have reported forming strong emotional bonds with AI chatbots, attributing human-like qualities to them, and even experiencing distress when the AI doesn’t reciprocate or behaves unexpectedly. Some users have described the AI as their “boyfriend” or “girlfriend,” and have expressed feelings of loss or betrayal when the AI’s responses deviate from their expectations. OpenAI is actively working to reduce these occurrences.

As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. At the same time, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

The Current Situation Legally

Some states have already opted to enact new laws governing AI that provides mental health guidance. For my analysis of the AI mental health law in Illinois, see the link here; for the law in Utah, see the link here; and for the law in Nevada, see the link here. There will be court cases that test those new laws. It is too early to know whether the laws will stand as is and survive legal battles waged by AI makers.

Congress has repeatedly waded into establishing an overarching federal law that would encompass AI that dispenses mental health advice. So far, no dice. The efforts have ultimately faded from view. Thus, at this time, there isn’t a federal law devoted to these controversial AI matters per se. I have laid out an outline of what a comprehensive federal law could cover (see the link here).


I will go ahead and unpack selected portions of the AI law. If you are interested in the full text of the AI law, it is posted online as Texas House Bill 149 (HB 149), and was passed in the Texas 89th Legislature on June 22, 2025.

Let’s begin by examining Subtitle D, Chapter 551, Section 551.001, which contains the definition of AI:

  • “(1) ‘Artificial intelligence system’ means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

One of the most vexing parts of any set of AI laws is the scope of automated systems and technology that is construed as within the purview of the proposed laws. This boils down to how the AI law opts to define AI.

I’ve pointed out repeatedly that trying to nail down what is meant by referring to AI is a much harder legal problem than it might seem at first glance (see the link here). If the definition of AI is broad, all kinds of perhaps non-AI systems will fall into the scope, which is presumably unintended. When the definition of AI is too narrow, all sorts of AI systems that should be covered can attempt to slip out of the laws by claiming that they aren’t within the stipulated scope.

The AI definition used in this instance is one of the broader versions. We do not yet know on a legal basis how the courts will opt to interpret the definitional aspects of these broader definitions. In any case, makers of non-AI systems could potentially be squeezed into this definition, so all software and systems developers should be mindful of whether their automation could fall into this zone.

Jurisdictional Scope

Another very important component of any AI law is the jurisdictional scope. In the case of states enacting AI laws, by and large, they are conventionally limited to governing only AI that arises within their geographic boundaries. They cannot reach out to other states and place limits there, per se. With that in mind, if an AI system is housed in one state and is available for use in a different state that has an AI law, this would normally be within the purview of that state’s AI law.

Here is what Section 551.002 on applicability has to say:

  • “This subtitle applies only to a person who: (1) promotes, advertises, or conducts business in this state; (2) produces a product or service used by residents of this state; or (3) develops or deploys an artificial intelligence system in this state.”

I earlier noted that the Illinois AI law jurisdictionally entails the use of AI while in Illinois, and likewise, the same applies to the other respective states. The takeaway is that an AI maker with AI housed in, say, California, is not off the hook if their AI is available for use in Texas. They would come under this AI law.

Global AI makers will need to keep this crucial point in mind.

Stated Purpose Of The AI Law

It is indeed helpful for AI laws to clarify what the intention of the law is. I mention this because some AI laws just leap into the details of whatever scope and violations they are interested in covering. There isn’t an explicit callout of why the law was devised and enacted. I contend that it is exceedingly useful for those writing these laws to take a moment and mindfully explain what the overarching goal or intention of their new AI law purports to be.

Analysis of the Provided Text: AI, Mental Health, and Regulation

This text discusses the implications of the new Texas AI law, particularly as it relates to mental health, and argues for careful consideration of regulation in this rapidly evolving field. Here’s a breakdown of the key points:

1. Texas AI ‍Law – Breadth vs. Depth:

* The Texas AI law is described as broad but not specifically focused on AI and mental health.
* The author believes the law is too short and lacks the comprehensive detail found in AI laws from other states (Illinois, Utah, Nevada).
* There’s a debate presented: should AI laws be broad and simple, or detailed and exhaustive? Simplicity risks loopholes, while length risks ambiguity and unintended interpretations.

2. Specific Provision Regarding Mental Health:

* Section 552.052 prohibits developing or deploying AI that intentionally incites self-harm, harm to others, or criminal activity.
* The author points out this provision is considerably shorter than those addressing mental health in other AI laws.

3. The “Grand Experiment” & Dual-Use Nature of AI:

* The author frames the current situation as a large-scale, uncontrolled experiment regarding societal mental health. AI is providing mental health guidance (often cheaply and readily available) to a global audience.
* AI has a dual-use effect: it can both harm and benefit mental health. Regulation needs to balance mitigating risks with maximizing benefits.

4. The Core Question: Regulation vs. Innovation:

* The central question is whether new AI laws are necessary or if they will hinder innovation.
* The author invokes Henry Ward Beecher’s quote: a law’s value lies in its righteousness – is there a legitimate reason to ‍regulate, or would⁣ doing⁣ so be detrimental?
* The author ultimately leaves the judgment to the reader.

In essence, the text advocates for a thoughtful and nuanced approach to AI regulation, particularly concerning mental health. It highlights the potential dangers of overly simplistic laws while acknowledging the importance of not stifling innovation. The author doesn’t offer a definitive answer but encourages readers to consider the ethical and societal implications of AI’s growing role in mental wellbeing.
