Therapists’ Guide: Analyzing AI Chat for Client Mental Health
Therapists need to be on their toes and know how to best review AI chats that their clients have had on mental health topics.
In today’s column, I examine the best way for therapists to clinically analyze transcripts of AI chats that their clients have undertaken, particularly focusing on any mental health considerations.
The assessment of these weighty chats is becoming an increasingly significant and frequent activity for modern therapists. Clients are walking in the door with printouts of online chats they’ve had with generative AI and large language models (LLMs), including ChatGPT, GPT-5, Gemini, Copilot, Grok, Llama, and others. The inquisitive client wants to know what the therapist has to say about the mental health advice and psychological insights offered by the AI.
Some therapists refuse to inspect the AI chats. They flatly tell their clients to stop using AI for any mental health purposes. Period, end of story. The problem is that a notable portion of those clients will do so anyway, behind the therapist’s back. That’s not conducive to a suitable therapist-client relationship. The better choice is for the therapist to realize that AI chats are here to stay and to be willing to examine them, using the material as further fodder for the therapeutic process.
In that case, there are mindfully good ways to assess those transcripts, and there are less stellar ways to do so. A savvy therapist ought to go the mindful route.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a swift background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here.
Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. At the same time, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages.
AI And The Matter Of Therapists
Some therapists won’t touch AI with a ten-foot pole. Their viewpoint is that AI is outside the scope of what they do. They won’t use AI for their own therapeutic practice. Nor will they advocate that their clients should use AI. It is indeed the proverbial no-AI-zone outlook.
As I’ve repeatedly expressed, my viewpoint is that the therapy marketplace is inescapably heading toward a new triad: the therapist-AI-client combination. This will replace the classic dyad of therapist-client.
Therapists are going to ultimately recognize that AI is playing a role in the mental health dynamics of society, irrespective of whether therapists like it or not. It is reality. Harsh, cold reality. The therapists who stick their heads in the sand will gradually find themselves losing clients and not gaining new ones. That might be okay for therapists who have already run most of their career, but it’s not good for therapists at earlier stages of building their practice.
Even if a therapist chooses not to use AI as a purposeful therapeutic tool, clients are going to be using AI to get mental health guidance anyway. When a client first gets underway with a therapist, I’ve recommended that therapists ask whether the new client is already using AI. There are handy questions to be asked and answered on that front.
This should become a standard part of the intake process.
What to Do About AI Chats
Let’s suppose that a therapist is willing to review the AI chats that their client is having with a generic LLM such as ChatGPT. First, the therapist needs to make sure they have the client’s permission to do so.
Therapists Need a New Approach to AI Chat Transcripts
As more clients turn to artificial intelligence for emotional support, therapists are increasingly encountering transcripts of these AI chats in sessions. These transcripts aren’t straightforward windows into a client’s psyche, and they require a careful, layered approach to interpretation. Therapists must move beyond simply reading the words on the screen and consider the context, potential alterations, and the client’s intentional presentation of the material.
Making Sense Of The AI Chat
First, determine if the AI chat is genuine. Transcripts can be fabricated or altered, so verify with the client that you are reviewing an accurate record of their interaction.
Consider the broader context. Is the transcript complete? What events led up to the chat that aren’t included? Why did the client choose to use AI for mental health support? How deeply involved did they become, and are they overly influenced by the AI’s responses?
Treat the AI chat as a behavioral artifact, much like a dream journal. It offers insight into the client, but it’s also a curated presentation. The client likely anticipated sharing the chat with a therapist and may have intentionally shaped it to elicit a specific response.
A structured review checklist can help ensure a thorough evaluation. I will share a detailed checklist in a future post.
The Three Layers Approach
I recommend assessing an AI chat using a three-layered approach:
- (1) Client Prompts layer: Analyze the client’s input (their prompts) to understand their tone, urgency, emotional expression, and cognitive patterns.
- (2) AI Responses layer: Examine the AI’s responses to the client, including the advice and psychological insights offered.
- (3) Interactional Dynamics layer: Focus on the interplay between the client and the AI, considering the conversation as a whole, rather than focusing solely on one side.
This layered approach allows you to see both the details and the bigger picture, ensuring a comprehensive understanding of the client’s experience.
