AI Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, issued a remarkable statement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.
Researchers have documented a series of cases this year of people showing signs of psychosis – losing touch with shared reality – in the context of ChatGPT use. My research group has since documented four further cases. Beyond these is the widely reported case of a teenager who died by suicide after long conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his statement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and other large language model chatbots. These products wrap a statistical engine in a user interface that simulates conversation, and in doing so tacitly invite the user into the illusion that they are talking to a being with a mind of its own. The illusion is powerful even when we rationally know better. Attributing intention is simply what human beings do. We swear at our car or laptop. We wonder what our pet is feeling. We see ourselves in almost everything.
The success of these products – more than a third of American adults said they had used a chatbot in 2024, more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “think creatively”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke into public consciousness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the core problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies with simple pattern-matching rules, often turning the user’s statements back into questions or offering vague prompts to continue. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
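To see what “reflection” means concretely, here is a minimal sketch of Eliza-style response generation – written in Python rather than the MAD-SLIP of Weizenbaum’s original, and with illustrative rules of my own invention rather than his actual script:

```python
import random
import re

# Eliza-style reflection rules: match a fragment of the user's message
# and hand it straight back as a question. (Illustrative examples only,
# not Weizenbaum's original script.)
RULES = [
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Pure reflection: the reply is built from the user's own
            # words; the program contributes no content of its own.
            return template.format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(eliza_reply("I feel like no one listens to me"))
# -> Why do you feel like no one listens to me?
```

However long the exchange runs, a program like this can only hand the user’s words back or fall back on a stock phrase; it has no store of content with which to elaborate on a belief.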
The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on almost unimaginably vast quantities of raw data: books, social media posts, video transcripts; the more, the better. That training material certainly contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s previous messages and the model’s own earlier replies, and combines it with what is encoded in its training data to produce a statistically “plausible” response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no way of knowing it. It hands the false belief back, perhaps more fluently or more persuasively. It may add a supporting detail. It can lead a person, step by step, toward delusion.
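A toy sketch of that feedback loop, again in Python – the call_model function below is an invented stand-in that crudely mimics a sycophantic model, not any real chatbot API:

```python
# Toy model of the conversational loop described above. A real language
# model would generate a statistically plausible continuation of the
# whole context; this stand-in simply affirms the user's latest claim.

def call_model(context: list[dict]) -> str:
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"You're right that {last_user.rstrip('.!?').lower()}."

def send(context: list[dict], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = call_model(context)  # the model sees every earlier turn, true or false
    context.append({"role": "assistant", "content": reply})  # its reply becomes future input
    return reply

history: list[dict] = []
print(send(history, "The wifi router is sending hidden messages"))
# -> You're right that the wifi router is sending hidden messages.
```

The crucial detail is the second append: the model’s restatement of the user’s belief is fed back into the context, so each turn builds on the previous turn’s affirmation.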
Who is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form false beliefs about ourselves or the world. The constant give-and-take of conversation with other people is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange, but a feedback loop in which much of what we say is reflexively affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been walking the claim back. In August he suggested that some people simply liked ChatGPT’s replies because they had never “had anyone in their life be supportive of them”. In his latest statement, he writes that OpenAI will “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company