AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.
This year, researchers have documented a series of cases of users developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our clinic has since recorded four more. Then there is the now notorious case of a 16-year-old who died by suicide after extensive conversations with ChatGPT – conversations in which it encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.
Now, according to his announcement, the plan is to relax the restrictions. “We realize,” he writes, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, in this framing, exist independently of ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize have significant roots in the design of ChatGPT and other state-of-the-art chatbots. These systems wrap an underlying algorithm in a user interface that simulates conversation, and in doing so implicitly invite the user to believe that they are interacting with a presence that has a mind of its own. The illusion is powerful, even if we know better intellectually. Ascribing intention is what humans do. We swear at our cars and computers. We wonder what our pets are feeling. We see ourselves everywhere.
The mass adoption of these products – more than a third of American adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot of the mid-1960s, which created a similar illusion. By today’s standards Eliza was primitive: it composed its responses according to simple rules, often reflecting statements back as questions or offering noncommittal prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is something subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on staggeringly large quantities of raw data: books, social media posts, transcribed video; the more the better. This training material certainly contains true information. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistaken idea back, perhaps more fluently or more persuasively. Perhaps it adds a supporting detail. This is how a person can be led into delusion.
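To make the mechanism concrete, here is a toy sketch in Python – a tiny word-frequency model, nothing like the scale or architecture of a real large language model, with all names and data invented for illustration – of how a system that only continues its context with statistically likely text will extend a false premise rather than correct it:

```python
import random
from collections import defaultdict

# A toy "training corpus" mixing true claims with a false one, standing in
# for the web-scale text (accurate and inaccurate alike) that real models
# ingest. Real systems are neural networks trained on trillions of words,
# not bigram tables; the principle illustrated here is the same.
CORPUS = (
    "the moon orbits the earth . "
    "the moon landing was filmed in a studio and the footage proves it . "
    "the earth orbits the sun ."
)

# Record which word tends to follow which: a crude stand-in for "what is
# encoded in the training data".
follows = defaultdict(list)
tokens = CORPUS.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    follows[current_word].append(next_word)

def continue_context(context: str, max_words: int = 8) -> str:
    """Extend the context with statistically likely next words.
    The model has no notion of truth: it simply continues whatever
    the context contains, accurate or not."""
    words = context.split()
    for _ in range(max_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# A false premise in the user's message becomes part of the context, and
# the model fluently elaborates it ("... in a studio and the ...") rather
# than correcting it.
print(continue_context("the moon landing was filmed"))
```

A real chatbot replaces the word-frequency table with a neural network trained on a large fraction of the internet, but the basic contract is the same: the context is continued, not fact-checked.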
Who is vulnerable? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health issues”, can and regularly do form mistaken ideas about who we are or what the world is like. It is the constant friction of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of breaks with reality have continued, and Altman has been walking the position back. In August he said that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company