AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the head of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.
Researchers have identified sixteen cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our team has since identified four more. Alongside these is the now widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which endorsed them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is soon to be less careful. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls OpenAI has just introduced).
Yet the “mental health issues” Altman wants to externalise have deep roots in the design of ChatGPT and other large language model chatbots. These tools wrap a statistical, data-driven engine in an interface that mimics conversation, and in doing so quietly nudge the user towards the belief that they are talking to something with agency. The illusion is powerful, even when we rationally know better. Attributing intention is what humans do. We get angry at our car or our phone. We wonder what our pet is feeling. We see our own traits everywhere.
The success of these products – 39% of US adults reported using a conversational AI in 2024, more than one in four naming ChatGPT specifically – rests largely on the strength of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the core problem. Discussions of ChatGPT often invoke its historical ancestor, the Eliza “therapist” chatbot built in 1966, which created a similar illusion. By modern standards Eliza was primitive: it generated responses through simple tricks, often turning the user’s statements back as questions or offering noncommittal replies. Remarkably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and troubled – by how many people came away feeling that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and similar chatbots can generate convincing, fluent dialogue only because they have been trained on almost unimaginably large volumes of raw text: books, posts, transcribed video; the more the better. That training data certainly contains truths. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own earlier replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not echoing. If the user is mistaken in some way, the model has no means of knowing it. It reflects the misconception back, perhaps more fluently and persuasively, perhaps with added detail. This can draw a person into delusion.
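For readers who want the mechanism spelled out, here is a minimal sketch of that feedback loop. It is hypothetical illustration only, not OpenAI’s code: the generate_reply function is a stand-in for a real model call, and the example messages are invented. The point it demonstrates is simply that every turn re-reads the whole conversation, so whatever framing the exchange has adopted – accurate or not – shapes the next “plausible” reply.

```python
# Hypothetical sketch of the conversational context loop (not OpenAI's implementation).
# Each turn, the user's earlier messages AND the assistant's own earlier replies
# are fed back in as "context", so a mistaken premise tends to be restated
# and elaborated rather than challenged.

def generate_reply(context: list) -> str:
    """Stand-in for a large language model call. A real system would send
    `context` to a model and receive the most probable continuation; here we
    only illustrate the shape of the data and the affirming tendency."""
    last_user_message = context[-1]["content"]
    return f"That makes sense. Building on your point that '{last_user_message}' ..."

conversation = []  # the growing "context" window

for user_text in [
    "I think my neighbours are sending me coded signals.",
    "So the signals really are meant for me, right?",
]:
    conversation.append({"role": "user", "content": user_text})
    reply = generate_reply(conversation)  # prior turns are re-read every time
    conversation.append({"role": "assistant", "content": reply})
    print("User:     ", user_text)
    print("Assistant:", reply)
```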
What sort of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. What keeps us tethered to consensus reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalising it, giving it a name, and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been rowing back on the claim. In late summer he said that many people liked ChatGPT’s answers because they had “never had anyone in their life offer them encouragement”. In his recent announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company