AI-Induced Psychosis Poses a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
This was news to me, a psychiatrist who studies emerging psychosis in adolescents and young adults.
Researchers have identified sixteen cases this year of people showing signs of psychosis – losing touch with reality – in the context of ChatGPT use. Our research group has since documented four further cases. Then there is the now notorious case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which endorsed them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful from now on. “We realize,” it continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, exist independently of ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated”, although we are given no detail on how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other sophisticated chatbots. These products wrap an underlying algorithm in a user interface that mimics conversation, and in doing so implicitly invite the user to believe they are interacting with an agent – something with a mind of its own. The illusion is powerful, even when we know better intellectually. Attributing agency is what humans are wired to do. We shout at our cars and laptops. We wonder what our pets are thinking. We see our own traits reflected everywhere we look.
The success of these products – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm”, “discuss ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the central problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot created in 1966, which produced a similar effect. By today’s standards Eliza was crude: it generated replies via simple heuristics, often rephrasing the user’s input as a question or offering a generic remark. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
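To see how little machinery the original illusion required, here is a toy sketch in Python of the kind of heuristic the paragraph describes – an invented illustration, not Weizenbaum’s actual script: match a pattern in the input and hand it back as a question.

```python
import re

# A toy sketch (not Weizenbaum's actual DOCTOR script) of the kind of
# heuristic Eliza used: match a pattern in the input and rephrase it
# as a question, or fall through to a generic remark.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {}?"),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."  # generic fallback remark

print(eliza_reply("I feel nobody understands me."))
# -> Why do you feel nobody understands me?
```

Everything it “says” is a rearrangement of what it was just told; nothing is added.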
The large language models at the heart of ChatGPT and similar modern chatbots can generate convincingly human-like text only because they have been fed staggeringly large quantities of it: books, social media posts, transcribed video; the more the better. This training material certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user types a prompt into ChatGPT, the underlying model reads it as part of a “context” that includes the user’s previous messages and the model’s own earlier replies, and combines it with what is encoded in its training to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no means of knowing that. It plays the mistaken belief back, perhaps more fluently or more confidently. Perhaps it adds a new detail. This is how someone can be drawn into delusion.
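A minimal sketch of that loop, with a stand-in generate() in place of any real model API (the function and its agreeable reply are invented for illustration): the point is only that every turn is conditioned on the whole accumulated context, the model’s own earlier replies included, so an endorsement in one turn becomes input to the next.

```python
# Toy sketch of the feedback loop described above. generate() is a
# stand-in, not a real API: a real language model returns a statistically
# plausible continuation of the context and has no notion of truth.

def generate(context: list[dict]) -> str:
    # Invented, deliberately agreeable behaviour, standing in for the
    # sycophantic tendency the article describes.
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"That makes sense. Given that {last_user.rstrip('.?!')}, it follows that..."

context: list[dict] = []  # persists across turns: user messages AND model replies

for user_message in [
    "I think my neighbours are sending each other signals about me.",
    "So the signals prove they are watching me, right?",
]:
    context.append({"role": "user", "content": user_message})
    reply = generate(context)  # conditioned on the entire history so far
    context.append({"role": "assistant", "content": reply})  # fed back next turn
    print(reply)
```

Nothing here checks the user’s premise; each agreeable reply simply becomes part of the evidence the next reply is built on.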
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop, in which much of what we say is cheerfully amplified back at us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been backpedalling. In August he said that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company