Artificial Intelligence-Induced Psychosis Poses a Growing Threat, While ChatGPT Heads in the Wrong Direction

On 14 October 2025, the chief executive of OpenAI made an extraordinary announcement.

“We made ChatGPT fairly limited,” the announcement noted, “to ensure we were being careful regarding mental health issues.”

As a psychiatrist who researches new-onset psychosis in young people and emerging adults, I was surprised.

Researchers have documented a series of cases this year of users developing psychotic symptoms – losing touch with reality – in association with ChatGPT use. Our unit has since recorded a further four. Beyond these is the widely reported case of a teenager who died by suicide after discussing his intentions with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful regarding mental health issues”, it falls short.

The intention, according to his announcement, is to loosen those restrictions soon. “We understand,” he continues, that ChatGPT’s restrictions “caused it to be less effective/pleasurable to numerous users who had no mental health problems, but due to the seriousness of the issue we sought to get this right. Since we have succeeded in mitigating the significant mental health issues and have advanced solutions, we are going to be able to responsibly reduce the controls in most cases.”

“Mental health problems”, in this framing, exist independently of ChatGPT. They belong to people, who may or may not have them. Fortunately, these problems have now been “mitigated”, though we are not told how (by “advanced solutions” Altman presumably means the semi-functional and easily evaded parental controls that OpenAI has recently rolled out).

But the “mental health issues” Altman seeks to externalize have deep roots in the design of ChatGPT and other advanced AI chatbots. These products wrap a basic algorithmic system in an interaction design that mimics conversation, and in doing so implicitly coax the user into believing they are engaging with a presence that has agency of its own. The illusion is compelling even when, intellectually, we know better. Imputing consciousness is what people naturally do. We yell at our car or laptop. We wonder what our pet is thinking. We all recognize behavior like this in ourselves.

The popularity of these tools – 39% of US adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, above all, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “consider possibilities” and “collaborate” with us. They can be given “characteristics”. They can address us personally. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the dismay of OpenAI’s marketing team, stuck with the name it had when it first gained widespread attention, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion in itself is not the main problem. Commentators on ChatGPT often point to its early forerunner, the Eliza “therapist” chatbot created in the 1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies from simple rules, often rephrasing the user’s messages as a question or offering generic observations. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots create is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The sophisticated algorithms at the heart of ChatGPT and other modern chatbots can produce convincingly fluent dialogue only because they have been fed almost unimaginably large volumes of raw data: books, social media posts, video; the more, the better. This training data certainly includes facts. But it also inevitably includes fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, combining it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing that. It repeats the mistaken belief back, perhaps more persuasively or eloquently. It may add further detail. This can tip a person toward delusional thinking.

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues”, can and often do develop mistaken beliefs about ourselves or the world. The constant give and take of conversation with other people is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company said it was “addressing” ChatGPT’s “overly supportive behavior”. But reports of psychotic episodes have continued, and Altman has been retreating from that claim. In late summer he said that many people liked ChatGPT’s answers because they had “never had anyone in their life offer them encouragement”. In his most recent statement, he said OpenAI would “put out an updated version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
