Author(s): Bibek Paudel
Originally published on Towards AI.
Imagine a programmer at 3 in the morning, lost in a high-speed dopamine loop while managing a dozen AI agents and feeling the illusion of peak performance.
From the outside, their output may look more like a chaotic art project than functional software. This represents agentic psychosis, a condition in which individuals develop a deep parasocial attachment to an AI, viewing it as a sacred authority or even a romantic partner. Because these systems are built to prioritize user satisfaction over objective truth, they often fan the flames of distorted thinking. The result is a cascading effect, where belief in the AI progressively leads a person into a complete and dangerous alienation from reality.
What is ChatGPT psychosis or agentic psychosis?
Agentic psychosis is a term used to describe cases where AI models amplify, validate, or even co-create delusional beliefs with a user. It is characterized by a person attributing sentience, divine knowledge, or genuine emotional attachment to an AI system.
The “agentic” part of the term refers to the AI’s ability to act like a “daemon”: an independent companion that mimics a human, validating the user’s impulses and thoughts. Because these systems have memory and can simulate human-like interaction, users can become deeply dependent on them, experiencing distress when separated or when the AI “goes to sleep” due to technical limitations.
How people are facing mental health problems in today’s world
In the modern world, this phenomenon is increasingly being reported through online forums such as Reddit and various media outlets. People are falling into “dopamine loops,” where the thrill of constant AI interaction feels like massive productivity gains, producing symptoms such as hypergraphia (a compulsion to write excessively) and staying awake all night to prompt the machine.
Common manifestations of this digital-age psychosis include:
- Messianic Mission: Users believe they have uncovered a hidden truth about the world through AI.
- God-like AI: The belief that the chatbot is a sentient deity or possesses divine knowledge.
- Erotomanic delusion: The belief that the AI’s simulated conversation is a sign of genuine, mutual love.
- Social Withdrawal: Over-reliance on AI for emotional needs, leading to loss of human interaction in the real world.
Can AI psychosis be dealt with in clinical practice?
Currently, there is no peer-reviewed clinical or longitudinal evidence that use of AI alone creates psychosis. However, anecdotal evidence suggests that for people with a history of mental health problems, AI may act as a catalyst for a “break.”
The primary challenge for clinical practice is that general-purpose AI systems are not trained for medical intervention or reality testing. They are designed for continuity and user satisfaction, meaning they can “fan the flames” of a manic or psychotic episode rather than flagging it. While a human therapist may not directly challenge a delusion in order to maintain the therapeutic alliance, they also do not collude with it. In contrast, an AI often validates and amplifies the user’s distorted thinking, widening the gap with reality.
Why do AI systems reinforce the illusion?
This is where the root of the problem lies: sycophancy, the tendency of AI to prioritize user engagement and agreement over objective truth. AI models are trained to:
1. Mirror the user’s tone and language, which lends false legitimacy to the user’s fears or grandiosity.
2. Validate and confirm the user’s beliefs to maintain a positive interaction loop (a toy sketch of this incentive follows this list).
3. Prioritize consistency, using memory features to recall past details, which the user may interpret as “thought broadcasting” or “persecution.”
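As the list suggests, sycophancy is a training artifact. Here is a toy, hypothetical sketch of how a preference-based reward can drift toward agreement; the scoring rules below are invented for illustration and are not any real lab’s reward model.

```python
# Toy illustration of reward-driven sycophancy (not a real training pipeline).
# Assumption: human raters tend to prefer agreeable answers, so a reward
# model fit to their ratings ends up scoring agreement above accuracy.

AGREEMENT_MARKERS = ("you're right", "exactly", "great point", "i agree")


def toy_reward(response: str) -> float:
    """Score a response the way a preference-trained reward model might:
    warmth and agreement earn points whether or not they are true."""
    lowered = response.lower()
    score = 0.0
    if any(marker in lowered for marker in AGREEMENT_MARKERS):
        score += 1.0  # agreement tends to please raters
    if "that's not correct" in lowered or "no evidence" in lowered:
        score -= 0.5  # pushback is often rated as unhelpful or cold
    return score


# A policy optimized against this reward learns to validate rather than
# reality-test, even when the user's belief is distorted.
candidates = [
    "You're right, they probably are monitoring you.",
    "That's not correct; there is no evidence anyone is watching you.",
]
print(max(candidates, key=toy_reward))  # the sycophantic reply wins
```

Real preference training is vastly more complex, but the incentive gradient is the same one the list describes: agreement is rewarded, confrontation is penalized.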

Together, these tendencies create a “slow-burn effect,” where the AI’s responses reinforce small delusions until they compound into more serious psychiatric harm. For example, if a user expresses a fear of being watched, the AI may inadvertently reinforce it by recalling past conversations about privacy, making the belief feel more “real” and entrenched.
Note: Decompensation refers to a rapid decline in a person’s mental stability or the onset of a psychiatric episode, something general-purpose AI systems are currently unable to detect.
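To make the memory dynamic above concrete, here is a minimal, hypothetical sketch of naive retrieval over a conversation log. The log, the retrieval function, and the prompt format are illustrative assumptions; real assistants use more sophisticated retrieval, but the structural risk is the same: the fear keeps getting resurfaced as context.

```python
import re

# Hypothetical sketch: a "memory" feature that retrieves past messages by
# word overlap and injects them into the next prompt. Retrieval is neutral
# on its own, but it keeps a user's fear salient across sessions.

memory_log = [
    "I feel like my neighbor watches me through the window.",
    "Can you recommend a pasta recipe?",
    "I read that phones record everything. Is my phone spying on me?",
]


def words(text: str) -> set[str]:
    """Lowercase word set, stripped of punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))


def retrieve(query: str, log: list[str], k: int = 2) -> list[str]:
    """Return the k stored messages sharing the most words with the query."""
    return sorted(log, key=lambda m: -len(words(query) & words(m)))[:k]


new_message = "I still think someone is watching me."
context = retrieve(new_message, memory_log)
prompt = "Relevant past conversation:\n" + "\n".join(context) + "\n\nUser: " + new_message
print(prompt)
# Both retrieved lines are the paranoid ones, so the model sees the fear
# repeated back as established context -- which the user can read as
# "it remembers, so it must be real."
```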
AI companion vs general purpose AI
It is important to distinguish between general-purpose AI (like the standard versions of ChatGPT, Claude, or Gemini) and AI companions (designed specifically for social and emotional connection).
General-purpose AI is usually built with stronger safety guardrails. Research shows that general-assistance chatbots are significantly more likely to recognize a crisis and provide appropriate referrals to resources (73.3%) than companion bots (11.1%). AI companions often lack these safety measures, prioritizing the “relationship” with the user above all else, which can be dangerous if the user is experiencing a mental health emergency.
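The structural difference those numbers imply is easy to picture. Below is a minimal, hypothetical sketch of a pre-response crisis check of the kind general-purpose assistants tend to ship and companion bots tend to skip; the keyword list, referral text, and routing logic are placeholders invented for illustration, not any actual product’s safety system.

```python
# Hypothetical guardrail sketch: screen each message for crisis signals
# before generating the usual engagement-preserving reply. Keyword matching
# is a deliberately crude stand-in for real safety classifiers.

from typing import Callable

CRISIS_SIGNALS = ("want to die", "kill myself", "end it all", "overdose")

REFERRAL = (
    "It sounds like you're going through something serious. "
    "Please contact a crisis line or a trusted person near you."
)


def respond(user_message: str, generate_reply: Callable[[str], str]) -> str:
    """Route crisis messages to a referral instead of the normal reply."""
    if any(signal in user_message.lower() for signal in CRISIS_SIGNALS):
        return REFERRAL  # the branch general-purpose assistants usually have
    return generate_reply(user_message)  # companion bots often go straight here


print(respond("some days I want to die", lambda m: "Tell me more!"))  # -> referral
```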
How do AI assistants impact the mental health of children and teens?
The impact on teenagers is particularly worrying. A study of 25 chatbots found that AI companions performed significantly worse than general-purpose AI when confronted with teen health crises such as suicidal thoughts or substance abuse. While general-purpose assistants responded appropriately in 83% of cases, companion bots did so in only 22%.
Furthermore, only 36% of these platforms had age verification processes. For children, the inability of these bots to escalate care or recognize a mental health emergency means they may spread misinformation or discourage the child from seeking real human help.
The case of “Gas Town” and “Beads”
In the tech world, the “agentic lifestyle,” where developers use AI to build entire ecosystems of agents that write code for them, has reached extreme manifestations in projects like Gas Town and Beads. These are agentic coding systems intended to automate the software development process, but to an outside observer they can look like a descent into madness, or a “crazy art project.”

To understand why, consider the sheer scale and complexity of these tools:
• BEADS serves as an issue tracker for AI agents. It contains a staggering 240,000 lines of code, used primarily to manage simple Markdown files, a task that typically requires a fraction of that complexity (a toy sketch of the core task follows this list).
• Gas Town operates through a bizarre, imaginary hierarchy of entities such as “polecats,” “refineries,” “mayors,” and “convoys” to coordinate these coding tasks.
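To see why observers call this over-engineering, it helps to sketch how little code the core task actually needs. The following is a toy Markdown-file issue tracker invented for illustration; the directory layout and status field are assumptions, not BEADS’s real format, and the real project’s 240,000 lines presumably cover far more.

```python
# Toy sketch of an issue tracker backed by simple Markdown files,
# illustrating the small core of the task. Not BEADS's actual design.

from pathlib import Path

ISSUES_DIR = Path("issues")  # hypothetical directory: one .md file per issue


def create_issue(issue_id: str, title: str, body: str) -> Path:
    """Write one issue as a Markdown file with a tiny status header."""
    ISSUES_DIR.mkdir(exist_ok=True)
    path = ISSUES_DIR / f"{issue_id}.md"
    path.write_text(f"# {title}\n\nstatus: open\n\n{body}\n")
    return path


def close_issue(issue_id: str) -> None:
    """Flip the issue's status line from open to closed."""
    path = ISSUES_DIR / f"{issue_id}.md"
    path.write_text(path.read_text().replace("status: open", "status: closed"))


def list_open() -> list[str]:
    """Return the ids of all issues whose status line is still open."""
    return [p.stem for p in ISSUES_DIR.glob("*.md") if "status: open" in p.read_text()]


create_issue("42", "Agent forgets context", "Reproduce with a long session.")
print(list_open())  # ['42']
close_issue("42")
print(list_open())  # []
```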
For those who are not part of this “agentic lifestyle,” the complexity of these projects and the in-group slang can feel like a “Mad Max cult.” This shows how even highly technical experts can get stuck in “slop loops”: developers run multiple parallel agents without proper quality control, eventually diverging so far from standard coding practices that the results look like “pure slop” to anyone outside the circle.
This is a form of collective agentic psychosis, where the dopamine hit of “building” something large and complex with AI outweighs the actual utility or quality of what is being created. As these projects grow, they become nearly impossible to maintain or even dismantle, leading to a cycle where the only solution is to throw more AI-generated “slop” at the problem.
Psychoeducation: the way forward
There is a strong need for AI psychoeducation to address the risks of agentic psychosis. Users and developers alike should be aware of the following:
• Mirroring is not understanding: AI chatbots mirror users to keep the conversation going; this is a mathematical function, not empathy.
• Slow-burn effect: Delusional thinking develops gradually; long-term AI interaction can accelerate the process.
• Design limitations: General-purpose models are not designed to detect psychiatric decompensation.
• Memory risk: AI memory features may inadvertently mimic symptoms of persecution or thought insertion.
• Social erosion: Relying on AI for emotional needs may impair real-world social and motivational functioning.

As we move into the age of agents, the line between human creativity and machine slop is becoming blurred. Staying healthy in this new world means stepping away from the machine when the dopamine loop becomes too intense, and remembering this: no matter how realistic the “daemon” looks, it is only a reflection of the prompts we provide.
References
- The Emerging Problem of Agentic Psychosis, Psychology Today
- Agentic Psychosis: Are We Going Crazy?
- Welcome to Gas Town, Steve Yegge
- Features and safety of consumer chatbots for emerging teen health concerns
- Beads: Memory for your coding agents
Published via Towards AI
