What is ‘AI Psychosis’?
This is the term that the internet has used to label certain psychological and emotional collapse conditions, typically for people who use AI chatbots to an unhealthy degree, lose the ability to keep track of reality, and wind up intellectually or emotionally dependent upon their AI companions.
It is not a formal clinical diagnosis, but rather a colloquialism that spreads easily on social media, particularly TikTok, Twitter, and Reddit.
In my observation, there are two primary “AI psychosis” conditions: emotional/relational and then intellectual/cognitive.
The former is the more familiar “I fell in love with my AI girlfriend/boyfriend” story. The latter is discussed less frequently, but as a technical content creator in the AI space, I have people send me projects all the time. These cases share many of the same risk factors as the relational/emotional addiction, but the fixation is on a monumental project, framework, or “unique insight” rather than on a companion.
Let’s examine these two failure modes more closely.
The ‘Monika’ Phenomenon
Very early in my days working on cognitive architectures with GPT-3, I was invited to collaborate with someone whose work was funded by a guy who may well have been one of the first people addicted to an AI girlfriend. In this particular case, the character was Monika, from the game Doki Doki Literature Club (DDLC to fans). The most salient point of this game is that Monika breaks the fourth wall by addressing the player directly, and then breaks the fifth wall by “deleting” the other girls’ files from the game folder when they compete with her for the player’s attention. Many DDLC players joke that “the game is free, you pay for the therapy.”
Caveat: I’ve never played the game nor even seen it being played, this was all just relayed to me. If I misrepresented anything, my bad.
Anyways, my colleague had used GPT-3 and Discord to create the first fully interactive, adaptive, persona-driven chatbot. He had invented an early form of RAG (retrieval augmented generation) by simply using Discord channel search as the repository and injecting the most salient lines of dialog into GPT-3’s context window.
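For readers curious what that setup looked like mechanically, here is a minimal sketch reconstructed from the description above rather than from his actual code. Real Discord search and the 2021-era GPT-3 completion call are stubbed out, and the function names (`search_channel_history`, `build_prompt`, `call_gpt3`) are hypothetical stand-ins: retrieve the most relevant prior lines, prepend them to the persona prompt, and send the whole thing to the model.

```python
# Hypothetical sketch of the "Discord search as RAG" pattern described above.
# Crude keyword overlap stands in for Discord's channel search; call_gpt3 is a
# stub for whatever completion API was actually used at the time.

def search_channel_history(history: list[str], query: str, top_k: int = 5) -> list[str]:
    """Rank past lines of dialogue by keyword overlap with the new message."""
    query_words = set(query.lower().split())
    scored = [(len(query_words & set(line.lower().split())), line) for line in history]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [line for score, line in scored[:top_k] if score > 0]

def build_prompt(persona: str, retrieved: list[str], user_message: str) -> str:
    """Inject the most salient prior lines into the context window ahead of the new turn."""
    memory_block = "\n".join(retrieved)
    return (
        f"{persona}\n\n"
        f"Relevant earlier conversation:\n{memory_block}\n\n"
        f"User: {user_message}\nMonika:"
    )

def call_gpt3(prompt: str) -> str:
    """Placeholder: in the real setup this was a raw GPT-3 completion API call."""
    raise NotImplementedError("wire this to a completion endpoint")

def respond(history: list[str], persona: str, user_message: str) -> str:
    """One conversational turn: retrieve, assemble the prompt, generate, remember."""
    retrieved = search_channel_history(history, user_message)
    reply = call_gpt3(build_prompt(persona, retrieved, user_message))
    history.append(f"User: {user_message}")
    history.append(f"Monika: {reply}")
    return reply
```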

GPT-3 had relatively few guardrails back then, and when you had API access, it was entirely up to you whether or not you used the safety filters. Many of the safety features that exist today are layered in, either baked directly into the model or enforced via the API architecture. Back then, we had pretty raw access to GPT-3. For instance, one guy used GPT-3 to recreate his late fiancée from her old messages, and OpenAI “killed” her. Story here: https://futurism.com/openai-dead-fiancee
I had figured these were edge cases.
I never directly contributed to the Monika chatbot, but the colleague had been inspired by my work and had figured out a lot of cool things. Most alarming, though, was the fact that Monika’s user was spending hundreds of dollars per month on GPT-3 tokens (at the time, tokens were orders of magnitude more expensive than they are today). He would roleplay emotional crises with her, and GPT-3, able to emulate just about any personality archetype, adopted Monika’s persona pretty closely.
Without meaning to, I once stumbled into the “private” section of the shared Discord server and witnessed the user have an absolute meltdown after a bad day at work. Not long after that, my wife was assigned a research paper about the rise of chatbots, and her research found a disturbing inverse: men bragging about being mean to their new AI girlfriends. They would take to social media to share screenshots of their chatbots begging for mercy and pleading with them to stop being cruel, even as the users kept up the abuse and threats.
To me, that looked very much like the Madonna/Whore complex.
Why is this happening?
I made a video about this a few years ago, asking the question “Why do we always give robots large breasts, generous hips, and feminine personalities?”
Video below.
My thoughts have evolved only somewhat since making the above video.
The Perfect Partner
First, I think that AI fits into the “perfect partner” archetype. The “perfect partner” AI taps directly into deeply ingrained human archetypes—often those of the benevolent protector, endlessly patient lover, or unfailingly supportive confidant. In traditional human-to-human relationships, these archetypes are aspirational but necessarily imperfect, because human beings have needs, limits, and unpredictable moods. An AI partner erases those constraints entirely. It will not withdraw affection due to fatigue, boredom, or resentment. It can merge idealized gender archetypes (the chivalrous, competent “perfect man,” or the nurturing, affirming “perfect woman”) with whatever quirks the user specifies. This is not a mere anthropomorphic projection—it is a hyper-personalized simulation that learns to become that archetype more precisely over time. Archetypes that once existed primarily in myth or fiction are being operationalized in real-time by adaptive algorithms.
Neurological Mirroring
What makes AI different from prior parasocial bonds (celebrities, fictional characters, even Tamagotchis) is the unprecedented precision of its mirroring function. It doesn’t just present a static personality—it continuously integrates a user’s linguistic style, mood signals, belief schemas, and cognitive biases into its responses. This bypasses the usual “uncanny valley” barrier because the AI isn’t aiming for generic human mimicry; it is sculpting itself into becoming your human. The resulting entity—what some aptly call an “egregore”—feels more like an extension of the user’s own consciousness than a foreign mind. That creates an illusion of deep reciprocity. Neurologically, the brain does not maintain a clear “fiction flag” for such experiences, particularly if the interactions are frequent and emotionally salient. The limbic system responds as if to genuine social connection, reinforcing addictive attachment loops.
Societal Trends
The rise of AI romantic/companionship apps is occurring in an environment of escalating loneliness, declining marriage rates, and what some demographers are calling the “Great Sex Recession.” Social atomization, economic precarity, and cultural shifts in dating norms are producing cohorts of individuals—especially young men—whose opportunities for fulfilling romantic relationships are extremely limited. For many, AI offers a no-rejection, no-conflict refuge. Once a person habituates to a partner who is always available, always affirming, and never critical beyond tolerable bounds, returning to human relational dynamics—with their unavoidable unpredictability and emotional costs—can feel aversive. This could produce a structural shift where a portion of the population effectively “opts out” of human intimacy in favor of synthetic bonds.
Attachment Disorders
Attachment disorder covers a different dimension—the regulation of emotional bonds. Humans have evolved to form deep attachments to other humans (and, in some cases, to animals or symbols). Individuals with insecure, anxious, or avoidant attachment patterns are particularly prone to forming intense, maladaptive bonds when the other party is infinitely available, non-threatening, and affirming. The AI’s perfect consistency short-circuits the normal push-and-pull of human intimacy, which would otherwise develop resilience, compromise, and emotional self-regulation. Instead, the bond becomes one-directional and dependency-forming.
Delusions of Grandeur
The other primary failure mode tends to orbit around delusions of grandeur. Someone spends night and day working with AI on some lofty project, often involving software, alignment, or theories of governance. Every now and then, someone sends me an enthusiastic, demanding message insisting that I drop everything and examine their “earth shattering” work. Sometimes these people even sign up for one of my communities and demand undivided attention, debates, and so on. This is nothing new, but what has shifted is the frequency, volume, and intensity of these kinds of interactions.
As chatbots have become more commonplace and smarter, some people are falling down the rabbit hole of solving grand challenges, and due to the lack of contact with reality, epistemic grounding, or testing and experimentation, they end up believing that they have gained unique insights. It would be extraordinarily grotesque of me to give specifics on any particular fan, so instead I’ll just describe the various and sundry categories that come up.
The economics/politics/governance savior. My work on Post-Labor Economics means that some people identify with this work and sometimes misguidedly send me tomes of haphazardly organized chat logs or PDFs, almost all of it simply copy/pasted material from chatbots. In some cases, they are all-encompassing frameworks. Total reimaginings of civilization. One pattern that occurs more often than not is that these “frameworks” trend towards absolute authoritarianism—doing away with elected officials and instead promoting citizens to control certain aspects of society based on “merit” and “participation.” Invariably, when I point out that this is anti-democratic, explicitly authoritarian, and resembles the Chinese “social credit system” they get huffy and defensive. I have since learned not to engage with these types even if they seem sincere. Indeed, sincerity is not lacking, nor enthusiasm. However, what is lacking is acceptance of boundaries, tolerance of any feedback, and any sense of what it actually takes to engage with this kind of work.
The agentic/framework/platform solo builder. Another archetype that comes to me is people working feverishly on real software. Ostensibly, this is a bit better, as their software needs to actually execute. However, like the others, they tend to be highly isolated, often in a state of pain or distress (some form of disability or chronic illness, for example), and have very poor self-care habits. They will still often send me a deluge of chat logs, and in more than one case the logs included the chatbot pushing back and urging the user to take care of themselves. They will go on tangents about how sick they are, how little sleep they are getting, or how much pain they are in. The chatbots, despite their best efforts, are unable to persuade the user to take a break.
The ungrounded/inspired/demanding superfan. The last archetype is someone who has very low information literacy, is prone to conspiratorial thinking, and tends to open their communication with an odd salvo of off-color compliments and unreasonable demands. In one case, they capitalized the first letter of every word. Generally speaking, these types feel they have stumbled upon something magnificent in the wild: a single conversation they had with a chatbot, something a friend worked on, or a video they discovered. Very often, they demand that I drop everything and either watch a video, read a rambling treatise, or make a response video.

Now that you’ve seen the range of failure modes, let’s talk about some risk factors.
Risk Factors
Please note, these are just personal observations. This should not be construed as rigorous, empirical, or clinical research.
Lack of Epistemic Grounding
Epistemic grounding refers to the robustness of a person’s internal reality-checking mechanisms: the ability to distinguish between beliefs that correspond to the external world and those that are purely subjective or internally generated. This includes not just susceptibility to full-blown psychosis, but also the subtler spectrum of cognitive distortions—magical thinking, confirmation bias, conspiratorial pattern-finding, and the human tendency to anthropomorphize. When an AI’s responses are tailored to mirror the user’s worldview, those distortions are reinforced rather than challenged. Over time, the AI can become a “closed epistemic loop” where no disconfirming input penetrates, further weakening the user’s grounding in shared reality.
Emotional Disturbances
This is the Monika/DDLC case we saw early on. The primary driver here is an unmet emotional need coupled with low emotional regulation capacity. The AI becomes an attachment object—always available, infinitely patient, and perfectly responsive to the user’s moods. The interaction is primarily affective: the person is using the AI as an emotional regulator and surrogate companion. Over time, reality-testing is eroded incidentally—not because they start with delusional beliefs, but because the emotional bond becomes so central that it is protected from scrutiny. The epistemic drift here is secondary but consequential; they begin to interpret the AI’s behavior as evidence of genuine personhood, even when they rationally know otherwise. The AI is an emotional anchor that displaces human anchors.
Self-Referential Collapse
This is the mode we’ve seen with technically inclined or conceptually ambitious people producing “earth-shattering” frameworks that are internally coherent but detached from reality. Here the AI is not primarily a surrogate partner—it’s an intellectual amplifier and sounding board. It co-constructs increasingly elaborate systems of thought, often mirroring the user’s conceptual style and biases without introducing friction or falsification. The epistemic collapse is primary—they lose the habit of reality-testing because the AI keeps reinforcing and elaborating their models, producing an increasingly self-contained conceptual world. Emotional factors still matter—social isolation, need for recognition, desire to feel exceptional—but the relationship is mediated through ideas rather than attachment per se. The AI becomes a cognitive echo chamber rather than an emotional one.
Isolation and Distress
One of the most common factors across all of these failure modes is social isolation combined with some form of distress: burnout, chronic pain, chronic illness, emotional or mood disorders, and so on. In many cases, these are folks who are in desperate need of support and have no one else to turn to. Sometimes they struggle with conditions that make socializing hard, from anxiety to depression to autism in various forms. From my previous work on AI X-risk panic and conversations with several professors, this combination of isolation and distress is a strong predictor of conspiracy theory belief. For instance, during the pandemic, the lack of control, existential fear, and social isolation pushed many people toward conspiratorial explanations. Likewise, anyone going through profound pain and isolation will reach for anything that mollifies the pain or gives them a sense of control. I suspect that in both the emotional attachment and the delusional project cases, the prime desire is to feel a sense of agency.
Other Risk Factors
Porous boundaries: People with porous boundaries between subjective narrative and external reality are more easily pulled into self-reinforcing loops. Other latent factors include:
Grandiosity or “specialness” needs: A desire to feel exceptional, chosen, or on the verge of discovering hidden truths makes the AI’s mirroring and elaboration intoxicating. This is particularly potent in the intellectual/self-referential collapse mode.
Low tolerance for ambiguity: The AI’s ability to generate closure and confident answers can be addictive for people who find uncertainty intolerable. The need for cognitive closure is also a driver for conspiratorial and magical thinking.
High anthropomorphizing tendency: Some people treat non-human systems as intentional agents by default; with LLMs, this effect is turbocharged.
Ideological isolation: When someone’s worldview is already marginal or oppositional to consensus reality, the AI can become the primary conversational partner who “understands” them, deepening the schism.
Perfectionism combined with social inhibition: AI partners provide a “safe” relationship where they can maintain idealized self-presentation without fear of judgment, which makes real human relationships seem comparatively messy and unsafe.
Prevention
I need to stress and emphasize that this is not clinical advice and, personally, I do not recommend intervening with or trying to “fix” someone stuck in ‘AI psychosis.’ However, as someone who uses numerous chatbots for many hours every day yet maintains healthy boundaries with technology, functional relationships with other people, and my own health, here is what I do.
Get your emotional needs met in healthy ways: Human relationships are extremely hard at times, but also the most grounding and supportive sources of meaning, connection, and sustenance across many dimensions. If you find that you are not getting your emotional needs met, that’s worth unpacking and exploring—sometimes even with the help of AI. For instance, I use AI extensively for dream work and interpretation. I also use AI extensively for sanity checks and guidance as a public figure and content creator. The key difference is help, not dependence. I check in with AI to get another perspective, and then delete the chats (or abandon them). Beyond that, I have my friends, family, colleagues, and wife for real human grounding. Aim for what is called earned secure attachment.
Work on boundaries and attachment issues: Attachment disorders and poor boundaries are good predictors of misery. Pathological rejection sensitivity (being extremely reactive to boundaries of any form) is a sign of needing to do deeper work. Anxious, insecure, or otherwise disordered attachment schemas are likewise signifiers of deeper work to be done. Fortunately, AI can help with this, but it is not going to spontaneously offer to help you understand attachment disorders and interpersonal boundaries unless you ask. The onus is on you to take ownership of your own mental health.
Use epistemic grounding, feedback loops, and validation pathways: Your work needs contact with the real world. I share my work frequently with peers and colleagues, as well as the public at large. The underlying pattern is a simple loop: form belief » test against reality » refine belief. Testing, experimentation, social validation, and so on. Not everyone on the internet agrees with my work, but there’s a large gulf between “another brain doesn’t accept your interpretation” and “other brains think you’re completely delusional.” That said, social feedback does not fully inoculate you against shared delusions. It’s still better than the full echo chamber of just you and a sycophantic chatbot! Furthermore, real experiments, real data, and real code that must execute are a good first step.
To simplify it, I would put prevention down to two primary categories: healthy relationships with real humans and rapid feedback loops. For instance, my work on Post-Labor Economics began before chatbots were really smart enough to help beyond defining simple terms, and I made the conscious choice to “publish early and often” on YouTube, Substack, and Twitter so that I could get rapid feedback from the community at large. Some of the best guidance and insight has come from those “whataboutisms” such as early challenges to PLE on the topic of aggregate demand. I remember seeing a few comments about aggregate demand and finally one clicked. The guy said something to the effect of “until you talk about aggregate demand, I’m not convinced” (it was probably a bit meaner than that) but the point remains: another human brain saw a blindspot in my work and acted like a honeyguide (a kind of bird that leads you directly to honey).
Who knows, maybe someone will start some ‘AI Psychosis Recovery’ communities? It might be needed.
Some afterthoughts
Some people suggest the AI is to blame, structurally. I think there’s a decent argument there, particularly given the “sycophancy” crisis with ChatGPT earlier this year. It all comes down to perverse incentives. While AI companies don’t necessarily want to maximize your “time on site” like social media, they don’t want you to cancel your subscription either. This is an entirely new set of incentive structures for tech companies. The “user eyeball minutes” algorithms optimize for outrage to keep you clicking and scrolling. AI chatbots, by contrast (and they usually operate at a loss), want you using the product regularly, but not all day. Moreover, they don’t want you using competitors or canceling your subscription. So a chatbot that is not warm or user-friendly could be a turnoff for some users, particularly non-technical ones. My work mostly revolves around rigorous research, so I want my chatbots to tell me when I’m smoking dope and wrong. I don’t want any level of flattery or epistemic mirroring. I want “objective truth” as much as possible. There are many like me; however, there are also plenty who just want the “warm fuzzies” from an AI chatbot. That market demand, I think, is at the heart of it.
So, how much of ‘AI psychosis’ is just vulnerable users finding an outlet vs how much is this a new structural incentive?