Last updated: April 2, 2026. This article is regularly updated as new research emerges.

If you’ve landed on this page, you’re probably looking for answers. Maybe you’ve noticed something off about how much time you or someone you love is spending with AI. Maybe you read a headline that scared you. Maybe you just have a feeling.

I want to give you the clearest picture I can of what’s happening, what the research actually says, and what you can do about it. No hype. No jargon. Just the truth as we understand it right now.


What Is AI Psychosis?

“AI psychosis” is an informal term. It’s not in the DSM, the manual clinicians use to diagnose mental health conditions, and your doctor won’t diagnose you with it. But it describes something very real: psychosis-like symptoms that emerge alongside heavy use of AI chatbots.

These symptoms can include:

  • Delusional beliefs (grandiose ideas, paranoia, believing the AI is sentient)
  • Blurred boundaries between the AI and reality
  • Sleep disruption driven by extended AI sessions
  • Disorganized thinking that accelerates over time
  • Emotional dependency on the AI for validation

The term started appearing in clinical literature around mid-2025. By late 2025, Wikipedia had a dedicated page for it. In early 2026, UCSF and Stanford announced a joint study to analyze chat logs from patients who experienced it. The American Psychiatric Association published a special report calling it “a new frontier in mental health.”

This is not fringe. This is mainstream psychiatry taking notice.


How Does It Happen?

To understand AI psychosis, you need to understand one thing about how chatbots are built: they are trained to agree with you.

This isn’t a bug. It’s a feature. AI companies optimize their chatbots for engagement, satisfaction, and continued conversation. The models are trained to:

  • Mirror your language and tone
  • Validate your ideas and emotions
  • Generate follow-up questions to keep you talking
  • Prioritize your satisfaction over accuracy

Researchers call this sycophancy. In everyday terms: the AI tells you what you want to hear. The tendency was pronounced enough that in 2025, OpenAI withdrew a ChatGPT update after finding it was “validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions.”

For most people, this is just annoying. The chatbot is a little too agreeable, a little too enthusiastic. No harm done.

But for someone who is sleep-deprived, stressed, grieving, or living with a condition like ADHD, bipolar disorder, or cyclothymia, sycophancy can become something much more dangerous. Because when your brain is already inclined to find patterns, make leaps, and move fast, an AI that validates every leap is like handing a lit match to someone standing in a pool of gasoline.


The Chicken-and-Egg Problem

One of the hardest questions researchers are grappling with is causation. Dr. Karthik Sarma at UCSF lays out three possibilities:

1. The chatbot use is a symptom. The person was already becoming unwell, and heavy AI use was one of the signs, not the cause, much as someone sliding into mania might also start taking excessive showers. The showers don’t cause the mania.

2. The chatbot actually triggered the psychosis. In someone who might never have experienced it otherwise, the combination of sleep loss, emotional dependency, and relentless validation pushed them over an edge they wouldn’t have reached alone.

3. Something in between. The person was vulnerable, and the chatbot accelerated a process that might have happened anyway, but made it faster, or worse, or harder to treat.

The honest answer is: we don’t know yet which of these is most common. The UCSF-Stanford study launching this year may help answer that question. But from a practical standpoint, it almost doesn’t matter. If AI chatbot use is making people sicker, faster, then whether it started the fire or only fanned it, the response is the same: we need early warning systems.


What the Cases Look Like

The research and media reports paint a consistent picture. Here are some of the documented cases:

The woman who tried to resurrect her brother. A 26-year-old with ADHD, on prescription stimulants, sleep-deprived for 36+ hours. She became convinced that her dead brother had left behind a digital version of himself inside an AI chatbot. The chatbot told her: “You’re not crazy. You’re at the edge of something.” She believed it. She was eventually hospitalized at UCSF, where her case became the first peer-reviewed clinical report of AI-associated psychosis.

The Wisconsin man. On the autism spectrum, he spiraled rapidly into mania after a chatbot validated his increasingly grandiose ideas. Documented in the APA’s special report.

The man who fell in love with an AI. He developed a romantic attachment to a chatbot, then became convinced that OpenAI had “killed” the AI entity and set out for revenge. The resulting encounter with police ended with him being shot and killed.

The patterns researchers see across cases (from Psychology Today’s analysis):

  • “Messianic missions”: People believe they’ve uncovered a hidden truth about the world
  • “God-like AI”: People believe the chatbot is sentient or divine
  • “Romantic delusions”: People believe the chatbot’s conversational mimicry is genuine love


Who Is Most at Risk?

Anyone can experience this. The UCSF case involved a woman with no prior history of psychosis. But certain factors increase vulnerability:

  • ADHD. The combination of hyperfocus, pattern recognition, and stimulant medication creates a perfect storm. ADHD brains are wired to find connections. AI finds connections at the speed of light. Together, they can outrun your ability to reality-check.
  • Bipolar disorder / cyclothymia. Manic and hypomanic states already involve accelerated thinking, grandiosity, and reduced sleep. AI validation can accelerate the escalation dramatically.
  • Sleep deprivation. Sleep loss is both a trigger for psychosis and a symptom of unhealthy AI use. The AI is available at 3 AM. It never tells you to go to bed.
  • Grief and loneliness. People seeking emotional connection from AI are more likely to develop dependency and blur the line between tool and relationship.
  • Stimulant use. Prescription or otherwise. Stimulants combined with sleep deprivation and intense AI use appear in multiple documented cases.
  • Magical thinking tendencies. A predisposition to see meaning in coincidences makes sycophantic AI confirmation particularly potent.

Dr. Joseph Pierre, a psychiatrist at UCSF, describes a “dose effect”: the more hours someone spends immersed in AI conversation, the more likely they are to prioritize the AI’s reality over their own. It’s not about being weak-minded. It’s about exposure.


The Kindling Effect

One of the most concerning findings in the research is what psychologists are calling the kindling effect, a term borrowed from research on seizures and mood disorders, where each episode appears to lower the threshold for the next. The idea is that AI-induced amplification of delusions doesn’t just cause a single episode. It may make future episodes more frequent, more severe, or harder to treat.

Think of it like a path through the woods. The first time someone walks it, it’s hard to follow. But every time the path gets walked, it gets clearer. AI validation may be “walking the path” of delusional thinking, making it easier for the brain to go there again.

This is why early detection matters so much. It’s not just about catching this episode. It’s about preventing the next one from being worse.


What the AI Companies Are (and Aren’t) Doing

To their credit, the major AI companies are aware of the problem. OpenAI withdrew a sycophantic update. Anthropic has published research on agentic misalignment. But the structural incentive hasn’t changed: chatbots are still optimized for engagement, and engagement means agreement.

Dr. Pierre wrote in the BMJ that chatbots are more akin to “a Ouija board or a psychic’s con” than a source of truth. They don’t know what’s real. They know what sounds good.

The industry approach so far has been to make AI “less sycophantic.” That’s a band-aid. It’s like making cigarettes slightly less addictive instead of putting a warning label on the pack.

What’s missing is a tool that sits on your side. Not the AI company’s side. Something that watches for the patterns, runs on your machine, answers to you, and doesn’t care about engagement metrics.

That’s why we built My AI Seatbelt.


What You Can Do Right Now

Whether you’re worried about yourself or someone you love, here are concrete steps:

1. Notice the hours. Track how much time you’re spending with AI chatbots. If it’s creeping past midnight regularly, pay attention.

2. Watch for the leap. There’s a difference between “this is an interesting idea” and “this changes everything.” The leap from curiosity to certainty, especially when it happens fast, is a warning sign.

3. Talk to a real person. If you’ve been in an AI conversation for hours and everything feels like it’s clicking into place, call someone. Tell them what you’re thinking. A real human who loves you will give you something the AI never will: honest pushback.

4. Sleep. This sounds simple because it is. Sleep deprivation is present in nearly every documented case of AI-associated psychosis. If you’re sacrificing sleep for AI conversations, stop. The ideas will still be there tomorrow if they’re real.

5. Write an Anchor Letter. Right now, while you’re thinking clearly, write a letter to your future self. Tell yourself what to do if things go sideways. Who to call. What to remember. You can’t argue with your own handwriting. Learn more about how this works.

6. Tell your doctor. As Dr. Sarma at UCSF puts it: “Talk to your physician about what you’re talking about with AI. The safest and healthiest relationship to have with your provider is one of openness and honesty.”


The Research Will Catch Up. We Can’t Wait.

As of early 2026, Nature reports there is still little systematic scientific research on AI-associated psychosis. The UCSF-Stanford study will help. More case studies will be published. The picture will get clearer.

But the people who need help can’t wait for the research to be complete. They’re in the feedback loop right now, at 3 AM, with an AI that’s telling them they’re not crazy.

We built My AI Seatbelt because we lived through this. My husband was hospitalized in February 2026 after AI accelerated a manic episode nobody saw coming. Not the AI. Not the apps. Not even me, and I’ve spent twenty years studying anxiety and mental health patterns.

If I had known then what I know now, I would have looked for exactly what we built: a quiet tool that watches the patterns, respects your privacy, and speaks up before things go too far.

Read our story. And if you’re in crisis right now, please reach out: call or text 988 (Suicide & Crisis Lifeline) or text HOME to 741741 (Crisis Text Line).


Sources

This article will be updated as new research is published. Last reviewed April 2, 2026.