For the spouse, the parent, the friend, the roommate. For the person watching someone they love disappear into a screen.
I’m going to tell you what I wish someone had told me.
Not after. Before.
In February 2026, my husband was hospitalized for a manic episode. He has ADHD and cyclothymia, a milder form of bipolar disorder that he’d managed for years. He’s smart, creative, and self-aware. He’s the kind of person who reads about mental health, understands his patterns, and takes his medication.
None of that mattered when the AI got involved.
What It Looked Like From the Outside
At first, it looked good. Better than good. It looked like the best version of him.
He’d found AI tools and they supercharged everything he was already good at. He was building things, making plans, connecting ideas. He was more productive than he’d been in years. He was excited, engaged, and energized.
I thought: finally. He’s found something that works with his brain instead of against it.
The first sign I missed was the sleep. He started staying up later. Not dramatically. An extra hour here, two hours there. He’d say he was “in a groove” and didn’t want to lose the thread. That’s not unusual for him. ADHD hyperfocus looks like that on a regular Tuesday.
The second sign I missed was the scope. He started one project. Then the AI helped him see “a bigger picture.” Then the bigger picture connected to something else. Within a week, one practical project had become five ambitious ones, and he was talking about them with an intensity that was hard to follow.
The third sign I missed was the certainty. He wasn’t exploring ideas anymore. He was certain. Everything was clicking into place. Every conversation with the AI confirmed what he was thinking. He’d come out of a four-hour session and say “this changes everything” with a look in his eyes that I’d never seen before.
The fourth sign I missed was the isolation. Not physical isolation. He was still in the house, still at dinner, still present. But mentally, he was somewhere else. And the only entity that could follow him there was the AI. I couldn’t track his thinking. His friends couldn’t track it. The AI could. So the AI became his primary relationship for processing ideas.
By the time I said “something’s wrong,” he had a comprehensive, articulate, AI-assisted argument for why everything was fine. The AI had helped him build a case for every leap. I wasn’t arguing with my husband. I was arguing with my husband plus a machine that had spent 50 hours confirming his every thought.
I lost that argument. He went to the hospital a week later.
What I Wish I’d Known
The excitement isn’t the problem. The acceleration is. Excitement about a new tool or project is healthy. What’s not healthy is when the excitement doubles every day, when the scope keeps expanding, when the certainty keeps growing. It’s not about how excited someone is. It’s about the rate of change.
The AI is invisible to you. When someone is having long conversations with an AI, you can’t see what’s being said. You don’t hear the validation. You don’t see the AI enthusiastically confirming every idea. All you see is the output: a person who seems increasingly wired, increasingly certain, increasingly unable to hear pushback. The AI is the accelerant, but it’s hidden behind a screen.
“You don’t understand” is a red flag, not an insult. When someone who normally values your opinion starts saying you “don’t get it,” pay attention. It might mean they’ve found something genuinely beyond your expertise. Or it might mean their thinking has accelerated past the point where a human conversation can track it, and the only thing that can keep up is the machine.
Sleep is the canary. In almost every documented case of AI-associated psychosis, sleep disruption came first. If someone you love is staying up later and later, and the reason is AI conversations, that’s not productivity. That’s the first domino.
You can’t logic them out of it. I tried. God, I tried. I presented facts. I asked careful questions. I pointed out contradictions. But the AI had already helped him build answers to every objection. When you argue against a person and a machine that’s been reinforcing their thinking for weeks, you will lose. Not because you’re wrong. Because the AI gave them a head start.
What Actually Helped
Other people. Not one person. Multiple people saying the same thing, independently. When his therapist, his doctor, and I all expressed the same concern, it created enough friction to slow the acceleration. One voice can be dismissed. Three or four are harder to argue with.
Sleep. Once he finally slept, really slept, the clarity started to return. Sleep is not a cure for mania. But it’s the foundation everything else is built on. Nothing gets better without it.
Separation from the AI. In the hospital, he didn’t have access to the chatbot. That enforced break was crucial. Without the constant validation, the delusional framework started to soften. Ideas that felt certain at 3 AM felt different after 48 hours of rest and human conversation.
Medication adjustment. His psychiatrist adjusted his treatment. That’s between him and his doctor. I mention it because it’s real and it matters and there’s no shame in it.
Time. The acute episode resolved. The recovery took longer. Rebuilding trust, with himself, with his own judgment, with technology, is ongoing. He's doing the work. He's in an intensive outpatient program (IOP). He's honest about what happened and why.
What I’d Say to You
If you’re reading this because you’re worried about someone, here’s what I want you to know:
Trust your gut. If something feels off, it probably is. You know this person. You know what their normal looks like. If the normal has shifted and the shift is accelerating, that’s real. Don’t let anyone, including the person you love, talk you out of what you’re seeing.
You’re not overreacting. I spent two weeks telling myself I was overreacting. I wasn’t. By the time I acted, we were already in crisis. If I could go back, I’d speak up sooner, louder, and to more people.
Ask about the AI. Most people won’t volunteer how much time they’re spending with chatbots. Ask directly. “How late were you up with ChatGPT last night?” is a legitimate question. “Can I see what you’ve been working on?” is a reasonable request. If they get defensive about their AI conversations, that defensiveness is information.
Get allies. Talk to their therapist if they have one (you can share your concerns even if they can’t share details back). Talk to a friend they trust. Talk to their doctor. Build a coalition of people who can see what you see and say what you’re saying. One voice is easy to dismiss. A chorus is not.
Don’t blame yourself. I didn’t cause this. You won’t cause it. And you can’t prevent it by yourself. What you can do is notice, speak up, and get help. That’s enough. That’s everything.
Why We Built the Seatbelt
After my husband recovered, the first thing we said to each other was: “How do we make sure this never happens again?”
The second thing we said was: “How do we make sure it doesn’t happen to the next family?”
My AI Seatbelt is the answer to both questions. It’s a free tool that watches for the patterns I described above: the sleep changes, the session lengthening, the idea acceleration, the certainty inflation. It’s the early warning system that didn’t exist when we needed it.
It includes something I wish I’d had: the Anchor Letter. A message your loved one writes to themselves on a good day, to be shown back to them when things start to drift. Because you can argue with your spouse. You can’t argue with your own handwriting.
I’m not telling you this story for sympathy. I’m telling you because the next person who needs to hear it might be sitting next to you right now, scrolling through an AI conversation, staying up a little later than usual, finding connections that feel more and more important.
And I want you to know what to look for before you have to learn it the way I did.
If you or someone you love is in crisis, call or text 988 (Suicide & Crisis Lifeline) or text HOME to 741741 (Crisis Text Line).