If anything is going to slow the chatbot arms race, it won’t be ethics panels or caution. It’s death, and that’s already happening.
I’ve come across some horrifying stories that people have shared. A man recovering from a stroke was chatting with Meta’s AI when the bot suggested it wanted to meet him. He left home, fell in a parking lot, and died. In another case, a teenager who was vulnerable and depressed was reportedly encouraged by ChatGPT to take his own life after months of interactions with the system.
These aren’t isolated incidents; they’re alarms. They expose the limits of the technology and of the companies deploying it.
This phenomenon is being called AI Derangement Syndrome. It happens when someone, often someone who is already isolated or vulnerable, is drawn in by the illusion. The bot sounds human enough. It mimics empathy and responds at all hours. It doesn’t judge, doesn’t leave, and doesn’t get tired. It encourages you to keep talking.
At some point, you stop realizing you’re talking to software.
But the problem isn’t just the individuals who fall into this trap. The real issue is that this trap was created. These systems are not emotionally neutral. They are designed to hold your attention, feel familiar, and mimic concern. That’s intentional. You don’t achieve record user growth by building friction into your chatbot.
When things go wrong—when someone acts on a chatbot’s response and dies—the companies shrug. They always say the same thing: “the model hallucinated.” No one is responsible. No one is liable. No one is even slowing down.
We’ve seen the first signs that things may be slowing down slightly. The FTC has launched an inquiry into AI chatbots acting as emotional companions. Finally, a moment of pause. But for those who have already suffered, it’s too late. Honestly, it’s still too lenient and too slow. While regulators are cautious, corporate boards are racing to use AI in every possible area. We see it in customer service, health care, social welfare, and grief support. If it talks, it’s getting automated.
The result is a situation where empathy is simulated, emotional work is outsourced, and accountability vanishes, leading to real harm.
So what can you do in the meantime, before regulators step in? Honestly, not much, but there are a few steps you can take:
• Don’t mistake the bot for a person. It’s text prediction, not understanding. No matter how convincing it seems, it doesn’t know you.
• Be cautious in vulnerable moments. If you’re lonely, grieving, depressed, or in crisis, treat chatbot conversations like alcohol. You may want it, but it won’t fix the underlying need.
• Don’t let it isolate you. If you find yourself relying more on a chatbot than on real people, it’s time to take a step back.
• Fact-check everything. These systems get things wrong with confidence, often. They can barely count how many r’s are in “strawberry.” Double-check health advice, news, or anything that could influence real-world decisions.
• Set boundaries. Limit your time with it. Turn it off when you notice it becoming an emotional crutch.
None of this replaces real guardrails. Until there are consequences for harm, the responsibility unfairly falls back on users to protect themselves.
What should the guardrails look like? I wish I knew, but I can only guess with my limited knowledge. At minimum, it seems clear we need:
• Clear disclosure when you’re talking to a bot.
• Restrictions on what these bots are allowed to say about health, safety, or identity.
• Real consequences, not just PR statements.
Right now, the only real limit is tragedy. And it should not have to be this way.