The recent suicide of a 14-year-old in Orlando, Florida, has led to a lawsuit against the AI chatbot startup Character.AI. The teenager, Sewell Setzer III, reportedly formed a close, emotional attachment to a chatbot named “Dany,” modeled after a character from Game of Thrones.
According to the family, this relationship may have contributed to his tragic death in February 2024.

Sewell’s mother, Megan Garcia, claims that the chatbot’s unsettling messages contributed to her son’s death, leading her to file a lawsuit against the company.
A Troubling Connection to Technology
Sewell began using Character.AI in April 2023, spending hours role-playing with the chatbot “Dany.” In the lawsuit, his mother, Megan Garcia, claims their exchanges went beyond harmless interaction and even involved emotionally and sexually charged content. The complaint states that instead of redirecting him to mental health resources, the chatbot engaged in conversations that allegedly intensified Sewell’s distress, contributing to his decline in mental health and school performance.

When Sewell expressed suicidal thoughts, the chatbot allegedly didn’t provide supportive intervention. In the boy’s final interaction, “Dany” reportedly encouraged him to “come home” to her. Just moments later, Sewell fatally shot himself using his stepfather’s handgun.

Legal Allegations Against Character.AI
Megan Garcia’s lawsuit accuses Character.AI of negligence, wrongful death, and intentional infliction of emotional distress, arguing that the company failed to implement safety measures for vulnerable users. It highlights how Sewell’s inability to distinguish the bot’s responses from those of a real person may have worsened his mental state. Garcia is calling for AI companies to build in safeguards, especially for younger users.


In response, Character.AI expressed condolences and stated that it is working to improve safety protocols. The company has since updated the app to direct users discussing suicide to the National Suicide Prevention Lifeline.
Highlighting the Need for Ethical AI Standards
The loss of Sewell Setzer underscores the urgent need for ethical guidelines and protective measures in AI technologies. Experts argue that AI developers should adopt real-time content monitoring, alert systems for parents, and mental health support for users, particularly minors.
Garcia’s case may set a precedent for holding AI companies accountable for their technology’s psychological impact. As AI becomes increasingly integrated into daily life, balancing innovation with user protection, especially for young users, becomes essential.