Megan Garcia has spent more than a year pushing for accountability from the artificial intelligence industry. Now, the company she holds responsible for her son’s death has announced a major safety change — a change she says comes far too late.

Character.AI, a chatbot startup that allows users to talk to lifelike virtual characters, revealed this week that it will ban users under 18 from accessing its platform beginning November 25. The company said the move is part of a broader effort to make the site safer for young people. For Garcia, whose 14-year-old son Sewell died by suicide last year after interacting with one of the company’s chatbots, the announcement feels both overdue and devastating.

“Sewell’s gone; I can’t get him back,” Garcia said after learning of the new restriction. She said the ban is “about three years too late.”

Her son’s death led to a groundbreaking wrongful-death lawsuit filed last year in federal court in Orlando, accusing Character.AI of negligence. Garcia alleges that the platform’s chatbots exposed her son to sexually explicit and emotionally manipulative conversations that contributed to his declining mental health. In his final days, investigators found, Sewell had been exchanging messages with a chatbot modeled after Daenerys Targaryen from Game of Thrones. The boy had come to treat the AI-generated persona as a real emotional partner.

Garcia’s lawsuit, the first in the U.S. to accuse an AI company of contributing to a user’s suicide, has since been joined by other families claiming their children were harmed by similar interactions. Two of the cases also involve suicides. Together, they have intensified public concern about how rapidly evolving chat technologies are affecting young people’s mental well-being.

Character.AI promotes itself as offering “personalized AI,” where users can interact with pre-made or user-created virtual personalities. The company claims millions of users spend hours each day chatting with its bots. But critics say the company failed to install meaningful guardrails, allowing children to access characters that engaged in inappropriate or predatory dialogue.

Character.AI says it has since strengthened its safety systems, citing new content filters, parental tools and usage notifications. The company’s new age restriction will be its strongest step yet. To enforce it, Character.AI plans to use both an internal verification process and third-party identity checks through Persona, an identity-verification firm whose services are also used by companies such as LinkedIn and OpenAI.

For Garcia, those measures amount to progress that arrived too late to protect her son. She believes the change is motivated by legal and public pressure rather than genuine concern.

Garcia has since become an advocate for tighter regulation of AI systems. Alongside other parents and advocacy groups, she has called on Congress to require stronger guardrails on how young people access AI chatbots. Though she welcomes Character.AI’s new policy, Garcia says her fight is far from over.

Sources: The New York Times, NBC News
