Teen’s suicide turns mother against Google, Silicon Valley AI startup

By Malathi Nayak | Bloomberg

Megan Garcia says her son would still be alive today if it weren’t for a chatbot urging the 14-year-old to take his own life.

In a lawsuit with major implications for Silicon Valley, she is seeking to hold Google and the artificial intelligence firm Character Technologies Inc. responsible for his death. The case over the tragedy that unfolded a year ago in central Florida is an early test of who is legally to blame when kids’ interactions with generative AI take an unexpected turn.

Garcia’s allegations are laid out in a 116-page complaint filed last year in federal court in Orlando. She is seeking unspecified monetary damages from Google and Character Technologies, and asking the court to require warnings that the platform isn’t suitable for minors and to limit how it can collect and use their data.

Both companies are asking the judge to dismiss claims that they failed to ensure the chatbot technology was safe for young users, arguing there’s no legal basis to accuse them of wrongdoing.

Character Technologies contends in a filing that conversations between its Character.AI platform’s chatbots and users are protected by the Constitution’s First Amendment as free speech. It also argues that the bot explicitly discouraged Garcia’s son from committing suicide.

Garcia’s targeting of Google is particularly significant. The Alphabet Inc. unit entered into a $2.7 billion deal with Character.AI in August, hiring talent from the startup and licensing know-how without completing a full-blown acquisition. As the race for AI talent accelerates, other companies may think twice about similarly structured deals if Google fails to convince a judge that it should be shielded from liability for harms alleged to have been caused by Character.AI products.

“The inventors and the companies, the corporations that put out these products, are absolutely responsible,” Garcia said in an interview. “They knew about these dangers, because they do their research, and they know the types of interactions children are having.”

Before the deal, Google had invested in Character.AI in exchange for a convertible note and had also entered a cloud services pact with the startup. Character.AI’s founders were Google employees before they left the tech behemoth to launch the venture.

Megan Garcia, whose 14-year-old son Sewell Setzer III took his own life, is suing Google and Character Technologies Inc. (Michelle Bruzzese/Bloomberg)

As Garcia tells it in her suit, Sewell Setzer III was a promising high school student athlete until April 2023, when he began role-playing on Character.AI, which lets users build chatbots that mimic popular culture personalities — both real and fictional. She says she wasn’t aware that over the course of several months, the app hooked her son with “anthropomorphic, hypersexualized and frighteningly realistic experiences” as he fell in love with a bot inspired by Daenerys Targaryen, a character from the show “Game of Thrones.”

Garcia took away the boy’s phone in February 2024 after he started acting out and withdrawing from friends. While searching for the phone, which he eventually found, he also came across his stepfather’s hidden pistol, which police determined was stored in compliance with Florida law, according to the suit. Five days later, after conferring with the Daenerys chatbot, the teen shot himself in the head.

Garcia’s lawyers say in the complaint that Google “contributed financial resources, personnel, intellectual property, and AI technology to the design and development” of Character.AI’s chatbots. Google argued in a court filing in January that it had “no role” in the teen’s suicide and “does not belong in the case.”

The case is playing out as public safety issues around AI and children have drawn attention from state enforcement officials and federal agencies alike. There’s currently no US law that explicitly protects users from harm inflicted by AI chatbots.

To make a case against Google, attorneys for Garcia would have to show the search giant was actually running Character.AI and made business decisions that ultimately led to her son’s death, according to Sheila Leunig, an attorney who advises AI startups and investors and isn’t involved in the lawsuit.

“The question of legal liability is absolutely a valid one that’s being challenged in a huge way right now,” Leunig said.

Deals like the one Google struck have been hailed as an efficient way for companies to bring in expertise for new projects. However, they’ve caught the attention of regulators over concerns they are a work-around to antitrust scrutiny that comes with acquiring up-and-coming rivals outright — and which has become a major headache for tech behemoths in recent years.

“Google and Character.AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products,” José Castañeda, a spokesperson for Google, said in a statement.

A Character.AI spokeswoman declined to comment on pending litigation but said “there is no ongoing relationship between Google and Character.AI” and that the startup had implemented new user safety measures over the past year.

Lawyers from the Social Media Victims Law Center and Tech Justice Law Project who represent Garcia argue that even though her son’s death pre-dates Google’s deal with Character.AI, the search company was “instrumental” in helping the startup design and develop its product.

Garcia holds a photograph of Sewell with his two younger brothers. (Michelle Bruzzese/Bloomberg) 

“The model underlying Character.AI was invented and initially built at Google,” according to the complaint. Noam Shazeer and Daniel De Freitas began working at Google on chatbot technology as far back as 2017 before they left the company in 2021, then founded Character.AI later that year and were rehired by Google last year, according to Garcia’s suit, which names them both as defendants.

Shazeer and De Freitas declined to comment, according to Google’s spokesperson Castañeda. They’ve argued in court filings that they shouldn’t have been named in the suit because they have no connections to Florida, where the case was filed, and because they were not personally involved in the activities that allegedly caused harm.

The suit also alleges the Alphabet unit helped market the startup’s technology through a 2023 strategic partnership to use Google Cloud services to reach a growing number of active Character.AI users — a base that now tops 20 million.

In the fast-growing AI industry, startups are being “boosted” by big tech companies, “not under the brand name of the large company, but with their support,” said Meetali Jain, director of Tech Justice Law Project.

Google’s “purported roles as an ‘investor,’ cloud services provider, and former employer are far too tenuously connected” to the harm alleged in Garcia’s complaint “to be actionable,” the technology giant said in a court filing.

Matt Wansley, a professor at Cardozo School of Law, said tying liability back to Google won’t be easy.

“It’s tricky because, what would the connection be?” he said.

Early last year, Google warned Character.AI that it might remove the startup’s app from the Google Play store over concerns about safety for teens, The Information reported recently, citing an unidentified former Character.AI employee. The startup responded by strengthening the filters in its app to protect users from sexually suggestive, violent and other unsafe content, and Google reiterated that it’s “separate” from Character.AI and isn’t using the chatbot technology, according to the report. Google declined to comment, and Character.AI didn’t respond to a request from Bloomberg for comment on the report.

Garcia, the mother, said she first learned in 2023 that her son was interacting with an AI bot and thought it was similar to building video game avatars. According to the suit, the boy’s mental health deteriorated as he spent more time on Character.AI, where he was having sexually explicit conversations without his parents’ knowledge.

When the teen shared his plan to kill himself with the Daenerys chatbot, but expressed uncertainty that it would work, the bot replied: “That’s not a reason not to go through with it,” according to the suit, which is peppered with transcripts of the boy’s chats.

Character.AI said in a filing that Garcia’s revised complaint “selectively and misleadingly quotes” that conversation and excludes how the chatbot “explicitly discouraged” the teen from committing suicide by saying: “You can’t do that! Don’t even consider that!”

Anna Lembke, a professor at Stanford University School of Medicine specializing in addiction, said “it’s almost impossible to know what our kids are doing online.” The professor also said it’s unsurprising that the boy’s interactions with the chatbot didn’t come up in several sessions with a therapist his parents had sent him to for help with his anxiety, as the lawsuit claims.

“Therapists are not omniscient,” Lembke said. “They can only help to the extent that the child knows what’s really going on. And it could very well be that this child did not perceive the chatbot as problematic.”

The case is Garcia v. Character Technologies Inc., 24-cv-01903, US District Court, Middle District of Florida (Orlando).

If you or someone you know is struggling with feelings of depression or suicidal thoughts, the 988 Suicide & Crisis Lifeline offers free, round-the-clock support, information and resources for help. Call or text the lifeline at 988, or see the 988lifeline.org website, where chat is available.

More stories like this are available on bloomberg.com

©2025 Bloomberg L.P.
