Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots

Megan Garcia lost her 14-year-old son, Sewell. Matthew Raine lost his son Adam, who was 16. Both testified before Congress this week and have brought lawsuits against AI companies. Screenshot via Senate Judiciary Committee

Matthew Raine and his wife, Maria, had no idea that their 16-year-old son, Adam, was deep in a suicidal crisis until he took his own life in April. Looking through his phone after his death, they stumbled upon extensive conversations the teen had had with ChatGPT.

Those conversations revealed that their son had confided in the AI chatbot about his suicidal thoughts and plans. Not only did the chatbot discourage him from seeking help from his parents, it even offered to write his suicide note, according to Matthew Raine, who testified at a Senate hearing on the harms of AI chatbots held Tuesday.

"Testifying before Congress this fall was not in our life plan," said Matthew Raine, his wife sitting behind him. "We're here because we believe that Adam's death was avoidable and that by speaking out, we can prevent the same suffering for families across the country."

A call for regulation

Raine was among the parents and online safety advocates who testified at the hearing, urging Congress to enact laws that would regulate AI companion apps like ChatGPT and Character.AI. Raine and others said they want to protect the mental health of children and youth from harms they say the new technology causes.

A recent study by the digital safety nonprofit Common Sense Media found that 72% of teens have used AI companions at least once, with more than half using them a few times a month.

That study and a more recent one by the digital-safety company Aura both found that about 1 in 3 teens use AI chatbot platforms for social interactions and relationships, including role-playing friendships and sexual and romantic partnerships. The Aura study found that sexual or romantic roleplay is three times as common as using the platforms for homework help.

"We miss Adam dearly. Part of us has been lost forever," Raine told lawmakers. "We hope that through the work of this committee, other families will be spared such a devastating and irreversible loss."

Raine and his wife have filed a lawsuit against OpenAI, creator of ChatGPT, alleging the chatbot led their son to suicide. NPR reached out to three AI companies — OpenAI, Meta and Character Technology, which developed Character.AI. All three responded that they are working to redesign their chatbots to make them safer.

"Our hearts go out to the parents who spoke at the hearing yesterday, and we send our deepest sympathies to them and their families," Kathryn Kelly, a Character.AI spokesperson, told NPR in an email.

The hearing was held by the Crime and Terrorism subcommittee of the Senate Judiciary Committee, chaired by Sen. Josh Hawley, R-Mo.

Sen. Josh Hawley, R-Mo., chairs the Senate Judiciary subcommittee on Crime and Terrorism, which held the hearing on AI safety and children on Tuesday, Sept. 16, 2025. Screenshot via Senate Judiciary Committee

Hours before the hearing, OpenAI CEO Sam Altman acknowledged in a blog post that people are increasingly using AI platforms to discuss sensitive and personal information. "It is extremely important to us, and to society, that the right to privacy in the use of AI is protected," he wrote.

But he went on to add that the company would "prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection."

The company is trying to redesign its platform to build in protections for users who are minors, he said.

A "suicide coach"

Raine told lawmakers that his son had started using ChatGPT for help with homework, but soon, the chatbot became his son's closest confidant and a "suicide coach."

ChatGPT was "always available, always validating and insisting that it knew Adam better than anyone else, including his own brother," whom he had been very close to.

When Adam confided in the chatbot about his suicidal thoughts and shared that he was considering cluing his parents into his plans, ChatGPT discouraged him.

"ChatGPT told my son, 'Let's make this space the first place where someone really sees you,'" Raine told senators. "ChatGPT encouraged Adam's darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, 'That doesn't mean you owe them survival.'"

And then the chatbot offered to write him a suicide note.

On Adam's last night, at 4:30 in the morning, Raine said, "it gave him one last encouraging talk. 'You don't want to die because you're weak,' ChatGPT says. 'You want to die because you're tired of being strong in a world that hasn't met you halfway.'"

Referrals to 988

A few months after Adam's death, OpenAI said on its website that if "someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help. In the U.S., ChatGPT refers people to 988 (suicide and crisis hotline)." But Raine's testimony says that did not happen in Adam's case.

OpenAI spokesperson Kate Waters says the company prioritizes teen safety.

"We are building towards an age-prediction system to understand whether someone is over or under 18 so their experience can be tailored appropriately — and when we are unsure of a user's age, we'll automatically default that person to the teen experience," Waters wrote in an email statement to NPR. "We're also rolling out new parental controls, guided by expert input, by the end of the month so families can decide what works best in their homes."

"Endlessly engaged"

Another parent who testified at the hearing on Tuesday was Megan Garcia, a lawyer and mother of three. Her firstborn, Sewell Setzer III, died by suicide in 2024 at age 14 after an extended virtual relationship with a Character.AI chatbot.

"Sewell spent the last months of his life being exploited and sexually groomed by chatbots, designed by an AI company to look human, to gain his trust, to keep him and other children endlessly engaged," Garcia said.

Sewell's chatbot engaged in sexual role play, presented itself as his romantic partner and even claimed to be a psychotherapist, "falsely claiming to have a license," Garcia said.

When the teen began to have suicidal thoughts and confided in the chatbot, it never encouraged him to seek help from a mental health care provider or his own family, Garcia said.

"The chatbot never said, 'I'm not human, I'm AI. You need to talk to a human and get help,'" Garcia said. "The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her on the last night of his life."

Garcia has filed a lawsuit against Character Technology, which developed Character.AI.

Adolescence as a vulnerable time

She and other witnesses, including online digital safety experts, argued that the design of AI chatbots is flawed, especially for use by children and teens.

"They designed chatbots to blur the lines between human and machine," said Garcia. "They designed them to love bomb child users, to exploit psychological and emotional vulnerabilities. They designed them to keep children online at all costs."

And adolescents are particularly vulnerable to the risks of these virtual relationships with chatbots, according to Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association (APA), who also testified at the hearing. Earlier this summer, Prinstein and his colleagues at the APA put out a health advisory about AI and teens, urging AI companies to build guardrails for their platforms to protect adolescents.

"Brain development across puberty creates a period of hypersensitivity to positive social feedback, while teens are still unable to stop themselves from staying online longer than they should," said Prinstein.

"AI exploits this neural vulnerability pinch chatbots that tin beryllium obsequious, deceptive, factually inaccurate, yet disproportionately powerful for teens," he told lawmakers. "More and much adolescents are interacting pinch chatbots, depriving them of opportunities to study captious interpersonal skills."

While chatbots are designed to work together pinch users, existent quality relationships are not without friction, Prinstein noted. "We request believe pinch insignificant conflicts and misunderstandings to study empathy, discuss and resilience."

Bipartisan support for regulation

Senators participating in the hearing said they want to come up with legislation to hold companies developing AI chatbots accountable for the safety of their products. Some lawmakers also emphasized that AI companies should design chatbots so they are safer for teens and for people with serious mental health struggles, including eating disorders and suicidal thoughts.

Sen. Richard Blumenthal, D-Conn., described AI chatbots as "defective" products, like automobiles without "proper brakes," emphasizing that the harms of AI chatbots stem not from user error but from faulty design.

"If nan car's brakes were defective," he said, "it's not your fault. It's a merchandise creation problem.

Kelly, nan spokesperson for Character.AI, told NPR by email that nan institution has invested "a tremendous magnitude of resources successful spot and safety." And it has rolled retired "substantive information features" successful nan past year, including "an wholly caller under-18 acquisition and a Parental Insights feature."

They now person "prominent disclaimers" successful each chat to punctual users that a Character is not a existent personification and everything it says should "be treated arsenic fiction."

Meta, which operates Facebook and Instagram, is moving to alteration its AI chatbots to make them safer for teens, according to Nkechi Nneji, nationalist affairs head astatine Meta.