OpenAI CEO Sam Altman, center, looks away from the table at a dinner with various other tech leaders hosted by President Donald Trump in Washington, DC, on Sept. 4, 2025.
Saul Loeb/AFP via Getty Images
The past month has brought a wave of attention to the impacts of artificial intelligence chatbots — and the behaviors of the Bay Area companies that peddle them. Now, the federal government is wading into the fray.
On Thursday, the Federal Trade Commission ordered several tech companies to cough up information about how they test, make, distribute and monetize their AI chatbots. The probe isn’t for a specific law enforcement case, but Commissioner Mark Meador wrote that the suicide of 16-year-old Californian Adam Raine, who left behind a grim log of conversations with ChatGPT, was one of the “troubling developments” prompting the orders.
ChatGPT’s maker, San Francisco-headquartered OpenAI, is one target of the FTC’s order. The inquiry also includes Google parent Alphabet, Snapchat parent Snap, Instagram and its owner Meta, Elon Musk’s xAI, and the popular chatbot maker Character.AI.
The FTC’s news release said the probe is meant to gauge how the companies evaluate chatbots’ safety, how they warn users and parents about the associated risks, and what steps they’ve taken to “limit the products’ use by and potential negative effects on children and teens.”
The orders themselves span 18 pages, with lengthy lists of requirements. All of the companies are asked to provide details and documents about various aspects of their chatbots, their users and user research, their compliance practices, their data collection and use, and even the complaints they have received about their chatbots.
If made public, full responses to these orders would provide a treasure trove of revelations about how these companies make and deploy their chatbots. A letter accompanying the orders said the FTC hopes to discuss the timing of the companies’ submissions by telephone by Sept. 25. FTC Chairman Andrew Ferguson said in the Thursday news release that the “Trump-Vance FTC” has dual goals: protecting children and fostering AI innovation.
The ratcheting up of regulatory scrutiny comes after Raine’s parents sued OpenAI, Reuters published major stories about Meta’s chatbot policies and a tragic death, and the Wall Street Journal reported on a man who killed his mother and then took his own life after ChatGPT fed his delusions. As SFGATE reported, the slate of bad news has already prompted the start of a reckoning, with both Meta and OpenAI committing to policy changes.
More changes may be on the way. OpenAI CEO Sam Altman said in an interview with Tucker Carlson published Wednesday that the company was considering a change under which the system would automatically call authorities if a teenager were seriously discussing suicide. He later said, “I haven’t had a good night of sleep since ChatGPT launched,” and estimated that as many as 1,500 of the people who die by suicide each week may have talked with ChatGPT at some point beforehand. “Maybe we could have said something better,” he said. “Maybe we could have been more proactive.”
Meta, Google and xAI did not respond to SFGATE’s requests for comment Thursday. Spokespeople for the other companies each emphasized existing safety measures and pledged to respond to the orders.
OpenAI spokesperson Liz Bourgeois wrote to SFGATE that the company prioritizes making ChatGPT safe and helpful and that it’s “committed to engaging constructively and responding” to the FTC directly. She pointed to existing safeguards and new protections for teens, such as parental controls and a notification system for when a teen is in “acute distress.”
Character.AI reminds its users that its chatbots aren’t real people and that everything a chatbot says should be treated as fiction, per a statement to SFGATE from Head of Trust and Safety Jerry Ruoti. He linked to the company’s safety policies and said, “We look forward to collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology.”
Snap spokesperson Monique Bellamy wrote to SFGATE that the company has “rigorous safety and privacy processes” for Snapchat’s “My AI” feature.
“We share the FTC’s focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters US innovation while protecting our community,” she added.
If you are in distress, call the Suicide & Crisis Lifeline 24 hours a day at 988, or visit 988lifeline.org for more resources.
Work at a Bay Area tech company and want to talk? Contact tech reporter Stephen Council securely at stephen.council@sfgate.com or on Signal at 628-204-5452.