The Chief Executive Officer of OpenAI, Sam Altman, has issued a strong warning about the privacy risks users face when sharing sensitive information with ChatGPT. Speaking on the “This Past Weekend” podcast hosted by comedian Theo Von, Altman explained that, unlike conversations with doctors, lawyers, or therapists, interactions with ChatGPT are not protected under any form of legal privilege. This means private details shared with the chatbot could be legally accessed and used in court proceedings.
Altman expressed deep concern about the emotional reliance many users, particularly younger ones, have developed on ChatGPT. “People talk about the most personal shit in their lives to ChatGPT,” he said. “Young people especially use it as a therapist or a life coach, sharing relationship problems and asking, ‘What should I do?’” He noted that this kind of emotional trust is growing rapidly, but the legal system has yet to catch up with the realities of how AI is being used.
In contrast to the confidentiality laws that protect client conversations with licensed professionals, there are currently no clear regulations or legal shields guarding the content of AI-generated chats. Altman warned that this gap could have serious consequences: “If someone confides their most personal issues to ChatGPT, and that ends up in legal proceedings, we could be compelled to hand that over. That’s a real problem.”
Altman’s remarks shine a light on a growing legal and ethical dilemma in the age of artificial intelligence. As more people turn to tools like ChatGPT for advice, therapy-like support, or even confessions, the absence of legal protections places their data at significant risk.
He emphasized that the lack of clear privacy protections around AI conversations is not just a theoretical concern—it’s a real issue that needs urgent attention. “I think that’s very screwed up,” Altman said. “We should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever. And no one had to think about that even a year ago.”
The OpenAI boss is now calling on lawmakers, tech companies, and regulators to consider creating new privacy standards tailored to AI platforms. His warning adds to growing calls for responsible AI governance and legal reform, as millions of users continue to turn to AI tools for everything from academic help and legal guidance to emotional support.
In the meantime, Altman’s message is clear: users should be cautious about what they share with AI tools, because those conversations are not yet legally shielded, and in the wrong circumstances they could come back to haunt them.