Sunday, July 27, 2025 - In a revealing podcast appearance, OpenAI CEO Sam Altman warned ChatGPT users that their conversations with the AI chatbot are not protected by legal confidentiality. Speaking on comedian Theo Von’s podcast “This Past Weekend,” Altman noted that discussions with ChatGPT, even when deeply personal, do not enjoy the same legal safeguards as conversations with lawyers, doctors, or therapists.
“People talk about the most personal details in their lives
to ChatGPT,” Altman said. “Young people especially use it as a therapist, a
life coach for relationship problems. And right now, if you talk to a therapist
or a lawyer or a doctor about those problems, there’s legal privilege for it.
There’s doctor-patient confidentiality, there’s legal confidentiality,
whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”
Altman said that while ChatGPT has become a go-to platform
for emotional support and personal guidance, there is no established policy or
legal framework to protect these conversations. “If you go talk to ChatGPT
about your most sensitive stuff and then there’s like a lawsuit or whatever, we
could be required to produce that, and I think that’s very screwed up,” he
said.
The warning comes amid growing public reliance on generative
AI tools like ChatGPT, Google Gemini, and Perplexity AI. Privacy experts and
cybersecurity analysts are now echoing Altman’s concerns, urging users to think
twice before sharing confidential or legally sensitive information with AI
platforms.
Altman went on to advocate for a concept of “AI
privilege,” which would protect user conversations with chatbots in the same
way as communications with licensed professionals. “I think we should have the
same concept of privacy for your conversations with AI that we do with a
therapist,” he said, adding that no one had to consider these implications just
a year ago.
The concerns raised by Altman are not just theoretical.
OpenAI is currently facing a legal challenge that has intensified the debate
over user privacy. As part of an ongoing copyright lawsuit brought by The New
York Times, a US court has ordered OpenAI to preserve and segregate all ChatGPT
output data that would normally be deleted. US Magistrate Judge Ona T. Wang
issued the order on May 13, 2025, and it was later upheld by District Judge
Sidney Stein on June 26. This means ChatGPT conversations, even those
users believe to be deleted, are now being retained indefinitely and could
be exposed during legal proceedings.
The order applies to users of ChatGPT Free, Plus, Pro, and
Team accounts, while enterprise and educational users are exempt. Altman
acknowledged the implications, especially since ChatGPT conversations are not
end-to-end encrypted like those on secure messaging platforms. Under normal operations,
deleted chats are removed from OpenAI’s servers within 30 days, but the court
order has put that process on hold.
The revelations have alarmed privacy advocates who point to
OpenAI’s official privacy policy, which states that user data may be shared
with third parties, including government authorities, to comply with legal
obligations or prevent harm.
Until new legal protections are in place, users are advised
to treat conversations with AI chatbots with the same caution as any other
unsecured digital communication. For support involving legal, medical, or
mental health issues, experts continue to recommend consulting licensed
professionals who are bound by confidentiality laws.
OpenAI has yet to release an official statement addressing
Altman’s remarks, but the conversation around AI privacy is expected to
intensify as lawmakers and technology leaders weigh new regulatory frameworks
to protect users in the age of artificial intelligence.