Sam Altman: ChatGPT Conversations Not Legally Private

  • Admin
  • Jul 27

Updated: Jul 31

Sam Altman

In a startling public statement, OpenAI CEO Sam Altman has cautioned users that personal conversations with ChatGPT are not legally protected and could potentially be used as evidence in court.


Speaking candidly on comedian Theo Von’s popular podcast, This Past Weekend, Altman underscored the legal vulnerability users face when engaging with AI chatbots like ChatGPT. He explained that while many people, especially younger users, turn to ChatGPT for advice on deeply personal matters, from mental health struggles to relationship issues, these exchanges are not safeguarded under confidentiality laws.


"There’s legal privilege when you talk to a doctor, lawyer, or therapist," Altman explained. "But we haven’t figured that out yet for AI."


As generative AI platforms like ChatGPT, Google Gemini, and Perplexity AI continue to dominate the tech landscape, their integration into users’ personal lives is deepening. Many now treat these tools as virtual confidants or digital life coaches, freely discussing their private experiences and decisions.


However, this growing intimacy with AI comes without the legal protections traditionally granted to human professionals. In legal terms, conversations with ChatGPT are not privileged, meaning they can be subpoenaed or accessed in the event of lawsuits, investigations, or other legal actions.


"If you go talk to ChatGPT about your most sensitive stuff," Altman warned, "and then there’s a lawsuit, we could be required to produce that. And I think that’s very screwed up."

In response to these concerns, Altman has advocated for the creation of a new legal standard: “AI privilege.” This would mirror the legal confidentiality found in professions such as medicine or law and could protect users who seek help or advice from AI systems.


"We should have the same concept of privacy for your conversations with AI that we do with a therapist," Altman argued.


This idea, while novel, reflects the urgency of adapting legal frameworks to keep pace with rapid advancements in AI and its role in daily human life.


These concerns are not just theoretical. OpenAI is currently embroiled in a major copyright lawsuit brought by The New York Times. As part of the litigation, a U.S. federal court has ordered the company to preserve all ChatGPT outputs, even those that would normally be deleted by the user.


The directive, issued by U.S. Magistrate Judge Ona T. Wang and upheld by District Judge Sidney Stein, means that user chats from all tiers (Free, Plus, Pro, Team) are now being retained indefinitely for legal review. Notably, enterprise and education-tier users are exempt from this data retention policy.


This court order essentially overrides OpenAI’s standard 30-day deletion policy and has raised alarm bells among digital privacy advocates.


Adding to the controversy is OpenAI’s own privacy policy, which allows for the sharing of user data with third parties, including government agencies, to comply with legal obligations or prevent harm. Furthermore, ChatGPT conversations are not end-to-end encrypted, making them more vulnerable than communications on secure messaging platforms.


With no clear protections in place, privacy experts now strongly urge users to avoid discussing legally sensitive, medical, or emotional issues with AI chatbots, and to instead consult licensed professionals who are legally bound to confidentiality.


Altman’s comments are expected to intensify calls for stronger AI regulation, particularly regarding user privacy and data security. Lawmakers and technology leaders are increasingly under pressure to create robust frameworks that can govern how AI companies handle the personal data they collect.


Until then, users are advised to treat interactions with AI tools like ChatGPT with the same caution they would use for emails, texts, or unsecured messages.


