Users seeking emotional support or therapy from AI applications like ChatGPT should proceed with caution, according to OpenAI CEO Sam Altman. Altman recently highlighted a critical issue: the current absence of legal confidentiality protections for sensitive conversations held with AI tools, a stark contrast to traditional doctor-patient or lawyer-client privileges.
During an appearance on Theo Von’s podcast, “This Past Weekend w/ Theo Von,” Altman addressed the evolving relationship between AI and the legal system. He pointed out that while individuals often share deeply personal information with ChatGPT, from relationship woes to life coaching inquiries, no legal framework yet exists to safeguard those interactions.
“People talk about the most personal sh** in their lives to ChatGPT,” Altman stated. “People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’” He emphasized that unlike human therapists, lawyers, or doctors, conversations with AI lack the legal privilege that ensures confidentiality.
This gap poses a significant privacy risk: should a legal dispute arise, OpenAI could be compelled to produce these user conversations. Altman expressed strong disapproval of the situation, arguing that AI interactions deserve the same privacy protections as professional human consultations.
The issue of user data privacy is not new for OpenAI. The company is currently appealing a court order in the lawsuit brought against it by The New York Times, which would require OpenAI to preserve chat data from hundreds of millions of ChatGPT users globally, excluding enterprise customers. OpenAI describes the demand as an “overreach,” concerned that it could set a precedent for further legal or law enforcement requests for user data.
The discussion around digital privacy has gained prominence, especially in light of recent legal shifts affecting personal freedoms. For instance, following the overturning of Roe v. Wade, many users migrated to more secure period-tracking apps or platforms like Apple Health, which offer enhanced data encryption.
Altman acknowledged the host’s own privacy concerns about using ChatGPT, agreeing that the current situation is “very screwed up” and expressing a desire for greater legal clarity. He believes that securing robust privacy protections is crucial for fostering broader user trust and adoption of advanced AI technologies.
This ongoing debate underscores the urgent need for a comprehensive policy framework that addresses the unique privacy implications of AI, ensuring that users can engage with these powerful tools without compromising their most personal data.