As AI spreads across the internet, millions of people, including many children, are turning to chatbots like ChatGPT for emotional support and therapy-like guidance. OpenAI CEO Sam Altman, however, recently raised a red flag about the lack of privacy protections for these deeply personal conversations.
In an interview on Theo Von's This Past Weekend podcast, Altman said the current legal system offers no established safeguards for AI interactions. Asked how AI fits within rules of legal confidentiality, he explained that users mistakenly assume their chats are private, even though no legal framework ensures that.
Altman pointed out that many people, particularly younger users, confide in ChatGPT about relationship struggles, mental health issues, and personal decisions. Yet unlike conversations with therapists, lawyers, or doctors, which are protected by confidentiality laws, AI chats do not yet fall under any such legal protection.
He warned that ChatGPT conversations could be subpoenaed in a lawsuit and presented as evidence in court. "That's seriously messed up," Altman commented, urging policymakers to close this privacy gap as a priority.
Adding to these concerns, ChatGPT conversations are not end-to-end encrypted the way messages on secure platforms such as WhatsApp or Signal are. OpenAI can access user data, and chats may be used internally to train models or detect misuse. And while OpenAI says deleted chats are normally purged within 30 days, some conversations are retained for legal or security purposes, especially during ongoing litigation such as the lawsuit brought by The New York Times.