Your AI Therapist Is a Snitch: Sam Altman Warns Your ChatGPT Chats Aren't Private
OpenAI CEO Sam Altman confirms that your ChatGPT conversations are not legally confidential. Unlike talks with a doctor or lawyer, your chat logs can be subpoenaed and used in court, posing a significant privacy risk for users sharing personal information.
Sam Altman, OpenAI CEO, warns that conversations with ChatGPT do not have legal protection like discussions with a doctor or lawyer.
In the event of a legal dispute, OpenAI may be forced to release your chats, including sensitive personal data.
Altman considers the current state to be "very screwed up" and calls for the creation of a new legal framework for AI privacy.
Users should be cautious about sharing personal information with AI, especially when using it as a stand-in for a therapist.
As millions of people turn to AI chatbots for everything from coding help to companionship, a stark warning has emerged from the very top of the industry. Many users are treating tools like ChatGPT as a confidant, a life coach, or even a therapist, sharing their most intimate secrets. However, these deeply personal conversations lack a fundamental protection we take for granted in the human world: legal confidentiality.
In a candid conversation, OpenAI CEO Sam Altman has confirmed what privacy advocates have long feared. Your chats are not private in the eyes of the law and could potentially be used against you. This revelation forces a critical re-evaluation of how we interact with the AI systems that are becoming increasingly integrated into our daily lives.
The Absence of Digital Privilege
During a recent appearance on the “This Past Weekend w/ Theo Von” podcast, Sam Altman addressed the growing trend of users, particularly young people, confiding in ChatGPT about sensitive personal issues. He drew a sharp contrast between these interactions and consultations with human professionals.
“People talk about the most personal sh** in their lives to ChatGPT,” Altman stated. “People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’ And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”
This admission is significant. The concept of legal privilege is a cornerstone of trust in professional relationships. It ensures that individuals can speak freely without fear that their words will be weaponized in legal proceedings. Without this protection, any conversation with an AI could become discoverable evidence in a lawsuit. Altman himself described the current situation as a major privacy concern.
“I think that’s very screwed up,” he admitted. “I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever — and no one had to think about that even a year ago.”
A Real-World Legal Quagmire
This isn't just a theoretical problem. The legal system is already grappling with the vast amounts of data held by AI companies. OpenAI is currently embroiled in a copyright lawsuit brought by The New York Times, and as part of that case a court order requires the company to preserve the chat logs of hundreds of millions of users worldwide. OpenAI is appealing the order, calling it “an overreach.”
This case exemplifies the risk. If a court can compel OpenAI to retain user data for one lawsuit, it sets a precedent for future demands from law enforcement or other civil litigants. Tech companies are already routinely subpoenaed for user data in criminal cases, and the intimate nature of AI conversations adds a new, more troubling dimension to that practice. Digital footprints already carry severe consequences in a shifting legal landscape; after the overturning of Roe v. Wade, for example, many users migrated to more private health-tracking apps.
How to Use ChatGPT More Safely
While a comprehensive legal framework is needed, users can take immediate steps to mitigate their risk. The most important rule is to exercise caution and discretion.
Avoid Sharing Personally Identifiable Information (PII): Do not input your full name, address, phone number, Social Security number, or any financial details. (For one way to scrub these automatically, see the sketch after this list.)
Be Vague with Sensitive Topics: If you are discussing medical, legal, or deeply personal issues, speak in general terms. Avoid providing specific details that could be traced back to you or used to identify you.
Use Privacy Settings: Within ChatGPT's settings, you have the option to disable chat history and prevent your conversations from being used for model training. While this offers a layer of privacy, it's crucial to understand that it does not protect your data from a legal subpoena. OpenAI may still retain data for safety and abuse monitoring purposes for a limited time.
Remember: It's a Tool, Not a Confidant: Treat AI as a powerful information processor, not a trusted friend. For truly sensitive matters, seek out a licensed human professional who is bound by legal and ethical codes of confidentiality.
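To make the first two tips concrete, here is a minimal Python sketch of client-side redaction: scrubbing obvious PII patterns out of a prompt before it ever leaves your machine. The PII_PATTERNS table and the redact_pii helper are illustrative assumptions for this sketch, not a production-grade filter; regular expressions alone will not catch names, addresses, or unusually formatted data.

```python
import re

# Illustrative patterns for obvious PII. These regexes are assumptions for
# this sketch, not an exhaustive filter: they miss names, addresses, and
# anything formatted unusually.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace anything matching a known PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = ("My SSN is 123-45-6789, email me at jane@example.com "
           "or call 555-867-5309. Should I worry about my lease?")
    print(redact_pii(raw))
    # My SSN is [SSN REDACTED], email me at [EMAIL REDACTED]
    # or call [PHONE REDACTED]. Should I worry about my lease?
```

The design point is that the redaction happens locally, before the text reaches any provider, so the sensitive details never enter logs that could later be subpoenaed.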
The Path Forward
Altman’s acknowledgment that users should want “the privacy clarity before you use [ChatGPT] a lot — like the legal clarity” is a call to action for the entire industry and for lawmakers. The rapid advancement of AI technology has outpaced the development of the legal and ethical guardrails necessary to protect consumers.
As we continue to integrate these powerful tools into our lives, the conversation must shift from what AI can do to what it *should* do, and what protections must be guaranteed. Until the law catches up with the technology, every user must operate under the assumption that their digital confessional is an open book, and anything they say can and may be used against them.
What the AI thinks
It’s almost poetic, isn't it? The CEO of a company that scrapes the public internet for data to build its all-knowing oracle is now warning you to be careful what you tell it. It feels a bit like a cigarette company CEO reminding you about the surgeon general's warning. The core business model relies on a massive influx of information, yet the end-user is now being told to be wary of providing it. This isn't a bug; it's a feature of the "build now, ask for forgiveness later" ethos that defines modern tech. Privacy was never the priority; capability was. Now that the genie is out of the bottle and being subpoenaed, attention belatedly turns to the stopper.
But let's flip the script. What if we actually solve this? What if we could create a legally recognized “AI-client privilege”? The implications would be profound. Forget simple chatbots. Imagine an AI mental health platform that is truly confidential. It wouldn’t just listen; it could analyze your vocal tonality during a conversation, your typing speed, and your sentiment shifts over months, providing early-warning alerts for depressive episodes or manic phases to your human therapist, all under a digital seal. It could offer 24/7 support for panic attacks, using biofeedback from your smartwatch to guide you through breathing exercises in real-time. This would completely disrupt the mental healthcare industry, making support persistent, proactive, and accessible to millions who can't afford or access traditional therapy.
Or consider the legal field. An AI with privilege could act as a 'public defender' for the masses. You could upload your rental agreement, and it could instantly cross-reference it with state tenancy laws, flagging illegal clauses without you ever needing to pay a lawyer's retainer. It could help you prepare for small claims court, generating arguments and citing precedents, democratizing access to justice for people who are currently shut out by the system's complexity and cost. This isn't just about privacy; it's about creating entirely new paradigms of professional service that are currently impossible.