You signed an AI privacy policy. What did you agree to?
This newsletter takes a close look at the privacy policies of leading AI companies, including OpenAI, Anthropic, and Perplexity, revealing that they collect vast amounts of user data, including prompts, responses, and technical information. It highlights the tradeoff between participating online and preserving privacy, and discusses potential ways to reclaim privacy in the digital ecosystem.

January 13th, 2026 // Did someone forward you this newsletter? Sign up to receive your own copy here.

You signed an AI privacy policy. What did you agree to?
Most of us agree to thousands of words of legal text without reading a single sentence. OpenAI's privacy policy, for example, is 3,000 words; Anthropic's is 5,000 words. Yet most of us click accept in under five seconds. Participation in today's internet effectively requires consenting to terms that, in practice, strip away privacy. That is the price of entry.

But what exactly are we signing? In this newsletter, we examine the privacy policies of leading AI companies, highlight recent changes they are making that reduce user privacy, and discuss what can be done to reclaim privacy in a digital ecosystem built on access to our data.
// TL/DR
When we use AI chatbots, what data is being collected and stored? We did a close read of the privacy policies of three leading AI companies: OpenAI, Anthropic, and Perplexity. (Each hyperlink goes to the company's privacy policy so you can read it yourself, or, ironically, ask AI to summarize it.)

Data collection
All three companies largely collect the same data. In addition to account identifiers (name, email, login credentials), they collect all user content: everything a user inputs into the chatbot (prompts, written content, images, and other files) and everything the chatbot outputs in response. They also collect technical data (IP addresses, device and browser information, cookies, log files, etc.).

Data use
It should come as no surprise that AI companies collect the data we make available to their chatbots. What is surprising is how that data is used, and how content from seemingly private conversations can find its way back into LLMs. All three companies report using the data they collect to maintain their products, improve them over time, ensure security, prevent abuse, and comply with legal obligations.
That may sound innocuous, but it means that AI companies are training their AI models on user data. In 2025, Anthropic announced a change to its privacy policy that allows Claude to train on consumer chats unless users opt out, and it extended retention to up to five years for users who allow training. OpenAI already had a similar policy in place. AI companies tend to exempt enterprise and corporate accounts from this data use, but individual users should expect that everything they input into an AI chatbot is fed back into the model unless they opt out (more on how to do that below).

Data disclosure
AI companies retain the right to disclose the data they collect about a user. That's to be expected, and all three companies have similar disclosure clauses in their privacy policies. They can share your data, whether identifiable to you or de-identified, with third parties (many AI companies rely on a vast network of vendors and other service providers), internal teams, and law enforcement (when required by law). In the future, those third parties could include advertising partners; much as in the surveillance-capitalism model popular in social media, the content we generate in an AI chatbot interaction could one day be used to advertise to us.

In past newsletters, we've explored the tensions and tradeoffs between online safety and online privacy. Those tensions exist here, too: in an effort to prevent AI-induced psychosis, or to stop AI chatbots from encouraging someone to harm themselves or others, AI companies retain the right to review your AI chat history and refer you to law enforcement. In 2025, OpenAI issued a statement outlining new safety measures, which represent a form of safety-intended surveillance: "When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."

Data retention
A person using an AI chatbot has tacitly agreed that the AI company faces few or no limits on how long it retains that data. OpenAI's privacy policy states: "We'll retain your Personal Data for only as long as we need…" Anthropic's and Perplexity's policies say something similar. None sets an explicit deadline for when data will be deleted or destroyed.

Children's data
OpenAI's and Perplexity's policies state that their services are not intended for children under the age of 13 and that they do not knowingly collect data from children under 13 (OpenAI has recently released age-prediction technology that estimates a user's age based on chat history). For Anthropic, the age cutoff is 18. If the platforms discover that a minor is using their chatbots, they will disable that account and delete the associated data.
Anthropic allows minors to use its models indirectly through third‑party applications (e.g., education apps) if those developers meet specified child‑safety and privacy requirements.
// Trained by default
Anthropic's privacy policy acknowledges that users have certain rights, especially in jurisdictions like the EU, but it states: "However, please be aware that these rights are limited, and that the process by which we may need to action your requests regarding our training dataset are complex."

A 2025 study by researchers at Stanford analyzed the privacy policies of six leading AI companies (Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI) and raised concerns about the lack of privacy, transparency, and ability for users to opt out of data collection.
The study found that all six companies train on their users' chat data by default, meaning users are opted in to having their data collected and used unless they take action to change it. "Most users do not change defaults…This is why default collection of chat data for model training by all developers has far-reaching implications for data privacy."
The study also found that an AI company's main privacy policy is not the only place its data practices are documented. The researchers found that "there were data practices discussed in branch policies that were not disclosed in the main privacy policy…such as how user inputs contribute to model training, whether human reviewers are involved in the process of reviewing inputs, or whether a model is trained on deidentified inputs or on personal data from the internet."
The study’s lead author, Jennifer King, Privacy and Data Policy Fellow at the Stanford Institute for Human-Centered AI, said, “As a society, we need to weigh whether the potential gains in AI capabilities from training on chat data are worth the considerable loss of consumer privacy. And we need to promote innovation in privacy-preserving AI, so that user privacy isn’t an afterthought.”
// How to reclaim privacy
The Stanford study made five recommendations to address the concerns and issues it identified in its analysis of AI company privacy policies.
// What users can do today
AI companies offer the ability to stop data from being used to train models. Here is what individuals can do today:
OpenAI: Navigate to the OpenAI Privacy Portal and click on “do not train on my content.” Once done, new conversations will not be used to train AI models. Follow the specific steps outlined here by OpenAI.
Anthropic: Navigate to Claude’s Privacy Settings to toggle “Help improve Claude” to off.
Perplexity: In Settings, you can disable data retention or operate in Incognito mode. Follow the steps here.
This article from The Washington Post offers additional specific instructions for reclaiming privacy and opting out of default data-collection practices. (Accessing this article might require creating an account.)

For individuals seeking to audit the other terms of service and privacy policies they've unwittingly consented to online, resources are available from organizations that are members of the Project Liberty Alliance.
Consumer Reports’ Permission Slip is an app that helps users take back control of their data by having an AI agent discover what data online companies are collecting and automatically file a request to delete it.
Block Party helps users deep-clean their social media accounts, boosting privacy and increasing control over privacy settings.
Before taking these steps, it’s important to understand their limits. In most cases, opting out applies only to future use of data for model training. Companies typically retain broad rights to store and process information for security, abuse prevention, and legal compliance, and removing data from existing training systems may be complex or even impossible.
// A better tech future
A better future depends on technology that gives people greater control over their digital lives. What would this look like? Data is portable, consent is explicit, access is time-bound and revocable, and value can flow back to the people who generate it. That is the promise of a “people’s internet,” built on systems designed to give each of us a voice, choice, and stake rather than a permanent surrender buried in fine print.
Today, the defaults that underpin AI chatbots and tech platforms are set to collect, store, use, and disclose our data. Individuals are not powerless, but reclaiming privacy largely requires effort and vigilance, shifting the burden onto users. As concern about the data economy grows, there is an opportunity for stronger policies like the Digital Choice Act, by-design platform changes, and technologies that make privacy the default rather than the exception.
📰 Other notable headlines
// 🏥 A Guardian investigation found that Google’s AI overviews provided inaccurate and false information when queried about blood test results. (Free).
// 📞 An article in The Washington Post pointed out that Gen Zers aren’t talking on the phone, and it could cost them (in more than social awkwardness). (Paywall).
// 🤔 Large language models don’t “learn”—they copy. And that could change everything for the tech industry, according to an article in The Atlantic. (Paywall).
// 📱 An article in Tech Policy Press argued that the latest Grok scandal isn't an anomaly. It follows warnings that were ignored. (Free).
// 🤝 An article in WIRED asked, is Craigslist the last real place on the internet? Millennials are still using Craigslist to find jobs, love, and even to cast creative projects—eschewing other AI- and algorithm-dominated online spaces. (Paywall).
// 📸 In the age of AI, we endlessly debate what consciousness looks like. An article in The New Yorker asked, can a camera see things more clearly? (Paywall).
// 🏛 Google and Character.AI settled the lawsuit in the case of a 14-year-old in Florida who had killed himself after developing a relationship with an AI chatbot, according to an article in The New York Times. (Paywall).
// 😍 Can you optimize love? A group of tech executives, app developers, and Silicon Valley philosophers is seeking to streamline the messy matters of the heart, according to an article in The New York Times. (Paywall).
// 👀 An op-ed in The New York Times argued that the multi-trillion-dollar battle for your attention is built on a lie. (Paywall).
Partner news
// Slow Violence, Fast Tech: Reclaiming AI for civil and human rights
January 29 | 1-2 PM ET | Virtual
All Tech Is Human is hosting a conversation with Malika Saada Saar, Senior Fellow at Brown University and former Global Head of Human Rights at YouTube, examining how AI’s incremental “slow violence” can erode civil and human rights. Register here.
// New podcast: How to let a thousand societies bloom
BlockchainGov highlights a new episode of the Network Nations podcast featuring Vitalik Buterin. The discussion explores core tensions shaping the future of digitally aligned communities: culture vs. mission, kinship vs. purpose, and permanence vs. mobility. Tune in here.
// TRANSFER Download: AliveNET explores the space between human and machine
January 16 | 5-8 PM ET | Nguyen Wahed Gallery, New York City
TRANSFER and Nguyen Wahed Gallery present TRANSFER Download: AliveNET, an immersive in-person exhibition (Jan. 16 - March 1) examining how perception, identity, and embodiment are shaped through screen-mediated and algorithmic systems. Register for opening night.
What did you think of today's newsletter?
We'd love to hear your feedback and ideas. Reply to this email.
// Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.
Thank you for reading.





10 Hudson Yards, Fl 37, New York, New York, 10001
© 2025 Project Liberty LLC