
Anthropic’s Claude AI Updates – Impact on Privacy & Confidentiality
Anthropic will update the Consumer Terms of Service and Privacy Policy of the popular Claude AI model on 28 September. After this update, businesses worldwide will discover a fundamental shift in how their data gets handled when using Claude AI. The changes to these terms dramatically amend Claude’s data training consent mechanisms. Think about how important this is for the privacy and confidentiality of your data when using Claude AI. This is why we wrote this article, “Anthropic’s Claude AI Updates – Impact on Privacy & Confidentiality”.
In short, from 28 September, Claude AI will train on all data, except data from business accounts. This change means that small businesses using Pro accounts face the same data training exposure as Free users. The biggest question: do companies realize that they are now training Claude AI with their data?
The critical question “does Claude train on your data” now depends entirely on which account type you use. Most importantly: you can opt out of this, but have you done so? Let us also explain how the Terms of Service and Privacy Policy documents work together. They establish legal frameworks that are not clear to most business leaders.

Executive Summary: The TLDR of Claude’s Privacy Shift
Anthropic’s Sept. ’25 update changed data handling for most users. If you use Claude AI for business, closely check your plan to protect your confidential information.
- Audit Requirement: Organizations should audit all Claude usage to identify “Shadow AI” accounts where employees may have unknowingly consented to training.
- The “Pro” Trap: Claude Pro and Team are classified as Consumer accounts. By default, these now train on your data unless you manually opt out.
- 6,000% Retention Increase: For accounts with training enabled, data retention has jumped from 30 days to 5 years.
- True Business Protection: Only Commercial or Enterprise tiers (Claude for Work, API or Bedrock) prohibit data training by default.
- Manual Opt-Out: Free, Pro and Team users must change “Help improve Claude” to OFF in Privacy Settings to prevent data usage for training.
What We Will Cover
- Explanation of Anthropic’s Claude AI Terms – how do they work?
- The critical distinction between consumer and business accounts that determines whether Claude trains on your data
- Why the new 5-year data retention period represents a 6,000% increase over the previous policy, and why training is now enabled by default
- Practical steps small and medium businesses must take to protect confidential information under the new framework
- Contract negotiation strategies for organizations dealing with AI vendors implementing similar policy changes
- Essential compliance considerations for regulated industries handling sensitive data through AI platforms
Claude AI Terms Explained: What You Need to Know
Anthropic’s terms and policies use a lot of specific terms, and they are not there by accident. Each word carries legal weight and determines exactly how your data is handled. Understanding them is not a technical exercise; it is a practical necessity if you want to know what you are actually agreeing to when you use Claude.
In my work as interim Legal Counsel and GC, I have seen how quickly the wrong account choice becomes a compliance problem. The terms below will help you follow the rest of this article. More importantly, they will help you make better decisions when working with Claude day to day.
The Claude Privacy & Contract Terminology List
- Consumer Terms of Service: The primary contract for Free, Pro, and Team accounts. This document gives Anthropic the legal right to use your conversations to train their AI models by default.
- Commercial Terms of Service: The business-grade contract used for Claude for Work, Enterprise, and API access. These terms explicitly prohibit data training on your inputs.
- Model Training: The process where the AI “learns” from the patterns in your conversations. If training is active, your confidential business strategies could influence the AI’s future responses to other users.
- Data Training Opt-Out: A specific privacy setting for Consumer accounts. It allows you to manually stop the AI from learning from your data while staying on a lower-tier plan.
- 5-Year Data Retention: The period Anthropic now stores conversation data in training-enabled accounts. This is a massive jump from the previous 30-day policy.
- Shadow AI: This happens when employees use personal Claude accounts for company work. This often leads to sensitive corporate data being governed by weaker consumer rules instead of strict business terms.
Understanding the Claude AI Privacy Policy and Terms
The Complete Document Ecosystem
Anthropic’s September 2025 changes to its terms of service affected multiple interconnected documents. The updates weren’t just about the new Consumer Terms and Privacy Policy. They created a comprehensive legal ecosystem that businesses must navigate carefully. See below our detailed explanation of the contract setup.
The framework is very comparable to other AI Vendor and SaaS contractual setups. It includes primary contracts, data policies and usage guidelines. Each document serves a specific purpose. Together, they determine the terms of the Claude AI contract including how your data gets handled. Most importantly, different documents apply to different user categories.
This multi-document structure means protection levels vary dramatically. Consumer and business users operate under entirely different rules. Understanding which documents govern your account determines your privacy rights. Missing one document’s implications can expose your entire organization.
What is the difference between Consumer vs Business Use?
Consumer Terms Explained
How do Anthropic Terms of Service work? Consumer users operate under the Consumer Terms of Service. This primary contract establishes the relationship with Anthropic. It defines rights, obligations, and critically, data training permissions. The Consumer Terms apply to Free, Pro, and Team accounts.
The Privacy Policy explains how Anthropic handles data. It details collection methods, usage purposes, and retention periods. Moreover, it contains the crucial opt-out possibility for training. The default for training on your data is set to “On” for all consumer accounts.
The Usage Policy sets acceptable use boundaries. It prohibits harmful content and illegal activities. Violations can trigger human review of conversations. Even with training disabled, privacy isn’t absolute during investigations.
Business Framework Components
Business users receive Commercial or Enterprise Terms of Service (the Commercial Terms of Service) instead. These terms explicitly prohibit data training without exception. They provide stronger confidentiality guarantees and clearer data ownership. Business terms apply to Claude for Work, Enterprise, and API access.
The same Privacy Policy applies but functions differently. Business accounts can’t enable training even if desired. Data retention stays minimal regardless of settings. The Usage Policy remains identical but enforcement differs.
Business users often receive additional documents. Data Processing Agreements provide GDPR compliance. Service Level Agreements guarantee uptime and support. These extra protections justify higher pricing tiers.
The Critical Account Classification Problem
Why “Pro” Doesn’t Mean Professional
Claude Pro costs $20 monthly but remains a consumer account. The name suggests business-grade protection that doesn’t exist. Similarly, Team accounts at $30 monthly sound enterprise-ready. They’re actually consumer tier with training enabled by default.
This naming confusion creates massive risks. A 50-person law firm using Team accounts seems protected. In reality, their client communications train AI models. Meanwhile, a solo consultant with API access has better protection.
An alarmingly high number of small businesses unknowingly accept and operate under consumer AI terms. They assume paid accounts mean business protection. This assumption potentially exposes confidential data to AI model training.
The Real Business Account Options
True business protection at Claude requires specific account types:
- Claude for Work (custom pricing)
- Claude Enterprise (negotiated contracts)
- Claude API with Commercial Terms
- Claude via Amazon Bedrock
- Claude for Government/Education
These accounts operate under Commercial Terms of Service. Under these terms, data never enters training pipelines regardless of settings. Retention periods remain minimal by default. Additional compliance documents provide extra protection.
The September 28 Deadline’s Lasting Impact
The mandatory September 28, 2025 deadline forced immediate decisions. Users had to accept the new terms or lose access entirely. The only alternative was to immediately upgrade to a business account – which is not an easy process. As far as we know, this did not cause widespread alarm among businesses. Many accepted without understanding the implications.
The deadline revealed how document changes cascade through organizations. Individual employees accepted terms independently. They unknowingly bound their organizations to training consent. Corporate data entered pipelines without authorization or oversight.
How Claude’s Privacy Policy and Terms of Service Create a Complex Legal Framework
Understanding Claude’s Document Structure
Claude operates through a multi-document legal framework that differs for consumers and businesses. Here’s how the framework behind the Anthropic terms of service works:
For Consumers:
- Consumer Terms of Service (primary contract)
- Privacy Policy (data handling rules)
- Usage Policy (acceptable use guidelines)
- Supporting documents referenced within terms

For Businesses:
- Commercial/Enterprise Terms of Service (primary contract)
- Privacy Policy (data handling rules)
- Usage Policy (acceptable use guidelines)
- Data Processing Agreements (where applicable)
- Supporting documents referenced within terms
This structure creates different protection levels. Consumer Terms allow data training by default. However, Commercial Terms prohibit it entirely. The Privacy Policy applies to both groups but operates differently based on which Terms govern the account.
Consumer Terms of Service: Where Training Rights Begin
The Consumer Terms of Service form the primary contract for Free, Pro, and Team accounts. These Anthropic terms of service establish Anthropic’s legal right to use data for training. Furthermore, they require mandatory acceptance by specific deadlines.
The Terms incorporate other documents by reference. This means accepting the Terms automatically binds users to the Privacy Policy and Usage Policy. Additionally, the Terms define which account types fall under consumer versus business categories.
Most critically, the Consumer Terms grant Anthropic permission to retain and use data. They establish the legal foundation for the 5-year retention period. However, they defer implementation details to the Privacy Policy.

Privacy Policy: How Your Data Gets Used
The Privacy Policy explains exactly how Anthropic collects, uses, stores, and shares data. It contains the actual consent mechanisms users must navigate. Moreover, it introduces the critical “Help improve Claude” toggle.
This toggle lives in Privacy Settings (accessible only when logged in to an account) and controls training consent. When enabled, it allows:
- Model training on conversations
- Safety system improvements
- Product development uses
- 5-year data retention
When disabled:
- Data retention drops to 30 days
- Usage is limited to service delivery only
- No model training is permitted
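To make the two states of the toggle concrete, here is an illustrative sketch (this models the article’s description, not Anthropic’s actual systems; the type and function names are our own):

```python
# Illustrative model of how the "Help improve Claude" toggle changes data
# handling on a consumer account, per the article: 5-year retention with
# training when ON (the default), 30 days and no training when OFF.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataHandling:
    retention_days: int
    training_allowed: bool

def consumer_data_handling(help_improve_claude: bool) -> DataHandling:
    """Return the effective data handling for a consumer account."""
    if help_improve_claude:  # toggle ON (the default for consumer accounts)
        return DataHandling(retention_days=5 * 365, training_allowed=True)
    return DataHandling(retention_days=30, training_allowed=False)

default = consumer_data_handling(True)
opted_out = consumer_data_handling(False)
# The default setting retains data roughly 60x longer and permits training.
```

The point of the sketch is simply that one switch flips both retention and training consent at once, which is why missing it has outsized consequences.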
The Privacy Policy also explains data sharing with third parties. It details security measures and user rights. Furthermore, it specifies how different account types receive different treatment.

Usage Policy: The Forgotten Third Document
The Usage Policy sets boundaries on acceptable platform use. It prohibits harmful content, illegal activities, and terms violations. Moreover, it affects data handling indirectly.
Violations of the Usage Policy can trigger account reviews. These reviews might involve human examination of conversations. Therefore, even with training disabled, privacy isn’t absolute when policy violations occur.
The Two-Step Consent Trap
Step 1: Accept Terms or Lose Access
Users must accept updated Consumer Terms by deadline dates. Refusing means losing Claude access entirely unless you upgrade to a business account. This creates pressure to accept without careful review.
Step 2: Find and Change Privacy Settings
After accepting Terms, users must locate Privacy Settings separately. The training toggle defaults to “On” for new users. Many miss this second step entirely.
This two-step process disadvantages small businesses. They often lack legal resources to understand both documents. Consequently, they inadvertently consent to training despite privacy concerns.
This is a pattern seen across AI vendors and SaaS companies. Companies use complex document structures to maximize consent rates. Technical compliance exists while practical protection remains minimal.
Critical Claude Privacy Policy Changes Small Businesses Must Navigate
The 5-Year Data Retention
The retention period jumped from 30 days to 5 years for training-enabled accounts. This 6,000% increase affects all consumer accounts by default. Your conversations today could train AI models in 2030.
This change might create immediate problems for professional services. Law firms’ client strategies become training data. Consultants’ competitive insights feed future models. Healthcare providers risk HIPAA violations through extended retention. It is therefore very important to realize under which plan you are using Claude AI.
The retention period exceeds most document destruction policies. Companies typically delete sensitive data after 2-3 years. However, Claude keeps it for five. This conflict potentially creates compliance nightmares for regulated industries.
Why Small Firms Face Disproportionate Claude AI Privacy Risks
Limited Resources Create Vulnerabilities
Small businesses lack dedicated privacy teams. They can’t analyze complex policy changes effectively. Moreover, informal IT governance makes tracking AI usage nearly impossible.
The “Pro” account name itself creates a false sense of security about protection levels.
The Professional Account Naming Trap
Claude Pro sounds professional but isn’t. It remains a consumer account with training enabled. Small firms assume “Pro” means business-grade protection. This assumption exposes confidential data to AI training.
Team accounts create similar confusion. They cost $30 per user monthly. Yet they receive consumer privacy treatment. Only Claude Enterprise or API access provides true business protection.
Shadow IT and Compliance Nightmares
The September 28 deadline revealed widespread shadow AI usage. Employees had signed up independently for Claude accounts. They accepted new terms without corporate oversight. Sensitive data potentially entered training pipelines without authorization.
Professional services face particular challenges here. Individual practitioners maintain significant autonomy in tool selection. A tax attorney might use personal Claude Pro for research. Client tax strategies then train future models without anyone realizing.
Healthcare consultants create similar risks. They might process patient information through consumer accounts. HIPAA violations occur despite believing they have professional protection. These violations carry penalties up to $2 million per incident.
Audit Requirements After Policy Changes
It is essential that organizations audit all AI tool usage immediately. Document every Claude account across all departments. Identify which accounts are consumer versus business tier. Check whether training toggles are properly configured.
This audit often reveals surprising results. Companies discover dozens of unknown accounts. Shadow IT usage exceeds official deployments significantly. Sensitive data has been processed through consumer tiers for months.
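The audit steps above can be sketched as a minimal account registry. Everything here (account names, field names, the tier labels) is hypothetical and for illustration only:

```python
# Sketch of a minimal audit registry for Claude accounts, following the
# audit steps above: inventory each account, record its tier, and flag
# consumer-tier accounts where the training toggle is still ON.
from dataclasses import dataclass

CONSUMER_TIERS = {"Free", "Pro", "Team"}           # train on data by default
BUSINESS_TIERS = {"Work", "Enterprise", "API", "Bedrock"}

@dataclass
class ClaudeAccount:
    owner: str
    tier: str                   # e.g. "Pro", "Enterprise"
    training_toggle_on: bool    # "Help improve Claude" setting

def at_risk(account: ClaudeAccount) -> bool:
    """A consumer account with training enabled exposes data to training."""
    return account.tier in CONSUMER_TIERS and account.training_toggle_on

# Hypothetical inventory discovered during an audit
inventory = [
    ClaudeAccount("j.smith", "Pro", training_toggle_on=True),
    ClaudeAccount("legal-team", "Enterprise", training_toggle_on=False),
    ClaudeAccount("m.jones", "Team", training_toggle_on=False),
]

flagged = [a.owner for a in inventory if at_risk(a)]
```

In practice the inventory would come from expense reports, SSO logs, or an employee survey rather than a hand-written list, but the flagging logic is the same.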
Practical Steps for Protecting Your Organization Under New Terms
Immediate Actions Every Business Must Take
Start with a comprehensive audit of all Claude usage. Document every account type and billing structure. Map usage patterns across departments. This audit reveals your actual exposure level.
Navigate to Claude’s Privacy Settings for all consumer accounts. Ensure the “Help improve Claude” toggle is OFF. This prevents future data from entering training pipelines. However, it doesn’t affect previously submitted information.
Implement strict data classification policies next. Define what information can use different account tiers:
- Public information: Consumer accounts acceptable
- Internal data: Enhanced monitoring required
- Client confidential: Business accounts only
- Regulated data: Enterprise accounts mandatory
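The four-tier classification above can be encoded as a simple policy table. The rank values and dictionary keys are our own illustrative assumptions, not an official scheme:

```python
# Sketch of the data classification policy above: each classification maps
# to the minimum account tier allowed to process it.
TIER_RANK = {"consumer": 0, "business": 1, "enterprise": 2}

MINIMUM_TIER = {
    "public": "consumer",
    "internal": "consumer",          # acceptable, but with enhanced monitoring
    "client_confidential": "business",
    "regulated": "enterprise",
}

def tier_permitted(classification: str, account_tier: str) -> bool:
    """True if data of this classification may go through this account tier."""
    required = MINIMUM_TIER[classification]
    return TIER_RANK[account_tier] >= TIER_RANK[required]

# A consumer account (e.g. Pro) must never receive client-confidential data:
assert not tier_permitted("client_confidential", "consumer")
assert tier_permitted("regulated", "enterprise")
```

Writing the policy down in machine-checkable form makes it easy to enforce the same rules in internal tooling and in training materials.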
Establish monitoring systems to detect policy violations. Flag attempts to input sensitive information into consumer accounts. Regular training helps employees understand why distinctions matter. Their choices directly affect organizational risk.
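One lightweight way to implement the monitoring described above is a pre-submission check that flags obviously sensitive text before it reaches a consumer-tier account. The patterns below are illustrative examples only, not a complete detection rule set:

```python
# Minimal pre-submission check: flag prompts containing sensitive markers
# (privilege language, SSN-like numbers, email addresses) before they are
# sent through a consumer-tier account. Patterns are illustrative.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b(privileged|attorney[- ]client|confidential)\b", re.I),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # US SSN-like pattern
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
]

def flag_sensitive(prompt: str) -> bool:
    """Return True if the prompt should be blocked from consumer accounts."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)
```

Real deployments would pair such pattern checks with a proper DLP tool; the value here is that the check runs before data leaves the organization, not after.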

Contract Strategies for AI Vendor Negotiations
Essential Provisions to Demand
The Claude privacy policy changes teach valuable negotiation lessons. Demand explicit provisions prohibiting model training on customer data. Require data segregation between consumer and enterprise services. Include audit rights to verify compliance.
Hogan Lovells’ AI Contract Framework recommends specific clauses. Add termination rights triggered by adverse privacy changes. Negotiate graduated pricing for mixed account usage. Request transparency reports on data handling practices.
Protecting Against Future Policy Changes
Include change notification requirements with 90-day advance notice. Specify that material adverse changes permit immediate termination. Require grandfathering of existing terms for contract duration. These provisions protect against surprise modifications.
Small businesses need particular protection here. They can’t afford sudden enterprise pricing requirements. Graduated transition periods allow budget planning. Group purchasing through associations reduces individual costs.
Building Competitive Advantage Through Privacy Leadership
Transform Claude AI data privacy compliance into market differentiation. Law firms advertise enterprise AI tool usage in pitches. Consultancies include AI governance descriptions in proposals. This transparency builds trust and justifies premium pricing.
Develop “AI Privacy Pledges” for client communications. Guarantee that client data never enters training datasets. Promise privacy assessments before adopting new AI tools. Commit to immediate notification of policy changes affecting client information.
Professional service providers report significant benefits. They win 35% more proposals when demonstrating privacy leadership. Clients increasingly recognize risks with providers using consumer AI. Privacy commitment becomes a selling point rather than a burden.
Creating Internal AI Governance Frameworks
Establish an AI governance committee with cross-functional representation. Include legal, IT, compliance, and business unit leaders. Meet monthly to review AI tool usage and policy changes.
Document all AI tools in a central registry. Track account types, usage purposes, and data classifications. Review quarterly for compliance and optimization opportunities. This systematic approach prevents shadow IT growth.
Benefits of Proper Claude AI Privacy Management for Small Businesses
Business Impact: Trust, Efficiency, and Growth
Small businesses implementing proper Claude AI data privacy controls experience immediate competitive advantages. Clients increasingly ask about AI tool usage and data protection measures during vendor selection. Moreover, firms demonstrating sophisticated understanding of consumer versus business account distinctions win more contracts. Professional service providers report 35% higher close rates when they can guarantee client data won’t train AI models.
Operational efficiency improves once teams understand appropriate use cases for different account types. Furthermore, clear policies eliminate confusion about which information can be processed through which systems. Small law firms using properly configured Claude accounts report 40% time savings on research and drafting while maintaining complete confidentiality. Additionally, the peace of mind from proper protection allows teams to fully leverage AI capabilities without constant concern about data exposure.
Legal Impact: Compliance, Insurance, and Risk Management
Proper Claude privacy management dramatically reduces legal exposure for small businesses. Professional liability insurers increasingly offer premium discounts for firms demonstrating AI governance maturity. Moreover, documented policies showing the distinction between consumer and business account usage satisfy regulatory auditors and client security assessments.
Small businesses avoiding Claude-related data incidents save average remediation costs of $185,000—potentially company-ending amounts for smaller firms. Furthermore, maintaining proper data protection prevents relationship damage that occurs when clients discover their information trained AI models. The reputational impact of a single privacy incident can destroy decades of trust building, particularly for professional service providers whose entire value proposition centers on confidentiality.
Frequently Asked Questions (FAQ) on Claude AI Privacy
Q: Does Claude train on your data?
It depends entirely on your account type and the Anthropic terms of service that are applicable. By default, Claude Free, Pro, and Team plans (Consumer accounts) do train on your data unless you manually opt out in settings. Claude Enterprise and API accounts (Commercial accounts) never train on your data.
Q: What is the benefit of the Claude Team plan for data privacy?
Despite the name, the Claude Team plan operates under Consumer Terms. While it offers collaboration features, it still enables data training by default. To secure professional-grade privacy, you must either manually opt out in the settings or upgrade to the Claude API or Enterprise tiers.
Q: How do I opt out of Claude AI data training?
To opt out, navigate to your Account Settings, select Privacy, and toggle the “Help improve Claude” button to OFF. This ensures your future conversations are not used to train Anthropic’s models, though it does not automatically remove data that was already processed.
Q: Is Claude safe for law firms and regulated industries?
Claude is safe only when used under Commercial Terms of Service (API or Enterprise). Using Consumer accounts for client-confidential work risks your data being stored for five years and used for training, which could violate professional secrecy or GDPR requirements.
Q: What happens to my data if I use a personal Claude Pro account for work?
Your data is treated as consumer data. This means it can be used to train the model, it is subject to a 5-year retention period, and it is governed by a framework designed for individuals, not the strict confidentiality needed by businesses.
Key Takeaways
- Both Terms of Use and Privacy Policy changed simultaneously, creating a dual-document framework where Terms provide contractual authority while Privacy Policy contains actual consent mechanisms
- The 30-day to 5-year retention increase affects ALL consumer accounts (Free, Pro, Team) with training enabled by default—only true business accounts maintain automatic protection
- Small businesses face greater exposure than enterprises because “Pro” and “Team” accounts sound professional but receive consumer-grade privacy treatment
- Shadow IT discoveries revealed widespread unauthorized AI usage, with employees accepting new terms independently and potentially exposing corporate data
- Immediate action required: Audit all accounts, disable training in privacy settings, implement data classification policies and negotiate protective provisions with AI vendors
Taking Control of Your AI Contract and Data Privacy Strategy
The modifications described above demonstrate how quickly privacy frameworks can shift and why organizations need proactive strategies rather than reactive responses. Therefore, businesses must treat AI privacy as a strategic priority requiring the same attention as cybersecurity or regulatory compliance.
AMST Legal specializes in navigating these complex AI contractual landscapes. We help organizations understand not just what policies say but what they mean for practical operations. We’ve guided numerous businesses through the maze of AI and SaaS Contracts, identifying risks others miss and negotiating protections that actually matter. Additionally, our expertise spans from emergency audits following policy changes to comprehensive AI governance framework development.
Whether you need immediate assistance assessing your Claude account exposure, strategic guidance negotiating with AI vendors, or ongoing support managing evolving privacy requirements, AMST Legal provides flexible engagement models tailored to your needs. Contact us at amstlegal.com to discuss how we can protect your organization’s interests while enabling responsible AI adoption.
To read more on this topic, here is the Ultimate Guide how ChatGPT, Perplexity and Claude use Your Data.
Here are other articles on this topic: “Anthropic will start training its AI on your chats unless you opt out. Here’s how”, “Anthropic will start training its AI models on chat transcripts”, and “Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out”.
About AMST Legal
At AMST Legal, we specialize in helping businesses navigate the world of AI and SaaS contracts. We move beyond simple legal reviews to provide strategic advice on data privacy, sales & vendor negotiations and internal governance frameworks. Whether you need an emergency audit following a policy change, help securing enterprise-grade protections from AI vendors, or a fractional General Counsel to oversee your legal operations, we provide the expertise needed to enable responsible AI adoption. Contact us at info@amstlegal.com or book a meeting here to ensure your organization’s data remains a private commercial asset, not a public training set.
Author: Robby Reggers, Founder of AMST Legal (amstlegal.com), recognized by Legal Geek as a LinkedIn Top Voice for contracting, negotiation, and interim GC work. AMST Legal supports clients per contract/project or on an interim basis (set hours/week).

Ultimate Guide how ChatGPT, Perplexity and Claude use Your Data
AI data use is at an all-time high, and data privacy and confidentiality are incredibly important when using AI models. Before you add text or documents to AI models, read below how and when these platforms might use your content. We also list what content to avoid adding to the models and why AI policies can be incredibly helpful.
Recently, I had a discussion with a lawyer who shared full client documents, including detailed confidential information, with an AI tool. He was completely unaware that the data we feed into AI systems can be used by the model. That conversation made me realize something important: Not everyone knows how large language models (LLMs) like ChatGPT, Claude and Perplexity actually handle the content we provide.
This is why I wrote this article, “Ultimate Guide how ChatGPT, Perplexity and Claude use Your Data”, showing when ChatGPT, Perplexity and Claude use (or do not use) your data.
What we will cover:
- Why businesses, law firms, and other organizations need comprehensive AI policies
- How free and paid versions of ChatGPT differ
- How Claude and Perplexity compare in terms of data handling
- Why you should never share private or proprietary content
The below is an ongoing research project comparing AI models. Please do not treat it as legal advice. Always verify current policies to ensure compliance with your (organization’s) requirements – carefully review the relevant documentation to understand if and how each provider might use your input to refine their algorithms.
2. What Are AI Models Learning From? Our Data!
It’s common knowledge that AI models, including ChatGPT, Claude, and Perplexity, are initially trained on large datasets (e.g., internet text, books, articles). However, many people don’t realize that these models can also learn from the additional content we submit. This includes, for example, when we ask them to:
- Analyze or summarize data (e.g., uploading sections of a contract)
- Write or edit an article (e.g., pasting confidential notes)
- Generate ideas or code (e.g., providing business-critical snippets)
Whenever you input text into these systems, it may be used – depending on the model and the plan you’re on – to further refine or train the AI. In this article, we’ll dig deeper into how various well-known models handle (or don’t handle) your content, so you can make more informed decisions when you’re working with sensitive data.
3. ChatGPT: Paid vs. Non-Paid Tiers
ChatGPT is currently the most widely used AI model, focused on chatbots and generative AI. It is important to understand the difference between OpenAI’s paid and free tiers when it comes to data privacy and confidentiality, and to answer the question whether OpenAI uses your data to train its model.
ChatGPT (Free)
- Data Usage: Under the free model, OpenAI may use the content you provide to further train or refine the model. This means if you share potentially sensitive information, it could (in theory) be included in the AI’s training data. See the Terms of Use for OpenAI (Free) on this point – the relevant part on whether it may use your Input is shown in the image below.
- Key Takeaway: Be mindful of what you enter into the model. If it’s something you wouldn’t want to become part of the broader AI model or if it’s private, proprietary or personal, do not include it in your prompts. Also remember that it can be against the law to add certain content.

ChatGPT (Paid: Enterprise, API, and Other Business Solutions)
- Greater Privacy: According to OpenAI’s Enterprise Terms of Use, customer content is not used to develop or improve the service. See the Business Terms for OpenAI (Paid) on this point – the relevant part on whether it may use your data is shown in the image below.
- Stronger Security: Enterprise clients typically get enhanced security measures, better data handling policies, and the option to opt out of data usage altogether.
- Who Should Consider This: Those handling confidential documents, proprietary business processes, or sensitive client data. Lawyers, for instance, might prefer the enterprise offering if they frequently need to process large volumes of privileged information. However, we should note that it is not certain that this Input & Content is secure. Until further notice, we would still advise everyone – especially lawyers, doctors, government employees and others who have access to sensitive information – not to add such information to AI models.
- Opt-out: If you do not want your data contributing to AI model improvements, go to the following link to opt-out: https://lnkd.in/dVPcMfH8

4. Comparing Other AI Models: Claude and Perplexity
Next to ChatGPT there are many other AI models in use. We will cover Gemini next time; in this article, we go into detail on Claude and Perplexity.
Claude (by Anthropic)
- Focus on Safety: Claude is well-known for its emphasis on AI safety and ethical guidelines. However, its terms regarding data usage can still allow the model to analyze or store user inputs for system improvement unless otherwise specified. See the Consumer Terms of Service of Claude on this point – the relevant part on whether it may use your data is shown in the image below.
- Paid Services: Anthropic offers enterprise solutions as well, which come with a separate set of Commercial Terms containing stricter confidentiality and privacy protections. These Commercial Terms of Service also include the following text: “Anthropic may not train models on Customer Content from paid Services“. Always check the most recent Terms of Service for precise details.

Perplexity
- Research-Oriented: Perplexity is built around providing concise, sourced answers; it has been described to us as the “Google” way of searching among AI models. It is sometimes said that Perplexity does not store or use data in quite the same way as ChatGPT. However, its Terms of Service indicate that Perplexity will use your Content – amongst others – for training purposes. See the relevant part of Art. 6.4(b): “Accordingly, by using the Service and uploading Your Content, you grant us a license to access, use, host, cache, store, reproduce, transmit, display, publish, distribute, and modify Your Content to operate, improve, promote and provide the Services, including to reproduce, transmit, display, publish and distribute Output based on your Input.“ The image below shows the relevant clause on whether Perplexity may use your data.
- Enterprise Terms of Service: See these specific terms here. Enterprise customers do provide a license to Perplexity regarding their content. However, the following important wording is added: “Notwithstanding the foregoing, Perplexity does not and will not use Customer Content to train, retrain or improve Perplexity’s foundation models that generate Output.”

5. What Not to Upload to AI Tools
Whether you’re using ChatGPT, Claude, Perplexity, or any other AI platform, apply common sense and caution. Avoid sharing, for example:
- Confidential details (client, business or family member names, private letters, contracts & strategies).
- Personally identifiable information (PII) (addresses, phone numbers, medical records).
- Proprietary or business-critical data (unreleased products, prices, financials).
- Sensitive materials (health information, internal memos, and other business or personal secrets).
- Data protected by Applicable Law (copyright, illegal data, government data).
If you must work with potentially sensitive text, consider (a) anonymizing the data first, (b) using an enterprise-level AI solution whose Terms of Use prohibit using your content for model training, or (c) an ‘on-premise’ AI model that anonymizes data.
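Option (a), anonymizing data before it reaches an AI tool, can be partially automated. The sketch below is a minimal illustration in Python; the regex patterns and the `redact()` helper are our own hypothetical examples, catch only obvious email addresses and phone numbers, and are no substitute for a vetted anonymization or data-loss-prevention tool.

```python
import re

# Minimal sketch: naive pattern-based redaction before sending text to an AI tool.
# Real anonymization must also handle names, addresses, IDs, and context;
# these two patterns are illustrative assumptions, not a complete solution.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII with bracketed placeholders like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +1 555-123-4567."))
```

Even with such a filter in place, the safest default for privileged or regulated material remains not to upload it at all.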
6. Why Organizations Need AI Policies
Many organizations still lack formal guidelines for using AI tools, leaving employees to guess what’s permissible. It is our understanding – and also our worry – that most employees don’t even realize which data they should or should not enter into AI models. This is why we advocate the wide adoption of AI Policies in companies.
Here’s why AI policies matter:
- Education & Awareness: Ensures everyone in your organization understands the risks and best practices when interacting with AI.
- Risk Management: A solid policy helps prevent data leaks and breaches of confidentiality.
- Compliance: Aligns your company or legal practice with industry regulations and local laws—a must for highly regulated sectors.
- Consistency: Establishes uniform standards so that everyone, from interns to senior partners, uses AI responsibly and avoids costly mistakes.
Stay tuned: We’ll be writing a longer article soon detailing the key components of an effective AI policy, with real-world examples to guide your organization’s strategy.
7. Real-World AI Policy Examples
There has been a lot of discussion on whether the use of AI models for work should be forbidden – in some countries it has even been banned outright. I believe this is the wrong way to deal with a technology we will not be able to stop. We should embrace AI, while being very mindful of how we use it. Below are two examples of courts and lawyer organizations that chose not to ban AI, but to embrace it with the right guardrails.
Florida Bar’s AI Policy
The Florida Bar issued Ethics Opinion 24-1 on January 19, 2024, providing guidance on the use of generative AI by Florida attorneys (source link: 16).
Key points include (source: 1) :
- Lawyers may use generative AI in their practice, but must:
  - Protect client confidentiality
  - Provide accurate and competent services
  - Avoid improper billing practices
  - Comply with lawyer advertising restrictions
- Attorneys must research AI programs’ policies on data retention, data sharing, and self-learning to maintain client confidentiality
- Lawyers are responsible for their work product and must verify that AI use aligns with ethical obligations.
- Informed client consent is recommended when using third-party AI programs that involve disclosure of confidential information.
Delaware Supreme Court’s AI Policy
The Delaware Supreme Court adopted an interim policy on October 21, 2024, governing the use of generative AI by judicial officers and court personnel (see full text here: 7). Key aspects (source: 11) include:
- The policy allows the use of approved generative AI tools by judicial branch officers, employees, law clerks, interns, externs, and volunteers.
- Users of AI remain responsible for the output and must ensure accuracy.
- Training on AI capabilities and limitations is required before use.
- AI should not influence judicial decisions or replace human judgment.
- AI use must comply with existing laws and judicial branch policies.
- Only approved AI tools are permitted on state technology resources.
Both policies aim to balance the benefits of AI in legal practice with ethical considerations and safeguards to protect client interests and maintain the integrity of the legal system. More and more courses are being offered for lawyers & judges explaining the risks and how to avoid them.
The above examples contain external information gathered from Perplexity – mentioning all sources used for the information shared. We have not verified the AI policies (or the latest information relating thereto) in detail at this moment.
8. Embrace AI—Responsibly
From streamlining legal tasks and writing articles to supporting creative brainstorming, generative AI offers enormous advantages. However, that conversation with the lawyer who unknowingly uploaded full client files to a free-tier AI tool reminds us that we need to improve our understanding of how AI models handle our data.
- Read the Terms: Familiarize yourself with the data usage policies of each AI platform you use.
- Choose Wisely: If data sensitivity is high, consider an enterprise solution, or follow what we currently advise most legal professionals: do not share sensitive data at all.
- Use Caution: Always think twice before uploading potentially sensitive content.
- Establish AI Policies: Protect your organization, employees, and clients by setting clear, enforceable guidelines.
Final Thoughts
AI is transforming the way we work – but it’s also transforming how data can move beyond our control. Actively keep track of how your content is used, whether you’re on the free or paid version of ChatGPT, Claude or Perplexity.
The lesson? We need to stay informed, be cautious and be proactive. That way, you can use the power of AI without compromising your most sensitive information. Keep an eye out for our upcoming in-depth article on building AI policies that can guide you and your team toward ethical and secure usage of these exciting new technologies.
Disclaimer: This article is a research project, provides general information about AI data usage and does not constitute legal advice.
I wrote this article further to my post on LinkedIn on this subject. If you have any further questions about the above, contact me via lowa@amstlegal.com or set up a meeting directly here.
Here are more articles on this topic: Anthropic’s Claude AI Updates – Impact on Privacy & Confidentiality, Italy fines OpenAI over ChatGPT privacy rules breach, User Privacy and Large Language Models: An Analysis of Frontier Developers’ Privacy Policies, Perplexity API Platform Privacy & Security
