Getting Started Guide
Step 10 of 10

Privacy, Security, and Limitations

How Claude handles your data, what GDPR means for AI tools, plan-level differences in data policy, and an honest account of what large language models cannot do.

7 min read

Your Data Concerns Are Not Paranoia

I tell every client the same thing when they raise concerns about Claude AI data privacy: your caution is warranted, and anyone who dismisses it is selling something. Businesses handle client records, financial data, employee information, intellectual property, and commercially sensitive material every day. Putting any of that into a third-party tool without understanding exactly what happens to it is reckless, full stop.

This article lays out how Claude handles your data, where GDPR fits, what Claude genuinely cannot do, and what you should never share with it. None of this is meant to put you off using AI. It's meant to ensure you use it with your eyes open and your data governance intact.

How Claude Handles Your Data (Plan by Plan)

Anthropic, the company behind Claude, publishes detailed data handling policies, and the protections you get depend entirely on which plan you're using. This is the single most important distinction to understand before you do anything else.

On paid plans (Pro, Max, Team, and Enterprise), Anthropic does not use your conversations to train its models. Your inputs and outputs stay out of the training pipeline. This isn't a default setting you need to hunt for and toggle on; it's a firm policy commitment baked into the plan.

On the free plan, your inputs may be used for model improvement. You can opt out, but the default allows it. If you're using Claude for anything business-related, a paid plan is the minimum responsible choice. I've written more about what each tier includes in Plans and What You Get.

All data is processed on Anthropic's servers, which are primarily based in the United States. Data is encrypted in transit and at rest, which is standard for cloud services, but worth flagging if your business has data residency requirements or operates under sector-specific rules about where data can be processed.

Claude does not have persistent memory of your conversations by default. Each conversation starts fresh. Unless you're using Projects (which maintain a knowledge base across conversations) or Claude Code's memory features, Claude retains nothing from previous sessions. This is covered in detail in How Claude Remembers (and Forgets).

Conversations are retained by Anthropic for a limited period to support safety monitoring and abuse prevention. Enterprise plans offer greater control over retention periods and deletion policies.

Claude AI GDPR: What Irish and EU Businesses Need to Know

If your business operates in Ireland or anywhere in the EU/EEA, GDPR applies to any personal data you share with Claude. This is not theoretical. It has practical, day-to-day implications for how you use the tool, and from what I've seen, most small businesses haven't thought it through.

The core issue

When you paste text containing personal data into Claude, that data is transmitted to Anthropic's servers for processing. Under GDPR, this constitutes data processing, and you need a lawful basis for it. You also need to consider whether an adequate data transfer mechanism is in place for sending personal data to the United States, given that Anthropic's infrastructure sits there.

Practical steps for compliance

  • Anonymise or pseudonymise before sharing. Replace real client names with placeholders. Remove addresses, phone numbers, email addresses, and any other identifying details. If you need Claude to help draft a client email, write "Dear [Client Name]" and fill in the real name yourself afterwards. This takes seconds and eliminates the most common risk.
  • Do not paste raw database exports, spreadsheets, or CRM records into Claude without first removing personal data columns. I've seen people dump entire client lists into a prompt. Don't.
  • For Team and Enterprise plans, Anthropic offers Data Processing Agreements (DPAs) that address GDPR requirements. If your organisation processes personal data regularly and wants to use Claude at scale, requesting a DPA is a necessary step, not an optional extra.
  • Document your assessment. If you decide to use Claude in your business, record the basis on which you're doing so. This doesn't need to be a 40-page report. A short document covering your lawful basis, what data you allow into Claude, and what safeguards you've put in place is sufficient for most SMEs.
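The anonymisation step above can be partly automated. The sketch below is a minimal starting point, not a complete solution: the regex patterns and placeholder labels are illustrative assumptions, and a real workflow would add patterns for your own identifier formats (client reference numbers, PPSNs, and so on).

```python
import re

# Illustrative patterns; extend these for your own data formats.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace emails and phone-like numbers with placeholders
    before the text is pasted into any external tool."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact Mary at mary.byrne@example.ie or +353 86 123 4567."))
# → Contact Mary at [EMAIL] or [PHONE].
```

Note the limits: names still need manual replacement ("Dear [Client Name]"), because regexes cannot reliably detect them. Treat a script like this as a backstop to the manual habit, not a replacement for it.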

This article is not legal advice. GDPR compliance depends on your specific circumstances, the types of data you process, and your relationship with data subjects. If you're in a regulated industry or handling sensitive personal data at volume, consult your data protection officer or a solicitor who specialises in data protection law.

What You Should Never Share with Claude

Regardless of which plan you're on, certain categories of information should never go into any external tool. This isn't about Anthropic's policies being weak; it's about basic information hygiene that applies to every third-party service your business uses.

The hard "never" list

  • Passwords, API keys, access tokens, or credentials of any kind. Claude doesn't need your passwords to help you. If you accidentally paste a credential into a conversation, rotate it immediately. Don't assume the conversation will expire and the problem will go away.
  • Unredacted personal data unless you have an appropriate DPA in place and the data subjects have been informed. This includes client names paired with financial details, employee records, medical information, and similar sensitive combinations.
  • Client-privileged information. If you work in law, medicine, accounting, or financial advice, the information your clients share with you carries professional privilege. Sharing it with an AI tool may compromise that privilege. This is not a grey area; seek specific legal guidance before sharing any privileged material.
  • Classified or export-controlled information. This applies to defence contractors, government suppliers, and businesses handling nationally sensitive material.
  • Anything you wouldn't put in an email to an external consultant. This is the simplest mental model, and it works because it maps onto a situation most business owners already understand. You'd give a contractor the brief, the context, and the requirements. You wouldn't give them your bank login or your clients' medical records.

The external contractor test

Think of Claude as a competent but external contractor you've hired for a specific task. You'd share the brief. You'd share background context. You'd share the requirements and constraints. You would not share your bank credentials, your clients' personal health information, or your proprietary pricing model with the raw numbers attached. Apply the same judgement, every time.

Plan-Level Data Handling Differences

Not all Claude plans offer the same level of data governance. Here's what each tier actually gives you, stripped of marketing language. For a full comparison of features and pricing, see Claude Plans and Pricing.

Free

Conversations may be used for model improvement. Minimal data governance controls. Suitable for personal experimentation, not for business use involving any sensitive or proprietary information. If you're still on the free plan and using Claude for client work, stop reading this article and go upgrade. Seriously.

Pro and Max

Anthropic does not train on your data. Standard data retention applies for safety monitoring. These plans suit individual professionals and small businesses. You get the core privacy commitment without the administrative controls that larger organisations need. For most sole traders and micro-businesses in Ireland, Pro is sufficient.

Team

No training on your data. Adds administrative controls, including the ability to manage team members and workspaces. Workspace isolation means one team member's conversations are not visible to another. This is the entry point for organisations that need some governance around AI use, and it's where I'd recommend most SMEs with 3-10 people start.

Enterprise

No training on your data. Adds custom data retention policies, single sign-on (SSO), SCIM provisioning for user management, and audit logs. A Data Processing Agreement is available. Anthropic holds SOC 2 Type II compliance. This is the tier for organisations with formal compliance requirements, regulated industries, or significant data protection obligations.

The principle is straightforward: the more sensitive your work, the higher the plan tier you should be on. A freelance copywriter on Pro is fine. A financial services firm handling client portfolios needs Enterprise.

Claude Code and Local Data: A Different Model

Claude Code works differently from the web-based Claude interface, and the data flow is worth understanding if you're using it or considering it.

Claude Code runs in your terminal or in VS Code. It can read files on your local machine, explore project directories, run commands, and edit files directly. This is what makes it powerful for development and workspace automation, but it also means that anything in your local file system is potentially accessible.

When Claude Code reads a file, the contents of that file are sent to Anthropic's API as part of the conversation context. Anthropic's servers process the request and return the response. Your files are not stored on Anthropic's servers beyond the conversation session, but they are transmitted for processing.

What this means in practice

If your project directory contains sensitive files (environment variables with credentials, configuration files with API keys, client data files), Claude Code could potentially read those files when exploring the directory. It won't do this maliciously, but if you ask it to "look through my project" or "find the config file," it will read what it finds.

How to manage this

  • Use .claudeignore files to exclude sensitive directories and files from Claude Code's access. This works like .gitignore. Any file or directory pattern listed in .claudeignore will not be read by Claude Code, regardless of what you ask it to do.
  • Keep credentials in environment variables or secure vaults, not in plain text files within your project directories. This is good practice with or without AI tools in the picture.
  • Be intentional about which directories you open in Claude Code. If a workspace contains both code and sensitive client data, restructure so that sensitive data lives outside the workspace directory entirely.

Claude AI Limitations: An Honest Account

Privacy and security are about protecting what goes in. Limitations are about understanding what comes out. Both matter equally for responsible business use, and I find that limitations get far less attention than they should.

Hallucination is not a bug; it's a feature of the architecture

Claude will sometimes produce confident, plausible-sounding information that is factually wrong. This is not a rare edge case. It's a fundamental characteristic of how large language models work. They generate text based on patterns in training data, not by looking up verified facts in a database. The model doesn't "know" things in the way you or I know things; it predicts what text is likely to come next.

This means you should verify factual claims in Claude's output, especially numbers, dates, statistics, legal references, regulatory citations, and academic sources. If Claude tells you a specific regulation requires a specific thing, go check the regulation yourself. If Claude cites a study, verify the study exists and says what Claude claims it says. I've caught Claude inventing plausible-sounding journal articles complete with authors, years, and DOIs that don't exist. The confidence is what makes it dangerous.

Never publish AI-generated content without review. This applies to blog posts, reports, client communications, proposals, and anything that represents your business externally. The content may be excellent. It may also contain a confidently stated error that damages your credibility in ways that take years to repair.

Knowledge cutoff

Claude's training data has a cutoff date. It doesn't know about events, legislative changes, market developments, or new research published after that date. It doesn't browse the internet in real time unless connected to external tools via MCP (Model Context Protocol) integrations, which most business users won't have set up.

If you need Claude to work with current information, provide it directly. Paste the relevant text, upload the document, or summarise the key points yourself. Don't assume Claude knows about recent developments, even widely reported ones. I've had it confidently discuss legislation that was amended six months before the conversation, using the old provisions as if they were current.

Context window limits

As covered in How Claude Remembers (and Forgets), Claude can only hold a limited amount of text in its working memory at any one time. Long sessions degrade as older context is compressed or dropped. Very long documents may exceed what Claude can process in a single pass.

This is a practical constraint you need to design around. It means structuring your interactions, using Projects for persistent context, and not expecting Claude to remember the beginning of a conversation that's run for hundreds of exchanges. For a deeper explanation of how this works technically, the glossary covers context windows and tokens.
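There is no exact public formula for Claude's tokeniser, but a common rule of thumb (roughly four characters of English text per token) is enough to sanity-check whether a document will fit before you paste it. The heuristic below is exactly that: an assumed estimate, not a measurement of Claude's actual tokenisation.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate for English prose.
    The 4-chars-per-token ratio is a widely used heuristic,
    not Claude's actual tokeniser; budget conservatively."""
    return int(len(text) / chars_per_token)

report = "word " * 10_000          # ~50,000 characters of sample text
print(estimate_tokens(report))     # → 12500
```

If the estimate is anywhere near the plan's context limit, split the document or summarise it first rather than hoping the whole thing fits.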

Bias and perspective

Large language models reflect patterns in their training data. That training data, drawn from the internet and published text, contains cultural biases, overrepresentation of certain perspectives (particularly English-language, Western, and US-centric viewpoints), and blind spots around underrepresented communities and contexts.

Be aware of this when using Claude for tasks involving people: HR policies, job descriptions, performance reviews, public-facing content, and anything touching on diversity, equity, or cultural sensitivity. Claude's output is a starting point, not the final word. Your judgement and your understanding of your specific context are irreplaceable. AI doesn't understand the culture of your organisation. You do. That's the bit that matters, and I've written about why this gap persists in Human vs Artificial Intelligence.

Claude is not a professional adviser

Claude is not a solicitor, accountant, doctor, or chartered engineer. It can draft documents that look like legal contracts, financial projections, medical guidance, or engineering specifications. The drafts may be useful as starting points. They are not substitutes for qualified professional advice, and treating them as such is a business risk that could cost you dearly.

A contract drafted by Claude may miss jurisdiction-specific requirements. A financial model may use incorrect tax rates (I've seen it apply US tax logic to Irish scenarios without flagging the assumption). A compliance document may reference outdated regulations. The output looks professional, which is precisely what makes the risk harder to spot.

Use Claude to accelerate your work with professionals, not to replace them. Draft a contract with Claude, then have your solicitor review it. Build a financial model with Claude, then have your accountant validate the assumptions. The efficiency gain is real; the risk of skipping the expert review is also real.

Inconsistency between runs

Give Claude the same prompt twice and you may get different results. Large language models are probabilistic. They generate text by predicting likely next words, and there's randomness built into that process. Two runs of the same prompt can produce different structures, different emphases, and occasionally different conclusions.

This is one of the reasons structured workspaces matter. When you set up CLAUDE.md files, rules, and skills (as covered in Direction and Judgement), you constrain Claude's output to follow specific patterns. Without structure, you get variability. With structure, you get reliability. You discipline the tool; it doesn't discipline itself.

A Practical Security Checklist

Before rolling Claude out across your team or using it for client work, work through this list. It's not exhaustive, but it covers the decisions that matter most.

  1. Use a paid plan. Pro at minimum for individual use. Team for organisations. Enterprise for regulated industries or significant data protection requirements. The free plan is for experimentation, not business operations.
  2. Establish a "do not share" policy. Make it explicit to everyone on your team what categories of information must not be entered into Claude. Credentials, unredacted personal data, privileged client information, and classified material are the baseline. Write it down. Keep it short. One page is enough.
  3. Anonymise sensitive data before sharing. Build this into your workflow so it becomes automatic. Before pasting client scenarios, project details, or case studies into Claude, strip out identifying information. It takes 30 seconds and eliminates the most common compliance risk.
  4. Review all AI output before it leaves your organisation. Whether it's an email, a report, a proposal, or a social media post, a human must review it before it's published or sent. This is non-negotiable.
  5. Train your team. A five-minute briefing on what not to share and why is more effective than a 20-page policy document nobody reads. Make it practical, not theoretical. Show examples.
  6. Request a DPA for Team and Enterprise use. If your organisation has formal data protection obligations, engage Anthropic on a Data Processing Agreement and review the terms with your legal adviser before processing any personal data.
  7. Use .claudeignore for sensitive directories. If you're using Claude Code, configure it to exclude directories containing credentials, client data, or other sensitive files. Set this up once and forget about it.
  8. Review periodically. Anthropic updates its policies and capabilities. GDPR guidance evolves. Your business changes. Revisit your AI data handling practices quarterly, or whenever there's a significant change in how you use the tool.

Where This Leaves You

Claude is a powerful tool with real limitations. The privacy controls on paid plans are solid. The data handling policies are clear and well-documented. But the tool doesn't make your compliance decisions for you, and it doesn't guarantee the accuracy of its output. Those responsibilities remain yours.

The limitations don't make Claude unusable. They make review and judgement essential, which is consistent with everything in this guide. The critical skill isn't prompt engineering or AI fluency. It's your ability to assess, direct, and verify. You bring the domain expertise. You bring the professional judgement. You bring the understanding of your specific business context and the regulations that apply to it. Claude brings speed, breadth, and the ability to process and generate text at a scale no individual can match. That combination is genuinely useful, as long as you're honest about what each side of the equation actually contributes.

Use Claude where it adds value. Apply your expertise to check the work. Maintain appropriate controls around sensitive data. And treat the tool as what it is: a capable assistant that requires supervision, not a replacement for the professional standards your business is built on. There's no ghost in this machine, and that's precisely why your judgement is the thing that matters most.

With that foundation in place, you're ready to move on to the practical setup. The next article in this series walks you through creating your first Claude workspace and configuring it for your business.

GenAI Skills Academy