
International Deployment and Data Residency

Regional availability, data residency, and cross-border transfer mechanics for Claude deployment across EU, UK, US, APAC, and sovereign clouds. Regulatory frameworks that shape the choice.

Larry Maguire

GenAI Skills Academy

Regulated enterprises in Dublin, Frankfurt, Singapore, and Toronto face a constant tension. The pressure to adopt Claude is real. The users want it. The business case is sound. But the moment a prompt containing personal data or regulated data leaves the corporate network for Anthropic's infrastructure in the United States, the compliance posture changes. Cowork 3P is designed to solve this problem, but which route you choose determines whether the solution actually works. This chapter maps the four deployment routes against the major regulatory frameworks that govern where data can be processed and stored.

For any international enterprise, this chapter answers a single question: which Cowork 3P route permits your regulated data to stay within your jurisdiction, and which routes present barriers you cannot accept. The answer is not the same across all frameworks.

Why data residency matters to regulated enterprises

Anthropic's Cowork 3P documentation includes a statement that is precise and requires careful reading. The data-residency, compliance, and "no conversation data sent to Anthropic" statements throughout the documentation apply only when the inference provider is Amazon Bedrock or Google Vertex AI. These statements do not apply when using Azure Foundry or a custom gateway. This single constraint reshapes which routes are available to which enterprises.

The practical implication is straightforward. Enterprises whose compliance posture depends on the assertion that no conversation data leaves the organisation and reaches Anthropic can choose only from Bedrock or Vertex. If your regulatory framework permits conversation data to flow to Anthropic-operated systems provided the processing is lawful and documented, your route choices expand. If your framework forbids it entirely, you are limited to on-premise deployments or data-platform-native inference routes such as Snowflake Cortex Code.

The GDPR and UK data protection framework

Article 28 of the GDPR establishes a processor framework that applies whenever an organisation (the controller) hires another service (the processor) to handle personal data on its behalf. Controllers must use processors providing "sufficient guarantees to implement appropriate technical and organisational measures" ensuring compliance with the regulation and protecting data subjects' rights. Processors cannot engage additional processors without prior written authorisation from the controller.

This means that when an enterprise uses Claude (whether direct or via Bedrock, Vertex, or another intermediary), the processor relationship is triggered. If the enterprise uses Bedrock, AWS becomes a sub-processor, and the original processor agreement must document that relationship. If the enterprise uses Vertex, Google becomes a sub-processor. The processor chain has legal consequences. Anthropic, as an AI model provider, may itself be a sub-processor of the cloud provider.

Article 32 of the GDPR requires both controllers and processors to implement security measures proportionate to the risk of the data being processed. This includes encryption, pseudonymisation, access controls, and the ability to restore availability after an incident. The benchmark is the state of the art, taking into account implementation costs and the nature and scope of the processing.

Article 35 (Data Protection Impact Assessment) requires that before deploying a new AI system (particularly one involving new technologies or automated decision-making), the controller must conduct a DPIA. For high-risk processing such as AI systems making decisions about individuals, the DPIA is mandatory. It must identify what data will be processed, what risks exist, and what safeguards are needed.

Data transfer rules

EU and UK law permit personal data transfers to third countries only when an adequacy decision applies or appropriate safeguards, such as Standard Contractual Clauses (SCCs), are in place.

The UK GDPR and the Data Protection Act 2018 carry GDPR principles into UK law, with some distinctions. Enterprises handling UK personal data must have Data Processing Agreements (DPAs) meeting UK GDPR requirements. UK-to-non-UK transfers must be covered by a UK adequacy decision or by the UK's International Data Transfer Agreement (IDTA) or UK Addendum, which are distinct from the EU SCCs.

International data transfers: adequacy and safeguards

Article 44 of the GDPR establishes the foundational rule: personal data cannot be transferred outside the EU/EEA unless specific legal safeguards are in place. Those safeguards are set out in Articles 45-50 and include adequacy decisions or Standard Contractual Clauses (SCCs).

Article 45 permits transfers to countries when the European Commission determines they provide adequate data protection. The Commission evaluates adequacy by examining the rule of law, respect for human rights and fundamental freedoms, the existence of independent supervisory authorities, and international commitments to data protection. The EU-US Data Privacy Framework (DPF) represents such an adequacy decision for US entities certified under the framework. As of 2026-04-22, cloud providers including AWS, Microsoft, and Google participate in the DPF. Anthropic's DPF certification status should be verified directly with the company before assuming DPF coverage applies to direct API deployments.

Article 46 provides the fallback mechanism when a country does not have an adequacy decision. Transfers require "appropriate safeguards, and on condition that enforceable data subject rights and effective legal remedies for data subjects are available." Standard Contractual Clauses represent such safeguards. SCCs are contractual templates adopted by the EU Commission that bind the overseas recipient to comply with GDPR protections even though they are not subject to EU law.

The CJEU judgment in Schrems II established that SCCs alone may not be sufficient. The data exporter must assess whether the law and practice of the destination country undermine the protections the SCCs promise, and adopt supplementary measures where they do. In practice, even with SCCs in place, an enterprise must document a transfer impact assessment evaluating whether the overseas jurisdiction (in this case, the United States) provides sufficient data protection when combined with the contractual safeguards.

This is chapter seven of a hub that breaks down every Claude deployment route with primary-source references for pricing, residency, and contractual terms.

HIPAA and healthcare data in the cloud

HIPAA (the Health Insurance Portability and Accountability Act) requires any service that processes, stores, or transmits Protected Health Information (PHI) to sign a Business Associate Agreement (BAA). The BAA is a legal document specifying permitted uses, safeguards, subcontractor requirements, audit rights, and breach notification obligations. If an enterprise processes PHI and uses Claude, every layer of that processor chain must either sign a BAA or be documented as a subcontractor with equivalent protections.

The practical consequence is that AWS, Microsoft, and Google each sign as Business Associates when enterprise accounts use their cloud services. For Bedrock and Vertex to be HIPAA-eligible, AWS and Google must extend their BAAs to cover the AI services specifically. As of 2026-04-22, Bedrock and Vertex AI are both listed as HIPAA-eligible services with BAA coverage available for healthcare customers. Azure Foundry's BAA coverage status was not confirmed in the research corpus and should be verified with your account manager.

When using a custom gateway route (including data-platform-native inference such as Snowflake Cortex), the BAA requirements cascade through each layer. If Snowflake routes inference through Anthropic's APIs, both Snowflake and Anthropic must be covered by BAA or documented as subcontractors. This adds operational complexity compared to using a provider with pre-built HIPAA compliance.
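The cascade can be made explicit in tooling. The sketch below models a processor chain and flags any layer lacking coverage; the layer names and flags are illustrative assumptions, not a statement of any vendor's actual BAA status.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    """One hop in the PHI processing chain."""
    name: str
    has_baa: bool                    # signed Business Associate Agreement
    documented_subcontractor: bool   # or covered as a documented subcontractor

def baa_gaps(chain: list[Layer]) -> list[str]:
    """Return layers that break the BAA cascade; PHI must not flow while any exist."""
    return [layer.name for layer in chain
            if not (layer.has_baa or layer.documented_subcontractor)]

# Example: a gateway route where the model provider is not yet covered.
chain = [
    Layer("Snowflake Cortex", has_baa=True, documented_subcontractor=False),
    Layer("Anthropic API", has_baa=False, documented_subcontractor=False),
]
gaps = baa_gaps(chain)  # ["Anthropic API"]
```

A check like this belongs in the deployment pipeline, so that a newly added layer fails loudly until its BAA or subcontractor documentation is recorded.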

Singapore MAS Framework (FSM-N06, superseding Notice 655)

As of 2026-04-22, MAS Notice 655 is no longer in force. It was superseded on 10 May 2024 by MAS FSM-N06 (the Notice on Cyber Hygiene issued under the Financial Services and Markets Act) and by the MAS Guidelines on Outsourcing (Banks) 2023 and (Non-Banks) 2024. Any enterprise or consultant referencing the old notice is working from outdated guidance.

MAS FSM-N06 focuses on cybersecurity requirements for the financial sector. The outsourcing guidelines apply to both banks and non-banks, requiring material outsourcing (any material third-party service) to be disclosed, documented, and subject to due diligence. MAS expects enterprises to assess provider security maturity, disaster recovery capabilities, data location, and compliance certifications. Concentration risk (dependence on a single provider) must be documented, and exit planning must be contractually enforceable.

For non-financial regulated entities in Singapore, the Singapore Personal Data Protection Act (PDPA) governs personal data transfers overseas. Transfers to non-PDPA jurisdictions require a data transfer agreement. Anthropic, AWS, Google, and Microsoft as cloud providers must commit to PDPA-equivalent safeguards, or the enterprise must document consent for overseas transfer.

Current MAS Framework

MAS Notice 655 is cancelled. The current framework is MAS FSM-N06 (effective 10 May 2024) and the Guidelines on Outsourcing (Banks/Non-Banks).

Regional data availability and sovereignty considerations

Bedrock's regional availability is limited. Claude inference is available in US East (N. Virginia), US West (Oregon), EU (Ireland), and a small number of other regions. When you select an AWS region for Bedrock, inference runs in that region. For GDPR-regulated data, this means data leaves the EU unless you select an EU region such as eu-west-1 (Ireland).

The Vertex AI regions reference lists more than 35 regions globally. When you configure Vertex for a specific region (such as europe-west4 for the Netherlands or europe-west3 for Frankfurt), Google states that it stores and processes your data only in that region for all non-experimental, non-preview features. This regional pinning simplifies data residency compliance for European enterprises. Google Cloud also holds FedRAMP High authorisation, which is relevant for US public-sector clients.

Azure AI Foundry offers Claude inference in only two regions: East US 2 and Sweden Central. For enterprises with data residency requirements, this limited footprint is a significant constraint. Sweden Central provides an EU-based option but only one. If your enterprise requires multi-region redundancy or failover to other EU data centres, Foundry cannot provide it.

Custom gateway routes depend entirely on the host infrastructure. A gateway running on an enterprise's own on-premise servers offers maximal control but requires you to manage the infrastructure, security, and compliance. A gateway running on Anthropic's infrastructure provides inference but does not change the fundamental constraint that conversation data may flow through Anthropic's systems (since Foundry and gateway routes do not carry Anthropic's "no data to Anthropic" guarantee).

Designing the topology for international enterprises

Enterprises with operations across multiple jurisdictions face a choice between single-provider simplicity and multi-provider regulatory compliance. A single global Bedrock or Vertex deployment offers operational simplicity: one API, one credential system, one monitoring and audit framework. But if your data spans EU, US, and Singapore jurisdictions with different regulatory requirements, a single region may not satisfy all of them simultaneously.

A multi-geography topology routes EU personal data through a Vertex instance in europe-west4 (Netherlands), US healthcare data through a Bedrock instance in us-east-1 with HIPAA BAA, and Singapore personal data through a Vertex instance in asia-southeast1 (Singapore). This approach requires managing multiple API endpoints, but it ensures each dataset stays within its regulatory jurisdiction. The operational overhead increases, but compliance assurance increases proportionally.
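A topology like this can be pinned down as an explicit routing table. A minimal sketch in Python: the data-class labels and the fail-closed helper are assumptions for illustration, while the provider and region pairs follow the text.

```python
# Approved routes per data classification: each regulated dataset is pinned
# to a provider and region inside its jurisdiction.
ROUTES = {
    "eu_personal": {"provider": "vertex", "region": "europe-west4"},
    "us_phi": {"provider": "bedrock", "region": "us-east-1"},
    "sg_personal": {"provider": "vertex", "region": "asia-southeast1"},
}

def route_for(data_class: str) -> dict:
    """Fail closed: an unknown data class gets no route rather than a default."""
    if data_class not in ROUTES:
        raise ValueError(f"no approved route for data class {data_class!r}")
    return ROUTES[data_class]

assert route_for("eu_personal")["region"] == "europe-west4"
```

Failing closed matters here: a fallback default region would silently move an unclassified dataset out of its jurisdiction.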

Foundry and custom gateway routes complicate this topology because they lack Anthropic's residency guarantee. If Foundry is required for other reasons (cost, existing Azure commitment, feature parity with other Azure services), you must verify that Foundry's processing does not send conversation data to Anthropic, which is not currently guaranteed. Consult your Microsoft account manager and legal counsel before betting compliance on Foundry.

When data residency is not the binding constraint

Not all enterprise data requires residency protection. Unstructured internal knowledge that is non-confidential (internal processes, research, strategy documents that are not linked to regulation or client confidentiality) can use standard Cowork to Anthropic without regulatory friction. Separating regulated workloads from non-regulated workloads simplifies the architecture and reduces the compliance overhead.

Cost, latency, and feature parity can outweigh residency preference for non-regulated data. If a European marketing team is using Claude to draft copy that contains no personal data and no trade secrets, sending that to Anthropic's US infrastructure carries minimal regulatory and business risk. Segmenting workloads by risk level allows enterprises to use lower-cost routes for low-risk data whilst reserving Bedrock and Vertex for high-risk regulated data.
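Segmentation by risk level reduces to a simple routing rule. A sketch under assumed labels; the marker names and route identifiers are hypothetical, not part of any product.

```python
# Markers that force a workload onto the regulated routes.
REGULATED_MARKERS = {"personal_data", "phi", "client_confidential", "trade_secret"}

def select_route(markers: set[str]) -> str:
    """Anything carrying a regulated marker is pinned to Bedrock/Vertex;
    everything else may use the standard Cowork-to-Anthropic route."""
    if markers & REGULATED_MARKERS:
        return "bedrock_or_vertex_regional"
    return "standard_cowork"

assert select_route({"marketing_copy"}) == "standard_cowork"
assert select_route({"marketing_copy", "personal_data"}) == "bedrock_or_vertex_regional"
```

The classification step, not the routing step, is where this scheme succeeds or fails: workloads must be tagged before they reach the router.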

What this chapter does not cover

Three things sit deliberately outside this chapter's scope.

Per-jurisdiction deep dives (detailed analysis of how each regulatory framework applies in practice) would require jurisdiction-specific legal expertise. This chapter names the frameworks and describes how each deployment route affects compliance posture, but detailed implementation guidance for a specific jurisdiction belongs in regional supplements, not here.

Sovereign cloud options (AWS GovCloud, Azure Government, Google Cloud for Government) exist and may be appropriate for some public-sector and heavily regulated enterprises, but they are not standard Cowork 3P offerings and sit outside the scope of this analysis.

Full contract negotiation guidance and legal advice are also out of scope. Compliance interpretation for your specific regulatory context and jurisdiction must be reviewed with qualified legal counsel.

Verify current Anthropic, AWS, Microsoft, and Google documentation at the primary sources listed below before making a final procurement decision. Regulatory guidance changes, and cloud provider certifications are updated quarterly.

Primary sources

Nothing in this article is legal advice. It names regulatory frameworks that apply to enterprise AI deployment and outlines at a high level what each framework requires. Compliance interpretation for your specific regulatory context, jurisdiction, and client contracts must be reviewed with qualified legal counsel.
