Most enterprises that want to use Claude get stuck at procurement rather than at product. The assistant works well in trials, the integrations connect to the systems users already rely on, and the appetite across teams is real. The default way Claude reaches a laptop, though, sends prompts, responses, and attached files to Anthropic's infrastructure in the United States, and for a regulated institution in Dublin, Frankfurt, Singapore, or Toronto that single architectural fact is enough to end the evaluation.
Anthropic's answer to this problem is a deployment mode called Cowork on third-party, usually shortened to Cowork 3P, and it is the mechanism every chapter in this hub builds on. This first chapter explains what the mode actually is, where each piece of data lives under each route, and which Anthropic guarantees apply to which route. If you read one chapter of this series and skip the others, this is the one worth reading.
What Cowork on 3P actually is
Cowork on 3P is not a separate product. It is a configuration of the standard Claude Desktop application that changes where inference happens and where conversations are stored. In Anthropic's own words on the Cowork 3P overview page, it is "a deployment mode of Claude Desktop that routes all model inference through a provider you configure."
In practice, the same installer that ships Claude Desktop to any user can be configured, through device-management tooling, to send every model call to an inference endpoint your organisation owns or rents, rather than to Anthropic's public API. The web application that would normally load from claude.ai is bundled inside the desktop app so the browser never reaches Anthropic's servers either. Conversation history, attached files, and tool invocations are written to the user's local device rather than an Anthropic backend. The app feels identical to the standard one. The routes the traffic takes are different.
The four supported providers
Anthropic supports four inference provider categories for Cowork 3P, and notes in the same breath that the four are not interchangeable for compliance purposes: Google Cloud Vertex AI, Amazon Bedrock, Azure Foundry, and compatible custom gateways implementing Anthropic's Messages API.
Each provider carries its own credential model.

- Google Vertex AI expects a project identifier and region, with either a service-account JSON file, OAuth client credentials, or a credential helper for dynamic token generation.
- Amazon Bedrock expects an AWS region and either a bearer token or a named AWS profile, and supports PrivateLink endpoints.
- Azure Foundry expects a resource name and an API key.
- A compatible gateway expects a base URL and an API key, with optional custom headers, and must implement Anthropic's Messages API with streaming and tool-use support.

All four provider types can be driven by a credential helper for just-in-time token generation, which is how you integrate single sign-on or public-key infrastructure into the flow without storing long-lived secrets in MDM payloads.
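The credential-helper pattern is simple to sketch. The contract assumed below, an executable that prints a JSON object with a short-lived token to stdout, is illustrative only; the field names `token` and `expires_at` are this article's shorthand, not Anthropic's documented interface, so check the Cowork 3P configuration reference for the real schema before building one.

```python
#!/usr/bin/env python3
"""Hypothetical credential-helper sketch for Cowork 3P.

Assumption: the desktop app invokes a configured executable and reads a
JSON object with a short-lived token from its stdout. The field names
below are illustrative, not Anthropic's documented contract.
"""
import json
import time


def mint_token() -> dict:
    # In a real deployment this would call your SSO or PKI token service;
    # here we fabricate a short-lived placeholder token.
    return {
        "token": "example-short-lived-token",
        "expires_at": int(time.time()) + 900,  # valid for 15 minutes
    }


if __name__ == "__main__":
    print(json.dumps(mint_token()))
```

Because the token is minted at call time rather than stored, nothing long-lived ever sits inside the MDM payload, which is the whole point of the pattern.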
The list matters, but the order of the list matters more, because two of these routes carry an Anthropic-level guarantee that no conversation data reaches Anthropic and the other two do not currently carry that guarantee. The full treatment of that boundary sits further down this chapter, and the operational consequences land in chapters three and five.
Standard Cowork versus Cowork on 3P
The clearest way to understand what changes is to put the two modes side by side. Standard Cowork sends inference to Anthropic's API and loads the web app from claude.ai. Cowork on 3P bundles the web application locally, routes inference to your configured provider, and stores conversation history on user devices rather than Anthropic's backend.
Unpacked across the components, the differences look like this.

- Inference moves from Anthropic's API to your configured provider's regional endpoint.
- The web application moves from a load over the open internet from claude.ai to a local bundle inside the desktop app.
- Conversation storage moves from Anthropic's backend to the user's local disk.
- Authentication moves from an Anthropic account to provider credentials on the device, with identity handled locally rather than by Anthropic.
- Configuration moves from a cloud admin console to operating-system-level managed preferences delivered by your existing mobile device management platform.
None of these shifts are minor, because each one opens a distinct compliance question and a distinct operational question that would not apply under the standard deployment. The change from a cloud admin console to device-level MDM is, for most large organisations, the one that quietly reshapes how Claude is governed and audited at scale, and that reshaping is what the rest of the hub is for.
Where data actually goes on each route
The load-bearing statement in the entire Cowork 3P documentation is the one that defines the limits of Anthropic's guarantee, and it is worth quoting closely because the wording is precise and the precision matters a great deal for procurement. Anthropic states that the data-residency, compliance, and "no conversation data sent to Anthropic" statements throughout the Cowork 3P documentation apply only when the inference provider is Vertex or Bedrock, and that these statements do not apply when using Azure Foundry or a gateway. Equivalent guarantees for Azure Foundry are described as coming, and Anthropic has said those pages will be updated when they are available.
Read plainly, this wording carries two separate implications. On Bedrock and Vertex, as of 22 April 2026, Anthropic's position is that conversation data does not reach Anthropic's systems, that prompts and responses and files route only to the configured provider endpoint, and that conversation history remains on the user's device. That is the guarantee profile a regulated enterprise is usually buying. On Azure Foundry and on any gateway route, including data-platform-native inference paths such as Snowflake Cortex Code, the same guarantee does not yet apply, and Anthropic's current wording says only that it is working on parity for Foundry rather than that parity exists today.
The operational consequence is straightforward. If your compliance posture depends on the assertion that no conversation data reaches Anthropic, Bedrock and Vertex are the only two routes that currently support it. Foundry can still be a reasonable choice for an organisation whose compliance questions live elsewhere, and gateways can be the right answer for specific gateway providers under specific contractual terms, but neither route carries Anthropic's guarantee in this area today. Chapter three treats the Foundry gap in full, and chapter five treats the data-platform gateway pattern. For a CTO mapping routes to requirements, this boundary is the first filter to apply.
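That first filter is simple enough to write down. The lookup below restates the guarantee boundary as documented on 22 April 2026; the status values will change as Anthropic ships parity for other routes, so re-check the current pages before relying on this for a procurement decision.

```python
# First-pass route filter: which providers currently carry Anthropic's
# "no conversation data sent to Anthropic" guarantee. Status reflects
# the documentation as retrieved 22 April 2026; verify before deciding.
GUARANTEE_BY_PROVIDER = {
    "vertex": True,    # Google Cloud Vertex AI
    "bedrock": True,   # Amazon Bedrock
    "foundry": False,  # Azure Foundry: parity described as coming, not present
    "gateway": False,  # custom gateways, incl. data-platform-native paths
}


def routes_supporting_no_data_to_anthropic() -> list[str]:
    """Providers that currently satisfy the no-data-to-Anthropic requirement."""
    return sorted(p for p, ok in GUARANTEE_BY_PROVIDER.items() if ok)
```

As of the retrieval date this returns only Bedrock and Vertex, which is the boundary the rest of the hub keeps coming back to.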
The guarantee boundary
The guarantee that no conversation data reaches Anthropic applies to Bedrock and Vertex only. Foundry and gateway routes do not yet qualify.
MDM as the deployment mechanism
Cowork 3P is delivered through the same mobile device management infrastructure that delivers every other managed application in a regulated enterprise. Anthropic's Cowork 3P configuration reference states that Cowork on 3P operates through OS-native managed preferences, with .mobileconfig profiles on macOS or registry policies on Windows.
In operational terms this arrangement means that Jamf, Intune, Workspace ONE, or Group Policy delivers a profile which the Claude Desktop installer reads at launch, and when a managed preference and a local preference disagree the managed value wins. There is no cloud admin console to maintain and no separate Claude control plane to audit, which means the same change-management process your platform team already uses for any other MDM-delivered configuration is the process that governs Cowork 3P.
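On macOS those managed preferences arrive as a configuration profile payload. The fragment below is an illustrative sketch only: the preference domain and every key name are assumptions made for the example, not Anthropic's documented schema, which lives in the configuration reference.

```xml
<!-- Illustrative .mobileconfig payload fragment. The domain and key
     names here are hypothetical; consult the Cowork 3P configuration
     reference for the real schema before deploying. -->
<dict>
    <key>PayloadType</key>
    <string>com.anthropic.claude</string>
    <key>InferenceProvider</key>
    <string>bedrock</string>
    <key>AWSRegion</key>
    <string>eu-central-1</string>
    <key>CredentialHelper</key>
    <string>/usr/local/bin/claude-token-helper</string>
</dict>
```

On Windows the same settings would land as registry policies, delivered by Intune or Group Policy rather than a profile.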
One operational detail in the same reference page is easy to miss in evaluation and hard to unsee once you have hit it in production: configuration is read once at launch, so it is effectively immutable at runtime, and a user has to fully quit and reopen the application before any change takes effect. A revoked credential, a changed region, or an updated tool-permission policy propagates only on restart, which means credential rotation becomes a planned window rather than a hot swap, and incident response cannot assume an in-flight configuration change will land on a running session. None of this is a problem in itself, because the restart is cheap and the behaviour is predictable, but it is a constraint that belongs in your runbook rather than being discovered during an audit.
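The behaviour is easiest to reason about if you model the configuration as a snapshot taken once at launch. A minimal sketch of the semantics as described; the class and variable names are this article's, not Anthropic's:

```python
import copy


class LaunchTimeConfig:
    """Models read-once-at-launch: the app snapshots managed preferences
    when it starts, and later changes on disk have no effect until the
    user fully quits and relaunches."""

    def __init__(self, managed_prefs: dict):
        # Deep-copy so later edits to the source (e.g. an MDM push that
        # rewrites the profile) never reach the running session.
        self._snapshot = copy.deepcopy(managed_prefs)

    def get(self, key: str):
        return self._snapshot.get(key)


prefs = {"AWSRegion": "eu-central-1"}
session = LaunchTimeConfig(prefs)
prefs["AWSRegion"] = "eu-west-1"  # MDM pushes a new region mid-session
# The running session still sees the launch-time value; only a restart
# (a fresh LaunchTimeConfig) picks up the change.
```

This is why rotation planning in chapter nine treats the restart, not the MDM push, as the moment a change actually lands.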
Read once at launch
Configuration is read once at launch. Every change requires a restart, which makes rollback deliberate and credential rotation a planned event.
Security profiles and the air-gapped option
Anthropic publishes three pre-built security profiles that compose the individual configuration knobs into recognisable postures. Standard keeps telemetry and updates enabled, requires signed extensions, and lets users extend Cowork freely. Restricted disables non-essential telemetry, blocks user-added connectors, and enforces workspace isolation, while keeping enough telemetry active for Anthropic support to diagnose issues. Locked Down is the most isolated of the three, with all external communication blocked except inference and OpenTelemetry export, no auto-updates, and air-gapped compatibility.
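Side by side, the three postures reduce to a handful of flags. The dictionary below restates the profile descriptions above; the key names are this article's shorthand rather than Anthropic's configuration schema, and the two settings the documentation does not state outright are marked as assumptions in the comments.

```python
# The three pre-built security profiles, restated as illustrative flags.
# Key names are this article's shorthand, not Anthropic's schema; verify
# every setting against the current documentation (chapter eight has the
# full controls matrix).
SECURITY_PROFILES = {
    "standard": {
        "telemetry": "full",
        "auto_updates": True,
        "signed_extensions_required": True,
        "user_added_connectors": True,   # users extend Cowork freely
        "air_gap_compatible": False,
    },
    "restricted": {
        "telemetry": "essential-only",   # enough left for Anthropic support
        "auto_updates": True,            # assumption: not stated for Restricted
        "signed_extensions_required": True,
        "user_added_connectors": False,
        "air_gap_compatible": False,
    },
    "locked_down": {
        "telemetry": "otel-export-only", # all other external egress blocked
        "auto_updates": False,
        "signed_extensions_required": True,  # assumption: not stated
        "user_added_connectors": False,
        "air_gap_compatible": True,
    },
}
```

Restricted additionally enforces workspace isolation, which has no clean one-flag analogue here and is treated properly in chapter eight.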
Air-gapped caveat
Air-gapped in this context means zero runtime egress to Anthropic-operated hosts. The virtual machine bundle used by the desktop sandbox is downloaded once at initialisation, and that download is not runtime traffic but it is still traffic. For an organisation with a genuine air-gap requirement, the Locked Down profile is compatible, provided the initial bundle download is staged through an approved channel and refreshed deliberately when the bundle version changes.
Chapter eight covers the controls matrix in detail, and chapter nine covers the governance implications of running in an air-gapped posture.
What this chapter does not cover
Three things sit deliberately outside the scope of this chapter. The per-provider operational detail, including regional availability, pricing at the token level, private networking, customer-managed encryption keys, and the specific contractual clauses that vary between providers, is treated route-by-route in chapters two through five. The full controls matrix across telemetry, egress, sandbox isolation, customer-managed keys, and identity is treated in chapter eight. The governance questions of policy enforcement, change control, credential rotation cadence, and MDM audit evidence are treated in chapter nine. The mental model in this chapter is enough to judge whether a route fits your constraints at a high level. The later chapters are where the procurement decision gets made.
Primary sources
- Anthropic. Cowork on 3P — Overview. Retrieved 22 April 2026.
- Anthropic. Cowork on 3P — Configuration reference. Retrieved 22 April 2026.
- Anthropic. Cowork on 3P — Installation and setup. Retrieved 22 April 2026.
- Anthropic. Cowork on 3P — Telemetry and egress. Retrieved 22 April 2026.
Nothing in this article is legal advice. It names regulatory frameworks and describes how each deployment route affects compliance posture. Compliance interpretation for your specific regulatory context, jurisdiction, and client contracts must be reviewed with qualified legal counsel. Verify current Anthropic documentation before making a procurement decision.
