
AWS Bedrock as a Claude Deployment Target

Running Claude on Amazon Bedrock — regional availability, PrivateLink, customer-managed keys, CloudTrail logging, and the data residency position for regulated workloads.

Larry Maguire

GenAI Skills Academy

For most AWS-committed enterprises, Bedrock is the first choice because it combines the infrastructure familiarity of AWS with the guarantee that Anthropic never sees your conversation data. This chapter explains what Bedrock is as a service, which Claude models are available, how the data-isolation guarantee works in practice, what enterprise controls Bedrock provides, and crucially, what gaps remain that need clarification from AWS before you commit to a procurement decision.

Bedrock is not a separate product. It is a managed inference layer inside AWS that routes all Claude inference through AWS accounts isolated from Anthropic's own infrastructure. You configure it through the same identity and access controls you already use for every other AWS service. Data stays in your AWS region. Audit trails are captured in CloudTrail. Encryption is standard AWS. And because the architecture is built on AWS infrastructure, Cowork 3P on Bedrock integrates natively with AWS security, compliance, and governance frameworks.

What Bedrock is and why CTOs choose it

AWS describes Amazon Bedrock as a fully managed service that provides secure, enterprise-grade access to high-performing foundation models from leading AI companies, enabling customers to build and scale generative AI applications. The "fully managed" part means AWS handles the underlying infrastructure layer, isolation, and compliance scope. It does not mean zero configuration on your side. You still need to configure IAM policies, encryption keys, audit logging, and guardrails.

Operationally, Bedrock simplifies procurement because it sits inside your existing AWS account. You do not need a separate contract with Anthropic, a separate identity provider, or separate audit infrastructure. All Claude inference runs through AWS APIs authenticated with the same AWS Identity and Access Management system you already maintain. Regional isolation is built in. Encrypted transport and encrypted storage are standard AWS. This integration is the primary reason AWS-native enterprises reach for Bedrock first.
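Because all Claude inference runs through standard AWS APIs, a call is just an authenticated request with a JSON body. The sketch below builds such a body in Anthropic's messages format as documented for Bedrock; the model ID, region, and field values are placeholders to verify against the current AWS documentation for your model version, not a definitive integration.

```python
import json

# Placeholder model ID: confirm the exact ID for your model and region.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def build_invoke_request(prompt: str, max_tokens: int = 512) -> dict:
    """Assemble the JSON body Bedrock expects for a Claude messages call."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

# With AWS credentials configured, the call itself goes through boto3
# (shown as a comment so this sketch stays runnable offline):
#   client = boto3.client("bedrock-runtime", region_name="eu-central-1")
#   response = client.invoke_model(
#       modelId=MODEL_ID,
#       body=json.dumps(build_invoke_request("Hello")),
#   )

body = build_invoke_request("Summarise our data-residency obligations.")
print(json.dumps(body)[:40])
```

Note that authentication happens entirely through the AWS credential chain, which is the point of the section above: there is no separate Claude API key to provision or rotate.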

Stay with the series

This is chapter two of a hub that breaks down every Claude deployment route with primary-source references for pricing, residency, and contractual terms. New chapters as they publish, sent to your inbox. Subscribe to the newsletter.

Available Claude models and versions

Bedrock supports Claude Opus 4.7, Sonnet 4.6, Haiku 4.5, and a preview version of Claude Mythos. The Mythos preview is described by AWS as gated research access with state-of-the-art capabilities in cybersecurity, software coding, and complex reasoning, but regional availability and pricing are not documented.

One material caveat worth stating upfront: pricing for Claude Opus 4.7, Sonnet 4.6, and Haiku 4.5 is not published on the AWS Bedrock pricing page as of 2026-04-22. AWS publishes pricing only for Claude 3.5 Sonnet, which is older. This means any CTO evaluating Bedrock for the latest models needs to request a pricing quote from AWS directly rather than finding pricing in public documentation. Regional availability for the newer 4.x models is also not documented in the accessible Bedrock documentation. The regional matrix exists only for Claude 3.5 Sonnet. Verify both pricing and regional support with your AWS account manager before final procurement.

The "no data to Anthropic" guarantee on Bedrock

This is the claim that makes Bedrock and Vertex different from the other two routes. Anthropic's Cowork 3P documentation states explicitly that the data-residency, compliance, and "no conversation data sent to Anthropic" statements apply only when the inference provider is Bedrock or Vertex. These statements do not apply to Azure Foundry or custom gateway routes.

On Bedrock, here is how the isolation works. The AWS data-protection documentation states that after AWS receives a Claude model from Anthropic, AWS performs a deep copy of the model provider's inference and training software into AWS accounts for deployment, and because Anthropic does not have access to those accounts, Anthropic does not have access to Bedrock logs or to customer prompts and completions. More directly, the Bedrock FAQs state that customer content is not used to improve the base models and is not shared with any model providers.

Stated plainly, this means conversation data does not leave AWS. Prompts, responses, and attached files stay in the AWS region you configure. Conversation history lives on user devices, not on Anthropic's servers. Anthropic cannot access logs. Anthropic cannot improve Claude using your conversations.

Data isolation from Anthropic

Model providers cannot access Bedrock logs, prompts, or completions. Data flows to the configured region only.

This guarantee applies to Bedrock and Vertex only. It does not currently apply to Azure Foundry or to custom gateway deployments. For any enterprise whose compliance posture depends on no conversation data reaching Anthropic's infrastructure, Bedrock and Vertex are the only routes that currently meet this requirement. If you need to verify this claim against the primary source before procurement, the guarantee is documented at https://claude.com/docs/cowork/3p/overview.

Data protection, encryption, and audit

Bedrock provides encryption both in transit and at rest. In-transit encryption uses TLS 1.2 as a minimum, with TLS 1.3 recommended. Cipher suites must support Perfect Forward Secrecy, which relies on ephemeral key exchange so that recorded traffic cannot be decrypted even if a long-term key is later compromised.
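The AWS SDKs negotiate TLS for you, but if you route Bedrock traffic through a custom HTTP client, you can enforce the same floor yourself. A minimal sketch using Python's standard `ssl` module (assumption: your runtime's OpenSSL build supports TLS 1.3):

```python
import ssl

# Refuse to negotiate anything below TLS 1.2, matching the Bedrock minimum.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# TLS 1.3 cipher suites all provide Perfect Forward Secrecy by design,
# so preferring 1.3 where the peer supports it satisfies the PFS requirement.
print(ctx.minimum_version)
```

Any connection attempt through `ctx` to a server offering only TLS 1.0 or 1.1 will now fail during the handshake rather than silently downgrading.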

At-rest encryption is handled through AWS Key Management Service (KMS). The Bedrock FAQs state that data is encrypted at rest, with optional encryption using your own keys, which indicates that customer-managed key support exists but is not detailed in the documentation. CTOs requiring customer-managed encryption keys (CMKs) should verify current CMK support and configuration options with AWS directly.
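If CMK support is confirmed, the key policy follows the standard KMS pattern: the account root retains key administration while a workload role gets only the narrow usage permissions. The account ID and role name below are placeholders, and whether Bedrock honours additional service-specific condition keys is an assumption to confirm with AWS.

```python
import json

def cmk_key_policy(account_id: str, workload_role: str) -> dict:
    """Illustrative customer-managed KMS key policy: admin vs. use split."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # Account root keeps full administrative control of the key
                "Sid": "KeyAdministration",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
                "Action": "kms:*",
                "Resource": "*",
            },
            {   # Workload role can use the key but cannot manage or delete it
                "Sid": "AllowUseOfTheKey",
                "Effect": "Allow",
                "Principal": {
                    "AWS": f"arn:aws:iam::{account_id}:role/{workload_role}"
                },
                "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
                "Resource": "*",
            },
        ],
    }

policy = cmk_key_policy("111122223333", "BedrockWorkloadRole")
print(json.dumps(policy, indent=2)[:60])
```

The admin/use split matters for audit: key deletion and policy changes stay with a small administrative group, while inference workloads hold only decrypt-class permissions.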

Audit logging is built through AWS CloudTrail, which captures API calls and authentication events across AWS services. The Bedrock data-protection documentation recommends CloudTrail for logging all API and user activity, but does not explicitly confirm that CloudTrail captures all Bedrock API invocations. CTOs implementing audit requirements should verify current CloudTrail coverage with their AWS account manager and confirm that Bedrock events include model invocations, not just authentication.
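Once CloudTrail coverage is confirmed, filtering Bedrock activity out of the raw trail is straightforward. The event source and event names below are assumptions about the schema, which is exactly what the section above says you should verify with AWS before relying on it.

```python
def bedrock_events(records: list[dict]) -> list[dict]:
    """Keep only CloudTrail records emitted by Bedrock service endpoints.

    Assumes Bedrock events use an eventSource beginning with "bedrock"
    (e.g. "bedrock.amazonaws.com"); verify against the real schema.
    """
    return [
        r for r in records
        if r.get("eventSource", "").startswith("bedrock")
    ]

sample = [
    {"eventSource": "bedrock.amazonaws.com", "eventName": "InvokeModel"},
    {"eventSource": "signin.amazonaws.com", "eventName": "ConsoleLogin"},
]
print(bedrock_events(sample))
```

A filter like this is also where you would check the distinction the text raises: whether model invocations appear in the trail at all, or only control-plane and authentication events.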

Bedrock provides six configurable content-safety guardrails that filter harmful content (hate, insults, sexual, violence, misconduct, prompt attack) and can be configured for custom denied topics, word filters, PII detection and masking, hallucination detection, and reasoning validation. These guardrails are table-stakes security controls and can be applied selectively per request or globally. For most enterprises, the pre-configured guardrails cover the common cases. Custom guardrails for industry-specific content policies are also supported.
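A guardrail configuration combining the six filter categories above with a custom denied topic and PII masking might be assembled like this. The field names follow the shape of the Bedrock CreateGuardrail API, but treat every name and value as an assumption to check against the current API reference.

```python
def guardrail_config(name: str) -> dict:
    """Sketch of a guardrail payload: content filters, a denied topic, PII masking."""
    return {
        "name": name,
        "blockedInputMessaging": "This request is not permitted.",
        "blockedOutputsMessaging": "This response was blocked.",
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": t, "inputStrength": "HIGH", "outputStrength": "HIGH"}
                for t in ["HATE", "INSULTS", "SEXUAL", "VIOLENCE", "MISCONDUCT"]
            ] + [
                # Prompt-attack detection applies to input only
                {"type": "PROMPT_ATTACK",
                 "inputStrength": "HIGH", "outputStrength": "NONE"}
            ]
        },
        "topicPolicyConfig": {
            "topicsConfig": [{
                "name": "LegalAdvice",
                "type": "DENY",
                "definition": "Requests for binding legal advice.",
            }]
        },
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
        },
    }

cfg = guardrail_config("enterprise-default")
print(len(cfg["contentPolicyConfig"]["filtersConfig"]))
```

The denied-topic and PII entries are the "custom guardrails for industry-specific content policies" the text mentions; the six content filters map onto the pre-configured categories.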

Enterprise controls by default

IAM authentication, encryption in transit and at rest, CloudTrail logging, and content guardrails are built in.

PrivateLink, VPC endpoints, and network isolation

The Bedrock infrastructure-security documentation states that Bedrock, as a managed service, is protected by AWS global network security. It does not, however, document VPC endpoint or PrivateLink support for Bedrock inference. This is a material gap if your deployment requires private-only network connectivity without egress to the public internet.

If private-only deployment is a hard requirement, verify with your AWS account manager whether Bedrock endpoints are available in AWS PrivateLink and whether private-only access can be enforced through network policies. If Bedrock is not available on PrivateLink, an air-gapped deployment would still require public egress for model invocations, which may violate your security posture.
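If your account manager confirms an interface endpoint exists, a VPC endpoint policy is how you would restrict it to model invocation from a single role. Everything below is illustrative: the action name, the role, and the existence of such an endpoint for Bedrock are assumptions pending that confirmation.

```python
import json

def endpoint_policy(account_id: str, role: str) -> dict:
    """Hypothetical VPC endpoint policy: one role, invocation only."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::{account_id}:role/{role}"
            },
            # Assumed action name for model invocation; verify with AWS
            "Action": ["bedrock:InvokeModel"],
            "Resource": "*",
        }],
    }

print(json.dumps(endpoint_policy("111122223333", "InferenceRole"))[:50])
```

Pairing a policy like this with security groups that block all other egress is what "private-only deployment" would mean in practice.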

Authentication, identity, and access control

Bedrock uses standard AWS IAM for all authentication and access control. The Bedrock IAM reference describes four authentication paths. Users authenticate as IAM users, as federated identities via AWS Identity Center or an external identity provider, through IAM roles for service-to-service access, or through temporary credentials from AWS Security Token Service.

Administrators define permissions using IAM policies that specify who gets access (principal), what operations they can perform (actions), which Bedrock resources they can access (resources), and under what circumstances (conditions). This model is identical to every other AWS service, which means if your platform team already manages AWS infrastructure, they already understand the access control model for Bedrock. There is no separate Claude authentication mechanism, no separate credential store, and no new identity tooling.
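The principal/action/resource/condition model described above can be sketched as a single identity-based policy: allow one role to invoke one model, and only from a named VPC. The ARNs, VPC ID, and condition key are placeholders for illustration.

```python
import json

def bedrock_invoke_policy(model_arn: str, vpc_id: str) -> dict:
    """Minimal identity-based policy for the action/resource/condition model."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "InvokeOneModelFromVpc",
            "Effect": "Allow",
            "Action": "bedrock:InvokeModel",   # what operation is allowed
            "Resource": model_arn,             # which Bedrock resource
            "Condition": {                     # under what circumstances
                "StringEquals": {"aws:SourceVpc": vpc_id}
            },
        }],
    }

policy = bedrock_invoke_policy(
    "arn:aws:bedrock:eu-central-1::foundation-model/example-model",
    "vpc-0abc123",
)
print(json.dumps(policy, indent=2)[:40])
```

The principal is supplied implicitly by attaching this policy to an IAM role or user, which is why it does not appear in the document itself. This is the same grammar your platform team already writes for S3 or EC2.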

Federated identity is supported, which means you can integrate Bedrock access with your enterprise identity provider through AWS Identity Center or through an external identity provider that AWS IAM supports. This is the path for organisations that use Okta, Azure Active Directory, or other SAML or OIDC identity systems. Credential helpers for just-in-time token generation are also supported, which allows you to rotate credentials frequently without storing long-lived secrets in MDM profiles.

Compliance certifications and frameworks

The AWS compliance directory lists certifications across HIPAA, FedRAMP High, GDPR, SOC 2, ISO, and FIPS 140-3. Bedrock inherits these compliance certifications as a managed service within AWS, which means that if AWS is in scope for your framework, Bedrock generally is too. Confirm that Bedrock itself appears on the relevant in-scope services list for each certification you rely on.

Two caveats are worth stating. First, GDPR data residency is not explicitly documented per region for Bedrock. AWS compliance documentation indicates GDPR eligibility broadly, but does not state whether customer data strictly remains in the selected AWS region or whether any processing happens outside the region. If GDPR data residency is a hard requirement, request written confirmation from AWS that customer data remains in the selected region.

Second, HIPAA eligibility on Bedrock requires that you enable a Business Associate Agreement (BAA) with AWS and configure HIPAA-eligible services. Bedrock itself is in scope for HIPAA, but the BAA is a separate contract with AWS. If HIPAA is a requirement, ensure the BAA covers Bedrock and that your AWS contract includes Bedrock in the list of in-scope services.

When Bedrock is the right choice and when it is not

Bedrock excels for enterprises that are already AWS-committed, have existing HIPAA or FedRAMP obligations that map cleanly to AWS, or have a strong preference to keep infrastructure, identity, and compliance tooling within a single vendor ecosystem. Bedrock is the default choice when your platform team already maintains AWS infrastructure and security controls.

Bedrock does not fit for multi-cloud enterprises or organisations with strong vendor-agnostic requirements. Bedrock is AWS-only. There is no equivalent Bedrock service in Azure or Google Cloud (Azure Foundry and Google Vertex are the alternatives, and they carry different terms). If your enterprise has a policy against cloud lock-in or needs to support deployments across multiple cloud providers, Bedrock constrains you to the AWS path.

Bedrock also may not fit if your regulatory jurisdiction or client contracts require vendor independence. Some financial services contracts, for example, mandate that core AI services come from an independent vendor, not from the same organisation that runs your infrastructure. In those cases, a standalone Anthropic contract or a different deployment route may be mandated.

What to ask your AWS account manager

Before committing to Bedrock, clarify these questions with AWS:

  1. What is the current regional availability for Claude Opus 4.7 and Sonnet 4.6 in your target deployment regions?
  2. What is the current pricing for Opus 4.7, Sonnet 4.6, and Haiku 4.5 on-demand and batch?
  3. Do you support VPC endpoints or PrivateLink for Bedrock inference, and if so, how do you configure private-only deployment?
  4. What is the scope of customer-managed KMS key support? Are there restrictions on key policies, key rotation, or cross-account key access?
  5. Does CloudTrail capture all Bedrock API invocations, including model inference calls? What is the event schema?
  6. For GDPR deployments, do you guarantee that customer data remains strictly in the selected AWS region?
  7. Is a Data Processing Agreement available via AWS Artifact, and does it cover Bedrock specifically?
  8. Can you provide the current list of AWS sub-processors used in Bedrock?

Primary sources

Nothing in this article is legal advice. It names regulatory frameworks and describes how each deployment route affects compliance posture. Compliance interpretation for your specific regulatory context, jurisdiction, and client contracts must be reviewed with qualified legal counsel. Verify current Anthropic documentation at https://claude.com/docs/cowork/3p/overview before making a procurement decision.
