When AWS announces a new region, for most of us the technical impact is minimal: change a region code in our Terraform provider and move on. But the AWS European Sovereign Cloud is not just another region. It’s an entirely new partition — the sixth in AWS history — and that changes the rules of the game.
In January 2026, AWS launched the first ESC region in the state of Brandenburg, Germany, under the name eusc-de-east-1. Unlike the European regions we already know — Ireland (eu-west-1), Frankfurt (eu-central-1) or Spain (eu-south-2) — the ESC operates under the aws-eusc partition, with its own control plane, its own IAM system, its own billing infrastructure and its own console at console.amazonaws-eusc.eu.
What does this mean in practice? That our existing AWS accounts have no visibility into the ESC. That we can’t assume roles across partitions. That ARNs change their prefix. That service endpoints are different. And that our Terraform code needs adaptations to work in this new environment.
In this article we’ll break down each of these technical implications, with concrete examples and Terraform configurations, so you can evaluate what it takes to deploy workloads in the ESC and how to prepare your infrastructure as code to support multiple partitions.
What is an AWS partition#
Before diving into the ESC’s technical differences, we need to understand what a partition is in AWS, since it’s a concept most of us haven’t had to deal with on a daily basis.
A partition is a group of AWS regions that share a common control infrastructure: IAM, billing, API endpoints and management console. Each AWS account belongs to a single partition, and it’s not possible to interact directly between partitions.
Until the ESC arrived, the publicly known partitions were:
| Partition | ARN prefix | DNS suffix | Purpose |
|---|---|---|---|
| aws | arn:aws | amazonaws.com | Global commercial cloud |
| aws-cn | arn:aws-cn | amazonaws.com.cn | China regions (operated by local partners) |
| aws-us-gov | arn:aws-us-gov | amazonaws.com | GovCloud (US) |
| aws-iso | arn:aws-iso | — | Classified regions (US)[^1] |
| aws-iso-b | arn:aws-iso-b | — | Classified regions (US)[^1] |
| aws-eusc | arn:aws-eusc | amazonaws.eu | European Sovereign Cloud |
What makes the ESC special is that it’s the first new public partition in years and the first designed specifically to meet European sovereignty requirements. This has direct consequences on how we build and operate our infrastructure.
The most visible difference is the DNS suffix. While in the commercial partition endpoints follow the pattern service.region.amazonaws.com, in the ESC the pattern is service.eusc-de-east-1.amazonaws.eu. Some examples:
| Service | aws partition | aws-eusc partition |
|---|---|---|
| EC2 | ec2.eu-west-1.amazonaws.com | ec2.eusc-de-east-1.amazonaws.eu |
| S3 | s3.eu-west-1.amazonaws.com | s3.eusc-de-east-1.amazonaws.eu |
| IAM | iam.amazonaws.com (global) | iam.eusc-de-east-1.amazonaws.eu (regional) |
| STS | sts.eu-west-1.amazonaws.com | sts.eusc-de-east-1.amazonaws.eu |
| Console | console.aws.amazon.com | console.amazonaws-eusc.eu |
Another important detail: in the commercial partition, IAM is a global service — policies, roles and users are created once and visible across all regions. In the ESC, IAM is regional (iam.eusc-de-east-1.amazonaws.eu), which reinforces isolation but changes how we manage identities.
Likewise, ARNs change their prefix. An S3 bucket in the commercial partition has an ARN like:
```
arn:aws:s3:::my-bucket
```
While the same resource in the ESC would be:
```
arn:aws-eusc:s3:::my-bucket
```
This directly affects any IAM policy, Terraform reference or CLI configuration that builds or validates ARNs.
Key technical differences#
We’ve already seen how endpoints and ARNs change, but the differences go much further. This table summarises the points that most impact how we design and operate infrastructure in the ESC:
| Aspect | Standard region (eu-west-1) | ESC (eusc-de-east-1) |
|---|---|---|
| Partition | aws | aws-eusc |
| ARN prefix | arn:aws | arn:aws-eusc |
| DNS suffix | amazonaws.com | amazonaws.eu |
| Console | console.aws.amazon.com | console.amazonaws-eusc.eu |
| IAM | Global (shared across regions) | Regional (isolated per region) |
| Organizations | Shared across the aws partition | Available, but as an independent instance — not connected to the aws one |
| Billing | USD, through AWS Inc. | EUR, through AWS EMEA SARL |
| Cross-partition connectivity | N/A (same partition) | No VPC peering, TGW or assume role between aws and aws-eusc |
| Route 53 | Global name servers | Dedicated name servers with European TLDs |
| Certificate authority | Amazon Trust Services (US) | European trust service provider |
| Service catalogue | ~200+ services | 90+ services in GA (expanding) |
| Access from outside the EU | No restrictions | Technical controls prevent it |
| Planned Local Zones | Multiple (already deployed) | Belgium, Netherlands, Portugal (announced) |
Some of these points deserve a more detailed explanation.
The most important concept to internalise is that there is no native connectivity between partitions. We cannot:
- Create VPC peering between a VPC in `eu-west-1` and another in `eusc-de-east-1`
- Connect networks via Transit Gateway cross-partition
- Perform `sts:AssumeRole` from an account in `aws` to a role in `aws-eusc`
- Share resources with RAM (Resource Access Manager) across partitions
- Replicate an S3 bucket directly between partitions
If we need to communicate workloads between both partitions, we’ll have to resort to external mechanisms: site-to-site VPN, Direct Connect with separate connections, or application-level integration through public APIs.
Billing and contracting. Billing in the ESC is in euros and the contract is established with Amazon Web Services EMEA SARL (Luxembourg), not AWS Inc. (Seattle). This has implications for procurement and finance teams: Enterprise Discount Programs (EDP), Reserved Instances and Savings Plans from the commercial partition are not transferable to the ESC.
Operation by European staff. AWS has confirmed that the ESC is operated exclusively by EU-resident staff, with technical controls that prevent access from outside European territory. This includes both access to customer data and operation of the underlying infrastructure.
IAM and identity management#
This is probably the change that most impacts day-to-day operations. In the commercial partition we’re used to IAM being global: we create a role in us-east-1 and use it from any region. In the ESC, IAM is regional, and that changes many things.
We need new accounts in the ESC. We can’t use our existing accounts from the aws partition. The sign-up process is done from eusc-de-east-1.signin.amazonaws-eusc.eu, and generates a completely independent account with its own root user, its own billing and its own Organizations hierarchy. If our organisation has 50 accounts in the commercial partition and wants to deploy workloads in the ESC, it will need to create and manage a second set of accounts with its own OU structure, SCPs and policies.
No cross-partition assume role. In the commercial partition, it’s common to have a centralised identity account from which we sts:AssumeRole into workload accounts. This pattern doesn’t work between partitions. A role in arn:aws:iam::123456789012:role/MyRole cannot assume a role in arn:aws-eusc:iam::987654321098:role/MyRole — they simply can’t see each other. If we need a CI/CD pipeline in the commercial partition to also deploy to the ESC, we’ll need to manage separate credentials for each partition. One option is to configure two identity providers in our CI system (for example, two OIDC configurations in GitHub Actions or Forgejo Actions), each pointing to its respective partition.
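As a sketch of that dual-credential setup, the ESC side of a GitHub Actions OIDC deploy role might look like the following. Everything specific here is hypothetical (the repository `my-org/my-repo`, the role name, the account variable), and we assume an OIDC identity provider for `token.actions.githubusercontent.com` already exists in the ESC account; the commercial partition would get a mirrored configuration under its own provider alias.

```hcl
provider "aws" {
  alias  = "esc"
  region = "eusc-de-east-1"
}

variable "esc_account_id" {
  type = string # hypothetical: the ESC account hosting the deploy role
}

data "aws_partition" "esc" {
  provider = aws.esc
}

# Deploy role assumed by the CI pipeline via OIDC. The federated principal ARN
# is built with the partition prefix, so the same module also works in aws.
resource "aws_iam_role" "deploy_esc" {
  provider = aws.esc
  name     = "ci-deploy" # hypothetical name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Federated = "arn:${data.aws_partition.esc.partition}:iam::${var.esc_account_id}:oidc-provider/token.actions.githubusercontent.com"
      }
      Action = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringLike = {
          "token.actions.githubusercontent.com:sub" = "repo:my-org/my-repo:ref:refs/heads/main"
        }
      }
    }]
  })
}
```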
IAM Identity Center (formerly SSO) is not shared across partitions. If we use it in the commercial partition to federate access with our corporate IdP (Entra ID, Okta, etc.), we'll need to configure a second, independent integration for the ESC once Identity Center is fully available there. The good news is that both instances can point to the same corporate IdP, so our users keep a single set of credentials. But permission sets, account assignments and groups will need to be managed separately.
Service principals and trust policies are also affected. In the commercial partition we use patterns like lambda.amazonaws.com or ec2.amazonaws.com. In the ESC, we need to verify whether the service principal keeps the same format or changes to the .amazonaws.eu domain. In our roles’ trust policies, any hardcoded reference to ARNs from the aws partition will stop working. The recommendation is to use the aws:PrincipalArn condition variable with patterns that include the correct partition prefix, or better yet, use Terraform’s aws_partition data source (which we’ll see later) to build ARNs dynamically.
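As an illustration of that recommendation, the trust policy below restricts who can assume a role using `aws:PrincipalArn` with the partition prefix resolved dynamically. The role name and the `ci-*` pattern are illustrative; the same configuration plans cleanly in both `aws` and `aws-eusc`.

```hcl
data "aws_partition" "current" {}
data "aws_caller_identity" "current" {}

resource "aws_iam_role" "partition_aware" {
  name = "partition-aware-role" # illustrative name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        AWS = "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:root"
      }
      Action = "sts:AssumeRole"
      Condition = {
        ArnLike = {
          # Only CI roles in this account and this partition may assume it
          "aws:PrincipalArn" = "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:role/ci-*"
        }
      }
    }]
  })
}
```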
Networking and connectivity#
As we’ve seen, there’s no native connectivity between partitions. But in practice, many organisations will need to communicate workloads between the ESC and their commercial regions — at least during a transition period. Let’s look at our options.
Site-to-site VPN is the most accessible option. We can establish IPSec tunnels between a Virtual Private Gateway (or Transit Gateway) in each partition, just as we would with an on-premises data centre. Traffic travels encrypted over the Internet, with the usual latency and bandwidth limitations. Nothing changes compared to how we configure VPNs today, except that both ends are AWS — but in different partitions.
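A sketch of the ESC side of such a VPN, under the assumption that the commercial side terminates the tunnels on a software appliance (for example strongSwan on EC2), since two AWS-managed VPN endpoints cannot peer with each other directly. The ASN, peer IP and VPC variable are placeholders.

```hcl
provider "aws" {
  alias  = "esc"
  region = "eusc-de-east-1"
}

variable "esc_vpc_id" {
  type = string # hypothetical: existing VPC in the ESC
}

# The "customer gateway" is the VPN appliance running in the commercial partition
resource "aws_customer_gateway" "commercial_side" {
  provider   = aws.esc
  bgp_asn    = 65000          # placeholder ASN
  ip_address = "203.0.113.10" # placeholder: appliance's public IP
  type       = "ipsec.1"
}

resource "aws_vpn_gateway" "esc" {
  provider = aws.esc
  vpc_id   = var.esc_vpc_id
}

resource "aws_vpn_connection" "cross_partition" {
  provider            = aws.esc
  customer_gateway_id = aws_customer_gateway.commercial_side.id
  vpn_gateway_id      = aws_vpn_gateway.esc.id
  type                = "ipsec.1"
  static_routes_only  = true
}
```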
Direct Connect is also available in the ESC, but requires a separate dedicated connection. We can’t reuse a Direct Connect Gateway from the commercial partition to reach VPCs in aws-eusc. If we already have a presence at a European interconnection point, we can request a second connection to the ESC, but it implies an additional contract and cost.
DNS and Route 53. The ESC has its own dedicated name servers with European TLDs, and an independent Route 53 instance. Hosted zones from the commercial partition are not visible from the ESC and vice versa. If we manage domains with Route 53 in the aws partition and want to resolve records from the ESC, we’ll need to configure subdomain delegation or use conditional forwarders. For new domains that only live in the ESC, we can use the aws-eusc Route 53 directly.
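Subdomain delegation can be codified with two provider aliases: create the zone in the ESC and publish its name servers as an NS record in the commercial-partition parent zone. The domain `esc.example.com` and the zone variable are hypothetical.

```hcl
provider "aws" {
  alias  = "esc"
  region = "eusc-de-east-1"
}

provider "aws" {
  alias  = "commercial"
  region = "eu-central-1"
}

variable "commercial_zone_id" {
  type = string # hypothetical: existing hosted zone for example.com
}

# Zone that lives in the ESC's independent Route 53
resource "aws_route53_zone" "esc" {
  provider = aws.esc
  name     = "esc.example.com"
}

# Delegation record in the commercial parent zone
resource "aws_route53_record" "delegation" {
  provider = aws.commercial
  zone_id  = var.commercial_zone_id
  name     = "esc.example.com"
  type     = "NS"
  ttl      = 86400
  records  = aws_route53_zone.esc.name_servers
}
```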
VPC Endpoints (PrivateLink). Interface endpoints and gateway endpoints work within the ESC for available services, but we can’t create a PrivateLink that crosses partitions. If we have a service exposed via PrivateLink in eu-west-1, consumers in eusc-de-east-1 won’t be able to access it directly — they’ll need to go through the VPN/Direct Connect and access the service via its private IP or an intermediate load balancer.
IP ranges. AWS publishes the ESC’s IP ranges in a separate ip-ranges.json file, available at https://docs.aws.eu/general/latest/gr/aws-ip-ranges.html. If we maintain security groups, NACLs or on-premises firewalls with AWS range lists, we’ll need to incorporate this new file into our update processes.
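Jumping ahead to Terraform for a moment, the prefixes can be pulled at plan time with the `http` provider. The URL below is a placeholder, and we assume the ESC file follows the same schema as the commercial `ip-ranges.json`.

```hcl
terraform {
  required_providers {
    http = {
      source = "hashicorp/http"
    }
  }
}

# Placeholder URL: check the ESC documentation for the real JSON location
data "http" "esc_ip_ranges" {
  url = "https://example.invalid/esc-ip-ranges.json"
}

locals {
  # All CIDR prefixes announced for S3 in the ESC, ready to feed into
  # security group rules or firewall allow-lists
  esc_s3_cidrs = [
    for p in jsondecode(data.http.esc_ip_ranges.response_body).prefixes :
    p.ip_prefix if p.service == "S3"
  ]
}
```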
Adapting Terraform for the ESC#
If we work with Terraform, the good news is that ESC support is already built into recent versions. Terraform 1.14+ with the AWS provider 6.x resolves ESC endpoints natively — just configure the region:
```hcl
provider "aws" {
  region = "eusc-de-east-1"
}
```
OpenTofu 1.11+ also supports the ESC natively. If we’re on older Terraform versions, we’ll need to upgrade or configure endpoints manually.
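On older versions, a manual override might look like this. The endpoint URLs follow the `service.region.amazonaws.eu` pattern described earlier; verify each one against the official endpoints table before relying on it.

```hcl
provider "aws" {
  region = "eusc-de-east-1"

  # Manual endpoint overrides for providers that predate native ESC support
  endpoints {
    ec2 = "https://ec2.eusc-de-east-1.amazonaws.eu"
    s3  = "https://s3.eusc-de-east-1.amazonaws.eu"
    sts = "https://sts.eusc-de-east-1.amazonaws.eu"
    iam = "https://iam.eusc-de-east-1.amazonaws.eu"
  }
}
```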
The aws_partition data source is our best ally for writing modules that work across any partition. In the ESC it returns aws-eusc, allowing us to build ARNs dynamically:
```hcl
data "aws_partition" "current" {}

resource "aws_iam_role" "example" {
  name = "my-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Service = "ecs-tasks.amazonaws.com"
      }
      Action = "sts:AssumeRole"
    }]
  })
}

# Attached as a separate resource: the managed_policy_arns argument on
# aws_iam_role was removed in AWS provider 6.x
resource "aws_iam_role_policy_attachment" "example" {
  role       = aws_iam_role.example.name
  policy_arn = "arn:${data.aws_partition.current.partition}:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
```
This pattern eliminates hardcoded arn:aws:... ARNs that would break in the ESC.
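To make that guarantee explicit, a module can assert which partitions it has actually been validated against with a `check` block (Terraform 1.5+). The list of partitions here is illustrative.

```hcl
check "supported_partition" {
  # Scoped data source: evaluated only for this check
  data "aws_partition" "current" {}

  assert {
    condition     = contains(["aws", "aws-eusc"], data.aws_partition.current.partition)
    error_message = "This module has only been validated in the aws and aws-eusc partitions."
  }
}
```

Unlike a hard failure, a failed check surfaces as a warning during plan and apply, which makes it a gentle guard-rail while ESC support is being rolled out module by module.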
Multi-partition deployments with provider aliases allow us to manage resources in both partitions from the same code. Each alias needs its own credentials, since — as we’ve seen — there’s no cross-partition assume role:
```hcl
provider "aws" {
  alias  = "esc"
  region = "eusc-de-east-1"
}

provider "aws" {
  alias  = "commercial"
  region = "eu-central-1"
}

resource "aws_s3_bucket" "sovereign_data" {
  provider = aws.esc
  bucket   = "sovereign-data"
}

resource "aws_s3_bucket" "public_assets" {
  provider = aws.commercial
  bucket   = "public-assets"
}
```
Separate Terraform state per partition. We shouldn’t share state files between partitions — isolation is the whole point. Each partition should have its own S3 backend:
```hcl
terraform {
  backend "s3" {
    bucket = "my-tfstate-esc"
    key    = "infrastructure/terraform.tfstate"
    region = "eusc-de-east-1"
  }
}
```
Partition-aware modules. If we maintain reusable modules, we can use variables and locals to adapt behaviour based on the partition, especially for services not yet available in the ESC:
```hcl
data "aws_partition" "current" {}

locals {
  is_sovereign = data.aws_partition.current.partition == "aws-eusc"

  enable_cloudfront    = !local.is_sovereign # Not available in the ESC yet
  enable_guardduty_org = !local.is_sovereign # Limited in the ESC
}
```
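Building on those locals, a lifecycle precondition can make a module fail at plan time rather than at apply time when it lands in the wrong partition. The resource and names are illustrative.

```hcl
resource "aws_s3_bucket" "cdn_origin" {
  bucket = "cdn-origin-bucket" # illustrative name

  lifecycle {
    precondition {
      condition     = local.enable_cloudfront
      error_message = "This module requires CloudFront, which is not yet available in aws-eusc."
    }
  }
}
```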
CloudFormation also works without modifications, as the AWS::Partition pseudo-parameter returns aws-eusc automatically. And AWS CDK has supported the ESC since August 2025 — just configure the region in the stack. However, there’s an important limitation: the Landing Zone Accelerator (LZA) doesn’t support multiple partitions. If we use it to deploy our landing zone in the commercial partition, we’ll need a separate LZA instance for the ESC with its own configuration.
Available services and limitations#
The ESC’s service catalogue is surprisingly broad for a new partition. This isn’t a launch with five services and a “coming soon” page. According to the official endpoints table, the ESC has over 90 services available from day one, covering the usual pillars: compute (EC2, Lambda), containers (ECS, EKS, Fargate), databases (Aurora, DynamoDB, RDS, Redshift), storage (S3, EBS, EFS, FSx), networking (VPC, Transit Gateway, Direct Connect, Route 53, ELB), security (IAM, KMS, Secrets Manager, Private CA, WAFv2, GuardDuty), AI/ML (Bedrock, SageMaker), observability (CloudWatch, X-Ray, CloudTrail) and integration (SQS, SNS, EventBridge, Step Functions, API Gateway). In practice, if we’re running a typical workload in Frankfurt today, we can very likely replicate it in the ESC.
That said, there are notable absences we need to account for:
- CloudFront — No CDN in the ESC. If our architecture relies on edge distribution, we’ll need alternatives. According to tecracer, it’s expected by the end of 2026.
- IAM Identity Center — Centralised SSO management at the organisation level isn’t available yet. We can federate with external IdPs on a per-account basis, but without the convenience of centralised permission sets.
- Shield Advanced and Firewall Manager — Advanced DDoS protection and centralised firewall rule management aren’t available. Basic Shield is included.
- Inspector — No automated vulnerability scanning for workloads.
- GuardDuty — Available but with limitations: no organisation-level management and missing some newer detection capabilities.
Regarding pricing, the ESC operates with the standard AWS pay-as-you-go model, but with an estimated 10-15% premium over Frankfurt (eu-central-1) for comparable services. Billing is in euros.
Technical migration strategy#
Moving workloads to the ESC is not a migration between regions — it’s a migration between clouds. The sooner we internalise this, the better we’ll plan. There are no shortcuts: we can’t copy EBS snapshots between partitions, we can’t replicate S3 buckets directly, and we can’t reuse our existing accounts or roles.
Phase 1: Identity foundation. First, establish the account structure in the ESC. Create the organisation, define OUs, configure SCPs and federate access with our corporate IdP. If we use Control Tower (available in the ESC), this is the time to deploy it. All of this should be codified in Terraform from day one.
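A minimal sketch of that foundation, with illustrative OU and policy names. The region-lock SCP is deliberately simplified: real SCPs usually carve out exceptions for global services.

```hcl
provider "aws" {
  region = "eusc-de-east-1" # management account of the ESC organisation
}

resource "aws_organizations_organization" "esc" {
  feature_set = "ALL" # required to use SCPs
}

resource "aws_organizations_organizational_unit" "workloads" {
  name      = "workloads" # illustrative OU
  parent_id = aws_organizations_organization.esc.roots[0].id
}

# Simplified SCP: deny any action requested outside the ESC region
resource "aws_organizations_policy" "region_lock" {
  name = "deny-outside-eusc-de-east-1" # illustrative name
  content = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Deny"
      Action   = "*"
      Resource = "*"
      Condition = {
        StringNotEquals = { "aws:RequestedRegion" = "eusc-de-east-1" }
      }
    }]
  })
}

resource "aws_organizations_policy_attachment" "region_lock" {
  policy_id = aws_organizations_policy.region_lock.id
  target_id = aws_organizations_organizational_unit.workloads.id
}
```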
Phase 2: Networking. VPCs, subnets, NAT Gateways, security groups. If we need connectivity with the commercial partition during the transition, set up the VPN or Direct Connect. Define the DNS strategy: which domains live in the ESC’s Route 53, which are delegated from the commercial partition.
Phase 3: CI/CD pipeline. Adapt our pipelines to deploy to both partitions. This means configuring separate credentials (OIDC or access keys) for the ESC and ensuring our Terraform modules use aws_partition instead of hardcoded ARNs. If we use ECR, we need the pipeline to push container images to the ESC registry as well.
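For the ECR part, the registry has to exist in both partitions. Reusing the alias pattern from earlier (the repository name is illustrative), the pipeline then pushes the same image tag to both registries, authenticating against each one with that partition's credentials.

```hcl
provider "aws" {
  alias  = "commercial"
  region = "eu-central-1"
}

provider "aws" {
  alias  = "esc"
  region = "eusc-de-east-1"
}

resource "aws_ecr_repository" "app_commercial" {
  provider = aws.commercial
  name     = "my-app" # illustrative
}

resource "aws_ecr_repository" "app_esc" {
  provider = aws.esc
  name     = "my-app"
}
```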
Phase 4: Workload deployment. Start with non-critical workloads to validate that everything works: infrastructure, permissions, endpoints, monitoring. Resolve the inevitable issues before touching production.
Phase 5: Data migration. Usually the most complex part. Options depend on volume and downtime tolerance:
- S3: Use tools like `aws s3 sync` or DataSync between partitions, going through the network (VPN/Direct Connect or Internet). There's no native cross-partition replication.
- Databases: Export/import with native dumps, or use DMS (available in the ESC) with a source endpoint in the commercial partition accessible over the network.
- EBS: Export snapshots to S3, transfer to S3 in the ESC, re-import. Not direct but it works.
Phase 6: Cutover. Redirect traffic to the ESC workloads. Keep the commercial partition infrastructure as a fallback until confident, then decommission.
A practical tip: don’t try to migrate everything at once. Identify the workloads that truly need sovereignty guarantees and start there. The rest can stay in the commercial partition — not everything needs to be in the ESC.
Conclusion#
The AWS European Sovereign Cloud is not simply another region with a sovereignty label. It’s an independent partition with all the technical consequences that implies: separate accounts, regional IAM, different endpoints, ARNs with a new prefix, no native connectivity with the commercial partition and a service catalogue that, while broad, still has relevant gaps.
For those of us working with infrastructure as code, the good news is that the tools are ready. Terraform with provider 6.x resolves endpoints natively, aws_partition lets us write modules that work across any partition, and provider alias patterns with separate state give us a solid foundation for managing multi-partition deployments.
The key is not to underestimate the effort. This isn’t about changing eu-central-1 to eusc-de-east-1 in a variables file. It’s about rethinking account structure, identity management, network connectivity, CI/CD pipelines and data strategy. It’s also about acknowledging that the service catalogue, while covering the fundamental pillars, is not yet on par with the commercial partition — and planning our architectures accordingly. Above all, it’s about deciding with good judgement which workloads truly need to be in the ESC and which can remain in the commercial partition.
The ESC addresses a real need in the European market. But like every architecture decision, it requires understanding the trade-offs before making the leap.
References#
- AWS European Sovereign Cloud
- Opening the AWS European Sovereign Cloud
- AWS European Sovereign Cloud FAQs
- The AWS European Sovereign Cloud: A New Horizon for Digital Sovereignty (Keepler)
[^1]: The `aws-iso` and `aws-iso-b` partitions are not publicly documented by AWS, but their identifiers appear in the AWS SDK (botocore) `partitions.json` file and on the AWS Secret Cloud and AWS Top Secret Cloud product pages.


