Deploy secure static websites with Amazon S3, CloudFront, and Route 53 using Terraform

Hosting static websites on traditional servers requires managing OS patches, web server configuration, SSL renewal, and scaling—tasks that add no value for static content. Amazon S3, CloudFront, and Route 53 remove this overhead while providing global CDN distribution, managed SSL, and DNS. This note demonstrates deploying this infrastructure with Terraform in three stages: DNS setup, infrastructure provisioning, and automated content deployment. You’ll learn to build a production-ready hosting platform that follows security best practices.

In this note, I used Amazon S3 to store website files, Amazon CloudFront to cache and serve them globally over HTTPS, and Amazon Route 53 to route the custom domain to CloudFront. Together, they form a complete static hosting platform. The setup also includes ACM for SSL certificates, AWS KMS encryption for storage, and DNSSEC for DNS security. If you want to follow along, the code is in my GitHub repository: kunduso/aws-s3-cloudfront-route53-terraform.

Solution Overview

This solution consists of three dependent stages. Stage 1 provisions the Route 53 hosted zone and enables DNSSEC. Stage 2 deploys the S3 buckets, CloudFront distribution, and SSL certificate, using the hosted zone from Stage 1 for DNS validation. Stage 3 handles content deployment separately, relying on the infrastructure built in Stage 2, so website content can be updated without re-provisioning infrastructure.
[Image: deploy-static-content-s3-cloudfront-route53. Stage 1: Provision the Route 53 hosted zone; Stage 2: Deploy S3 buckets, CloudFront distribution, and SSL certificate; Stage 3: Deploy content and invalidate cache]

Prerequisites

This use case requires the following prerequisites:

PreReq-1: An AWS account with an IAM role configured for GitHub Actions via OIDC, which eliminates long-lived credentials. I covered this setup in this note. Store the role ARN as a GitHub secret named IAM_ROLE

PreReq-2: An Amazon S3 bucket for Terraform remote state

PreReq-3: A registered domain with access to change its nameservers

(Optional) PreReq-4: An Infracost API key stored as a GitHub secret named INFRACOST_API_KEY, used for cost estimation on pull requests

Implementation

This implementation has three stages. Each stage has its code in a separate folder (dns for Stage 1, infrastructure for Stage 2, and content for Stage 3) and its own GitHub Actions workflow. I’ll now detail each stage.

Stage 1: Provision the Route 53 hosted zone
In this stage (code in the dns folder), I provisioned the Route 53 hosted zone, which became the domain’s authoritative DNS zone once I configured its nameservers at the domain registrar (Squarespace).

I also enabled DNSSEC using an AWS KMS (Key Management Service) Key Signing Key to cryptographically protect DNS responses against tampering.

The hosted zone setup follows the pattern from my previous note on Route 53 with a load balancer, and DNSSEC configuration is detailed in this note.

After Terraform created the hosted zone, I found the nameserver values in the route53_nameservers output in the GitHub Actions workflow logs (visible in the Terraform apply step), then updated the domain registrar’s nameserver entries with those values. DNS propagation can take upward of 24 hours, though it’s typically much faster. Stage 2 depends on this step completing: the ACM certificate validation in Stage 2 will not succeed until Route 53 is authoritative for your domain.

Additionally, to complete the DNSSEC chain of trust, I configured the DS record at my domain registrar (Squarespace) using the ds_record output from the Terraform apply step. This output contains the key tag, algorithm, digest type, and digest value that the registrar needs. Without this step, the zone is signed but resolvers cannot verify the signatures since the parent zone has no delegation signer record pointing to the KSK.
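Putting Stage 1 together, the Terraform looks roughly like the sketch below. This is a minimal illustration, not the exact code from the repository: the domain, resource names, and key policy details are placeholders, and the key policy granting the dnssec-route53.amazonaws.com service access to the KSK is omitted for brevity.

```hcl
# Hosted zone that becomes authoritative once the registrar's
# nameservers point at it.
resource "aws_route53_zone" "this" {
  name = "example.com"
}

# KMS key used as the Key Signing Key (KSK). Route 53 DNSSEC requires
# an asymmetric ECDSA P-256 signing key in us-east-1, hence the
# aliased provider. (The key policy for dnssec-route53.amazonaws.com
# is omitted here.)
resource "aws_kms_key" "dnssec" {
  provider                 = aws.us_east_1
  customer_master_key_spec = "ECC_NIST_P256"
  key_usage                = "SIGN_VERIFY"
}

resource "aws_route53_key_signing_key" "this" {
  hosted_zone_id             = aws_route53_zone.this.id
  key_management_service_arn = aws_kms_key.dnssec.arn
  name                       = "ksk"
}

resource "aws_route53_hosted_zone_dnssec" "this" {
  hosted_zone_id = aws_route53_key_signing_key.this.hosted_zone_id
}

# Outputs surfaced in the workflow logs for the registrar updates.
output "route53_nameservers" {
  value = aws_route53_zone.this.name_servers
}

output "ds_record" {
  value = aws_route53_key_signing_key.this.ds_record
}
```

The ds_record attribute exposes exactly the values (key tag, algorithm, digest type, and digest) that the registrar asks for when you create the DS record.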

Stage 2: Deploy S3 buckets, CloudFront distribution, and SSL certificate

The second stage creates the remaining infrastructure components. Let us review each step.

Step 2.1: Provision SSL certificate (ACM + DNS validation)

I requested an ACM certificate for the root domain and www subdomain, then validated it using DNS records in the Route 53 hosted zone from Stage 1. The ACM certificate must be provisioned in us-east-1 since CloudFront requires certificates in that region. I covered the ACM provisioning and DNS validation pattern in detail in my note on Route 53 with a load balancer (steps 2-4 apply here).

One thing my previous note doesn’t cover that’s worth mentioning here is the provider = aws.us_east_1 requirement. The ACM resources use an aliased provider because CloudFront only accepts certificates issued in us-east-1, regardless of where the rest of your infrastructure lives (which wasn’t a constraint in the previous note).
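Sketched in Terraform, the aliased provider and the DNS-validated certificate follow the standard pattern below (the domain name and the hosted_zone_id variable are illustrative):

```hcl
# CloudFront only accepts certificates from us-east-1, so the ACM
# resources use an aliased provider.
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

resource "aws_acm_certificate" "this" {
  provider                  = aws.us_east_1
  domain_name               = "example.com"
  subject_alternative_names = ["www.example.com"]
  validation_method         = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

# DNS validation records in the hosted zone from Stage 1.
resource "aws_route53_record" "validation" {
  for_each = {
    for dvo in aws_acm_certificate.this.domain_validation_options :
    dvo.domain_name => dvo
  }
  zone_id = var.hosted_zone_id
  name    = each.value.resource_record_name
  type    = each.value.resource_record_type
  records = [each.value.resource_record_value]
  ttl     = 60
}

# Waits until ACM confirms the DNS records and issues the certificate.
resource "aws_acm_certificate_validation" "this" {
  provider                = aws.us_east_1
  certificate_arn         = aws_acm_certificate.this.arn
  validation_record_fqdns = [for r in aws_route53_record.validation : r.fqdn]
}
```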

Step 2.2: Create an AWS KMS key for S3 encryption

The following image shows the aws_kms_key and associated resources for Amazon S3 encryption.

This setup uses two S3 buckets — one for website content and one for operational files such as error pages and access logs (detailed in Steps 2.3 and 2.4). Both use a custom AWS KMS key for server-side encryption instead of the default AWS-managed key. A custom key provides control over the key policy, which is necessary here because CloudFront needs the kms:Decrypt permission to read objects from encrypted buckets via Origin Access Control.

The key policy also grants the S3 service encrypt and decrypt access, and the account root full key management permissions.

I also created an alias for easier identification.
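A hedged sketch of the key, its policy, and the alias (the statement names, alias name, and exact policy wording are illustrative):

```hcl
data "aws_caller_identity" "current" {}

resource "aws_kms_key" "s3" {
  description         = "CMK for S3 bucket encryption"
  enable_key_rotation = true

  # Key policy: account root gets full management, S3 can use the key,
  # and CloudFront can decrypt objects fetched through OAC.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "EnableRootPermissions"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" }
        Action    = "kms:*"
        Resource  = "*"
      },
      {
        Sid       = "AllowS3Use"
        Effect    = "Allow"
        Principal = { Service = "s3.amazonaws.com" }
        Action    = ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"]
        Resource  = "*"
      },
      {
        Sid       = "AllowCloudFrontDecrypt"
        Effect    = "Allow"
        Principal = { Service = "cloudfront.amazonaws.com" }
        Action    = ["kms:Decrypt"]
        Resource  = "*"
      }
    ]
  })
}

resource "aws_kms_alias" "s3" {
  name          = "alias/s3-website-encryption"
  target_key_id = aws_kms_key.s3.key_id
}
```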

Step 2.3: Create an S3 website bucket with encryption, versioning, and lifecycle policies

The website bucket stores the static HTML files served through CloudFront. It uses the KMS key from Step 2.2 for server-side encryption, blocks all public access, and enables versioning so you can roll back content changes. A lifecycle policy expires noncurrent versions after 90 days and cleans up incomplete multipart uploads. The bucket also logs S3 access requests to the ops bucket under the access_logs/ prefix.
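The website bucket configuration can be sketched as follows. Bucket and resource names are illustrative, and the 7-day multipart-upload cleanup window is my assumption; the rest mirrors the settings described above.

```hcl
resource "aws_s3_bucket" "website" {
  bucket = "example-website-bucket"
}

# Server-side encryption with the custom KMS key from Step 2.2.
resource "aws_s3_bucket_server_side_encryption_configuration" "website" {
  bucket = aws_s3_bucket.website.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.s3.arn
    }
  }
}

resource "aws_s3_bucket_public_access_block" "website" {
  bucket                  = aws_s3_bucket.website.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Versioning allows rolling back content changes.
resource "aws_s3_bucket_versioning" "website" {
  bucket = aws_s3_bucket.website.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Expire old versions and clean up incomplete multipart uploads.
resource "aws_s3_bucket_lifecycle_configuration" "website" {
  bucket = aws_s3_bucket.website.id
  rule {
    id     = "cleanup"
    status = "Enabled"
    filter {}
    noncurrent_version_expiration {
      noncurrent_days = 90
    }
    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }
}

# S3 server access logs go to the ops bucket (Step 2.4).
resource "aws_s3_bucket_logging" "website" {
  bucket        = aws_s3_bucket.website.id
  target_bucket = aws_s3_bucket.ops.id
  target_prefix = "access_logs/"
}
```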

Step 2.4: Create S3 ops bucket for error pages and CloudFront access logs

The ops bucket serves two purposes: it hosts custom error pages (404, 500) that CloudFront serves when errors occur, and it receives CloudFront access logs under the logs/ prefix. Because CloudFront’s legacy logging requires bucket ACL support, this bucket uses BucketOwnerPreferred ownership controls — unlike the website bucket, which is purely private. Lifecycle policies transition logs to Standard-IA after 30 days, Glacier after 90 days, and delete them after one year to control storage costs.
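A sketch of the ops bucket’s ownership controls and log lifecycle rules (bucket and rule names are illustrative):

```hcl
resource "aws_s3_bucket" "ops" {
  bucket = "example-ops-bucket"
}

# CloudFront's legacy (standard) logging requires ACL support on the
# target bucket, so ownership is BucketOwnerPreferred here.
resource "aws_s3_bucket_ownership_controls" "ops" {
  bucket = aws_s3_bucket.ops.id
  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

# Tier logs down over time, then delete them after a year.
resource "aws_s3_bucket_lifecycle_configuration" "ops" {
  bucket = aws_s3_bucket.ops.id
  rule {
    id     = "log-retention"
    status = "Enabled"
    filter {
      prefix = "logs/"
    }
    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }
    transition {
      days          = 90
      storage_class = "GLACIER"
    }
    expiration {
      days = 365
    }
  }
}
```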

Step 2.5: Create CloudFront Origin Access Controls

Origin Access Control (OAC) is the mechanism that allows CloudFront to access private S3 buckets. I created two OAC resources — one for the website bucket and one for the ops bucket’s error pages — each using SigV4 signing to authenticate requests, with origin_access_control_origin_type set to s3.
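In Terraform, the two OAC resources are small (the names are illustrative):

```hcl
resource "aws_cloudfront_origin_access_control" "website" {
  name                              = "website-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

resource "aws_cloudfront_origin_access_control" "ops" {
  name                              = "ops-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}
```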

Step 2.6: Create CloudFront distribution

The distribution is the central resource that ties everything together. It defines two origins (the backend sources from which CloudFront fetches content): the website bucket for static content and the ops bucket for error pages.

The distribution is configured with aliases for both the root domain and the www subdomain, and uses index.html as the default root object, so visitors hitting the base URL get the homepage.

The default cache behavior routes all requests to the website bucket. It redirects HTTP visitors to HTTPS automatically (viewer_protocol_policy = "redirect-to-https"), enables compression for faster delivery, and uses the Managed-CachingOptimized AWS cache policy. The Managed-SecurityHeadersPolicy AWS policy is also attached, which adds security headers like Strict-Transport-Security, X-Content-Type-Options, and X-Frame-Options to all responses.

An ordered cache behavior matches the /error-pages/* path pattern and routes those requests to the ops bucket — a separate behavior is needed because the error pages are stored in a different origin than the website content.

Custom error responses map the HTTP 404, 403, and 500 error codes to the corresponding error pages.

Finally, the distribution’s access logging writes to the ops bucket under the logs/ prefix, and the ACM certificate from Step 2.1 enables HTTPS on the custom domain.
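A condensed sketch of the distribution tying these pieces together. Origin IDs, the domain, and the error-page paths are illustrative; the 403 and 500 responses follow the same pattern as the 404 block shown, and the managed policies are looked up by name via data sources.

```hcl
# Look up the AWS managed policies by name rather than hardcoding IDs.
data "aws_cloudfront_cache_policy" "optimized" {
  name = "Managed-CachingOptimized"
}

data "aws_cloudfront_response_headers_policy" "security" {
  name = "Managed-SecurityHeadersPolicy"
}

resource "aws_cloudfront_distribution" "this" {
  enabled             = true
  aliases             = ["example.com", "www.example.com"]
  default_root_object = "index.html"

  origin {
    origin_id                = "website"
    domain_name              = aws_s3_bucket.website.bucket_regional_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.website.id
  }

  origin {
    origin_id                = "ops"
    domain_name              = aws_s3_bucket.ops.bucket_regional_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.ops.id
  }

  default_cache_behavior {
    target_origin_id           = "website"
    viewer_protocol_policy     = "redirect-to-https"
    allowed_methods            = ["GET", "HEAD"]
    cached_methods             = ["GET", "HEAD"]
    compress                   = true
    cache_policy_id            = data.aws_cloudfront_cache_policy.optimized.id
    response_headers_policy_id = data.aws_cloudfront_response_headers_policy.security.id
  }

  # Error pages live in a different origin, hence a separate behavior.
  ordered_cache_behavior {
    path_pattern           = "/error-pages/*"
    target_origin_id       = "ops"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    cache_policy_id        = data.aws_cloudfront_cache_policy.optimized.id
  }

  custom_error_response {
    error_code         = 404
    response_code      = 404
    response_page_path = "/error-pages/404.html"
  }
  # (similar custom_error_response blocks for 403 and 500)

  logging_config {
    bucket = aws_s3_bucket.ops.bucket_domain_name
    prefix = "logs/"
  }

  viewer_certificate {
    acm_certificate_arn      = aws_acm_certificate.this.arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1.2_2021"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
}
```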

Step 2.7: Create Route 53 A records for CloudFront

In this step, I created two alias A records — one for the root domain and one for www — to point to the CloudFront distribution. These are AWS alias records, which means they resolve within the AWS network with no additional DNS hop. This is also what connects the custom domain names (configured as aliases on the distribution in Step 2.6) to actual DNS routing.
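The alias records can be sketched as follows (the domain and the hosted_zone_id variable are illustrative):

```hcl
resource "aws_route53_record" "root" {
  zone_id = var.hosted_zone_id
  name    = "example.com"
  type    = "A"

  # AWS alias records resolve inside the AWS network; no extra DNS hop.
  alias {
    name                   = aws_cloudfront_distribution.this.domain_name
    zone_id                = aws_cloudfront_distribution.this.hosted_zone_id
    evaluate_target_health = false
  }
}

resource "aws_route53_record" "www" {
  zone_id = var.hosted_zone_id
  name    = "www.example.com"
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.this.domain_name
    zone_id                = aws_cloudfront_distribution.this.hosted_zone_id
    evaluate_target_health = false
  }
}
```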

Step 2.8: Configure the two S3 bucket policies for CloudFront access

The bucket policies are what actually grant CloudFront permission to access the S3 buckets through OAC.

The website bucket policy allows s3:GetObject for the CloudFront service principal, scoped to the distribution’s ARN.

The ops bucket policy has two statements: s3:GetObject scoped to the error-pages/* prefix for serving error pages, and s3:PutObject scoped to the logs/* prefix for writing access logs.

Finally, aws_s3_bucket_policy resources attach these two policies to the corresponding S3 buckets.
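The website bucket policy, for example, can be sketched like this (statement names are illustrative; the ops bucket policy follows the same pattern with its GetObject and PutObject statements scoped to their prefixes):

```hcl
data "aws_iam_policy_document" "website" {
  statement {
    sid     = "AllowCloudFrontRead"
    actions = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.website.arn}/*"]

    principals {
      type        = "Service"
      identifiers = ["cloudfront.amazonaws.com"]
    }

    # Scope access to this specific distribution, not all of CloudFront.
    condition {
      test     = "StringEquals"
      variable = "AWS:SourceArn"
      values   = [aws_cloudfront_distribution.this.arn]
    }
  }
}

resource "aws_s3_bucket_policy" "website" {
  bucket = aws_s3_bucket.website.id
  policy = data.aws_iam_policy_document.website.json
}
```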

Step 2.9: Store infrastructure outputs in SSM Parameter Store

The final step in Stage 2 stores the key infrastructure values — the S3 bucket names, CloudFront distribution ID, and CloudFront domain name — in an SSM Parameter Store SecureString parameter encrypted with a dedicated KMS key.

Storing these values in SSM Parameter Store enables the GitHub Actions workflow in Stage 3 to deploy content and invalidate the cache, without needing to share the Terraform state between stages.
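A sketch of the SecureString parameter (the parameter name, key reference, and JSON field names are illustrative; Stage 3 must read back whatever names are stored here):

```hcl
resource "aws_ssm_parameter" "outputs" {
  name   = "/static-site/infrastructure-outputs"
  type   = "SecureString"
  key_id = aws_kms_key.ssm.arn # dedicated KMS key for this parameter

  # One JSON blob keeps all the cross-stage values in a single place.
  value = jsonencode({
    website_bucket             = aws_s3_bucket.website.bucket
    ops_bucket                 = aws_s3_bucket.ops.bucket
    cloudfront_distribution_id = aws_cloudfront_distribution.this.id
    cloudfront_domain_name     = aws_cloudfront_distribution.this.domain_name
  })
}
```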

Stage 3: Deploy content and invalidate cache

The final stage separates content deployment from infrastructure, allowing static website updates without re-running Terraform. Static HTML files are in the content folder, and a GitHub Actions workflow (deploy-content.yml) automates deployment on every merge to the main branch.

Step 3.1: Authenticate to AWS via OIDC

The workflow uses the same OIDC-based authentication as Stages 1 and 2; no long-lived credentials are stored anywhere. GitHub Actions assumes the IAM role and receives temporary credentials scoped to the session.

Step 3.2: Read infrastructure outputs from SSM Parameter Store

Using the AWS CLI command aws ssm get-parameter, the workflow retrieves the S3 bucket name and CloudFront distribution ID from the SecureString parameter created in Step 2.9. This is how the stages connect without sharing Terraform state — SSM acts as the bridge between the infrastructure and content pipelines.
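As a workflow-step sketch (the parameter name and JSON field names are illustrative and must match whatever Step 2.9 stored):

```yaml
- name: Read infrastructure outputs
  run: |
    # Decrypt and parse the SecureString parameter from Step 2.9.
    OUTPUTS=$(aws ssm get-parameter \
      --name "/static-site/infrastructure-outputs" \
      --with-decryption \
      --query "Parameter.Value" --output text)
    # Export the values for later workflow steps.
    echo "BUCKET=$(echo "$OUTPUTS" | jq -r .website_bucket)" >> "$GITHUB_ENV"
    echo "DISTRIBUTION_ID=$(echo "$OUTPUTS" | jq -r .cloudfront_distribution_id)" >> "$GITHUB_ENV"
```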

Step 3.3: Deploy files to S3

The workflow syncs the content folder to the S3 website bucket using aws s3 sync with the --delete flag, which removes any files from the bucket that no longer exist in the source folder. The --cache-control flag sets the Cache-Control response header so browsers can cache files for better performance.
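A sketch of the sync step (the max-age value is illustrative, and BUCKET is assumed to have been exported by the earlier SSM step):

```yaml
- name: Sync content to S3
  run: |
    # --delete removes bucket files no longer present in the source;
    # --cache-control sets the Cache-Control header on uploaded objects.
    aws s3 sync ./content "s3://$BUCKET" \
      --delete \
      --cache-control "public, max-age=3600"
```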

Step 3.4: Invalidate CloudFront cache

After uploading new content, the workflow creates a CloudFront cache invalidation for all paths (/*) using the aws cloudfront create-invalidation command. Without this step, CloudFront would continue serving the previously cached version for up to 24 hours (the default TTL of the Managed-CachingOptimized cache policy). The invalidation ensures visitors see the updated content immediately.
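And the invalidation step (DISTRIBUTION_ID is assumed to have been exported earlier in the workflow):

```yaml
- name: Invalidate CloudFront cache
  run: |
    # Invalidate everything so visitors get the new content immediately.
    aws cloudfront create-invalidation \
      --distribution-id "$DISTRIBUTION_ID" \
      --paths "/*"
```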

This separation lets infrastructure changes and content updates follow independent workflows. A new HTML change triggers only the content pipeline: no Terraform plan, no infrastructure risk.

Verification

After deploying all three stages, the site was live at https://kunduso.com/ with a secure connection confirmed by the browser.

On the AWS console, the CloudFront distribution showed the alternate domain names (kunduso.com and www.kunduso.com), the attached ACM certificate, the TLSv1.2_2021 security policy, and standard logging enabled.

The two S3 origins — one for website content and one for the ops bucket — were configured correctly.

The custom error responses are in place — HTTP 403 and 404 errors serve the custom 404 page, while 500 errors serve the 500 page, both from the ops bucket’s /error-pages/ prefix.

The Route 53 hosted zone shows the A records (root and www) aliased to CloudFront, the NS and SOA records, and the CNAME records created during ACM DNS validation.

Conclusion

In this note, I walked through deploying a secure static website on AWS using Terraform across three stages — provisioning DNS with Route 53 and DNSSEC, building the infrastructure with S3, CloudFront, ACM, and KMS, and automating content deployment through GitHub Actions. Each stage operates independently, so updating content never risks the infrastructure.

If you have a static site idea — a portfolio, documentation, a landing page — and were wondering how to host it on AWS with proper security and automation, this note has the blueprint. Fork the repo, swap in your domain, and ship it.

If you have any questions or suggestions, feel free to comment or get in touch.
