Nothing is more ubiquitous in the world of cloud computing services than Amazon S3. Organizations of every size use S3 to store their data, from tiny startups to massive enterprises. Thanks to its ease of use, scalability, and (in most cases) dirt-cheap pricing, S3 has become the top cloud storage solution in the world.
Most companies use S3 to store and serve media like images and videos quickly and cheaply. Some take advantage of the different storage classes like Glacier for durable long-term archiving of essential documents. You can also leverage other AWS security tools like Macie to automatically alert you to potential sensitive data exposure. S3 also helps with security by encrypting your data at rest and in transit, and it recently began encrypting new objects by default.
Besides having access to all of this functionality, you can also use S3 to host a static website without the need to run web servers or any other backend processes requiring your attention. Building static websites that don’t require dynamic, server-side processing is hugely beneficial: they load quickly and cost far less to run because there are fewer moving parts to deal with.
Why use Amazon S3 to host static websites?
There are many online services where you can host a static website. However, Amazon S3 is a perfect fit for hosting static websites for various reasons, especially if you’ve already invested in the AWS ecosystem.
Pay for what you use
Most hosting companies charge a flat monthly or annual fee for serving your website, whether 10 or 100,000 visitors land on your website. They also might charge overage fees for bandwidth used, which can take you by surprise if you’re not careful. Some companies offer free hosting, but with caveats such as low usage limits and poor performance or stability.
With S3, you pay only for what you use, without limits. You can host thousands of static files in your S3 bucket without any storage issues, and it can handle whatever traffic goes your site’s direction. You’ll pay only for the storage used and the amount of data transferred. Depending on your account and region, AWS also offers a free usage tier. A typical static website will cost you only pennies per month.
Great performance and scalability for the cost
Website hosting companies have a reputation for being hit or miss when it comes to performance. Some fantastic hosting services exist, but the bad rap is often warranted. In my experience, almost all of the free and cheap tiers of static website hosts provide lousy performance. They won’t come close to handling any surge in activity on your site, and some even shut down your site unexpectedly when a sudden influx of traffic arrives. Even on most higher-tier paid hosting services, you’re still at the mercy of their server limitations.
With Amazon S3, you don’t have to worry about performance or scalability. The main benefit of a cloud service like S3 is that it’s built for high availability, with servers across the globe. That means there’s a slim chance your static website will become unavailable due to high traffic or other issues. You can also choose the region where your files live so your site is served closer to where most of your customers are, leading to even faster loading times for end users.
Tight integration with other AWS services
Almost every static website hosting company provides only a single service—hosting your website. These days, it’s becoming more common for these companies to offer additional services like domain registration and email hosting for a fee. However, if you need services your hosting company doesn’t provide, you’re left to find other businesses and deal with the hassle of making them work together.
One of the primary benefits of Amazon AWS is that it provides tons of tightly integrated services. When it comes to hosting a static website on S3, you have dozens of AWS services you can integrate with, like CloudFront (content delivery network for fast site loading across the globe), Route 53 (domain registration and DNS), and Lambda (serverless functions if you need dynamic functionality). These integrations are designed to play well with each other, significantly reducing your overall maintenance cost.
Setting up Amazon S3 to serve your static website
Setting up the infrastructure on AWS for hosting a static website is straightforward but requires manual setup. In a nutshell, you need to create an Amazon S3 bucket, set permissions to allow public access to your files, and enable and configure website hosting on the bucket. While the process consists of only a few steps, it can feel overwhelming for those unfamiliar with AWS.
Thankfully, you can automate every single step in this process, so you don’t have to go through each of these steps one by one. I’ll show you how you can easily set up the infrastructure to serve static websites on Amazon S3 using Terraform.
Terraform is an open-source infrastructure as code tool that allows users to define and manage their infrastructure consistently and repeatedly. Using code, you can quickly create your desired infrastructure, modify it, and destroy everything with a single command. It also helps you keep track of the current state of your services. In short, it’s an excellent tool for keeping track of your systems, especially your cloud architecture.
Why use Terraform to create an Amazon S3 bucket?
While you can always set up your S3 bucket to host a static website in various ways (like through the AWS Console, the AWS command-line tool, and so on), using Terraform provides additional benefits you won’t have using those other methods:
- Seeing your infrastructure written down as code will document your systems without needing to track down every setting.
- You can place your Terraform files under version control, showing how your infrastructure has changed over time.
- If you need to replicate your environment for another S3-backed static website, you can easily reuse your existing Terraform files.
- Once you no longer need to host your static website on S3, a single command will delete everything, so you don’t have to worry about cleaning things up in your AWS account.
Terraform isn’t the only infrastructure as code tool out there. AWS has its own tool, CloudFormation, which accomplishes the same result of provisioning your architecture. If you plan to stick with AWS, it’s worth exploring CloudFormation since it will always have first-class support for its ecosystem. However, Terraform is an excellent alternative since it’s platform agnostic, allowing you to mix and match access to different providers using a single tool.
Using Terraform to easily create an Amazon S3 bucket for website hosting
This article assumes you have Terraform installed in your local system. While you don’t need to know the ins and outs of Terraform to follow along with this article, it’s helpful to understand what infrastructure as code is and how Terraform works. The Terraform website has a good primer on this topic.
For this article, we’ll use Terraform to go through the following steps:
- Setting up the AWS Terraform provider and creating an Amazon S3 bucket.
- Setting the correct permissions to allow public access to your website.
- Enabling website hosting on the created bucket.
Setting up the AWS provider
The first step for most Terraform files is to set up a provider, a plugin that tells Terraform how to interact with other services to manage your infrastructure. Since we want to deploy our static website to S3, we’re using the official AWS Provider. This provider lets us connect to AWS and manage our system infrastructure.
We’ll create a new file called `main.tf` with the following setup code:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.54.0"
    }
  }
}
```
We’re defining the settings we want to use for our Terraform project using the `terraform` block. The only setting we need is the AWS provider, pulled from the Terraform registry via the `required_providers` setting. We’re also pinning the provider to a specific version to avoid any issues with future updates.
You may notice that we’re not explicitly defining how Terraform knows which AWS account to use for managing your infrastructure. You should never hard-code any credentials in your Terraform files for security purposes. Most Terraform providers offer more secure ways to authenticate with external services. In the case of the AWS provider, you can authenticate using various methods like environment variables or shared credentials files.
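As a minimal sketch of the environment variable approach, you can export credentials before running Terraform, and the AWS provider will pick them up automatically. The key values below are placeholders, not real credentials:

```shell
# Placeholder credentials: replace these with values for an IAM user or
# role that has permission to manage S3. Never commit real keys to
# version control.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
export AWS_REGION="ap-northeast-1"
```

Alternatively, you can configure a named profile in `~/.aws/credentials` and point the provider at it.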
Initializing the Terraform directory
To verify that your setup is correct, let’s initialize the directory where your Terraform file is located to download and install the AWS provider. You only need to do this the first time you use a specific Terraform configuration or modify your providers.
To initialize the directory, run `terraform init`. Terraform will show the progress of the initialization and additional information about this particular Terraform configuration.
```
❯ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "4.54.0"...
- Installing hashicorp/aws v4.54.0...
- Installed hashicorp/aws v4.54.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
Creating an Amazon S3 bucket
After initializing your Terraform project, you can define how to set up the S3 bucket using resource blocks in Terraform, which represent the infrastructure to manage based on your Terraform providers. In the `main.tf` file, add the following code after the `terraform` block to define the S3 bucket we’ll use for our static site:
```hcl
resource "aws_s3_bucket" "static_site_bucket" {
  bucket = "dennis-static-site"
}
```
We’ll manage our S3 bucket using the `aws_s3_bucket` resource type. The `resource` block accepts some optional configuration, but all we’re adding here is the bucket’s name. If you omit the bucket name, Terraform assigns a random unique name to your bucket. Note that S3 bucket names must be globally unique across all AWS accounts, so you’ll need to pick a different name than the one used in this example. As you’ll see later, the bucket name forms part of the default URL endpoint for accessing your static site.
Verifying your configuration and applying changes
You can validate your Terraform code using the `terraform validate` command. This command checks that your configuration files are syntactically valid and internally consistent, but it won’t check whether Terraform can access your AWS account.
```
❯ terraform validate
Success! The configuration is valid.
```
After validating your configuration, you’ll usually want to check what changes Terraform will make if you apply it to your AWS account. To do this, you can use the `terraform plan` command.
```
❯ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.static_site_bucket will be created
  + resource "aws_s3_bucket" "static_site_bucket" {
      + acceleration_status = (known after apply)
      + acl                 = (known after apply)
      + arn                 = (known after apply)
      + bucket              = "dennis-static-site"
      ... Omitting for brevity
    }

Plan: 1 to add, 0 to change, 0 to destroy.
```
When you run this command for the first time, Terraform checks your current directory for a state file, which contains information about the infrastructure Terraform manages for you in this configuration. This information is known as the current state in Terraform lingo. Since we have yet to apply our changes, our current state is empty and this file won’t exist, so Terraform assumes it needs to create your infrastructure. Note that `terraform plan` only displays your execution plan; it doesn’t create the S3 bucket or even check whether the bucket already exists.
To go ahead with creating your S3 bucket, use the `terraform apply` command. This command shows you the proposed changes, similar to `terraform plan`, but this time prompts you to proceed. If you accept the proposed changes, Terraform performs the provisioning live on your AWS account (assuming you’ve set up the AWS provider credentials correctly).
```
❯ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.static_site_bucket will be created
  + resource "aws_s3_bucket" "static_site_bucket" {
      + acceleration_status = (known after apply)
      + acl                 = (known after apply)
      + arn                 = (known after apply)
      + bucket              = "dennis-static-site"
      ... Omitting for brevity
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket.static_site_bucket: Creating...
aws_s3_bucket.static_site_bucket: Creation complete after 2s [id=dennis-static-site]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
If you’ve set everything up correctly, you’ll observe that Terraform has created the S3 bucket as indicated in the configuration. You can log in to your AWS account and go to S3 to see the newly created bucket.
Now that Terraform has created the S3 bucket, it saves that information in its current state so it can compare it to what’s in your AWS account (also known as the remote state). Any time you run `terraform plan` or `terraform apply`, the AWS provider compares the remote state on AWS with the current state tracked by Terraform to see if anything needs changing.
```
❯ terraform plan
aws_s3_bucket.static_site_bucket: Refreshing state... [id=dennis-static-site]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
```
Although setting up an Amazon S3 bucket via the AWS console is quick and easy, defining your configuration as code and managing it with Terraform makes maintaining your infrastructure far more efficient and organized in the long run.
Allowing public access to your S3 bucket objects
Setting up a bucket is the easy part of hosting a static website on S3. The tricky and confusing aspects of AWS typically stem from permissions. By default, S3 buckets created by Terraform are private: only the bucket owner can access their files. Of course, this setting doesn’t help serve a website since no one would be able to access it.
To make your website publicly accessible, you need to grant public read access to the bucket and its contents. This involves creating an S3 bucket policy. In AWS, a policy is a JSON document describing which permissions you want to allow or deny on specific resources. For a publicly hosted static website, you’ll need to attach a policy to your S3 bucket that allows anyone to read the HTML files you host inside.
The JSON object defining this policy for the S3 bucket created earlier would look like the contents below:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Principal": "*",
      "Resource": ["arn:aws:s3:::dennis-static-site/*"]
    }
  ]
}
```
In summary, this policy tells AWS we want to allow read access to all files inside the `dennis-static-site` S3 bucket. You can set up this policy through the AWS console by going to the S3 bucket settings, but we can also automate this process and manage it through Terraform. The AWS provider includes the `aws_s3_bucket_policy` resource type, which lets you easily attach this policy to an existing S3 bucket.
In the `main.tf` file, we can add the following code at the end of the file:
```hcl
resource "aws_s3_bucket_policy" "static_site_policy" {
  bucket = aws_s3_bucket.static_site_bucket.id
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = [
          "s3:GetObject"
        ],
        Effect    = "Allow",
        Principal = "*",
        Resource = [
          "${aws_s3_bucket.static_site_bucket.arn}/*"
        ]
      }
    ]
  })
}
```
This `resource` block looks similar to the previous one that created our S3 bucket. The `aws_s3_bucket_policy` resource type requires two arguments: the bucket where we want to apply the policy (the `bucket` argument) and the contents of the policy itself (the `policy` argument).
Since the `policy` argument needs to be a JSON-formatted string, we’re taking advantage of a built-in function called `jsonencode`, which takes a Terraform value (in this case, a map) and converts it to a JSON string. Here, it produces a string equal to the JSON object shown earlier in this article. There are other ways to set a similar policy with Terraform, such as using the `aws_iam_policy_document` data source, but for simplicity, we’ll stick with this straightforward approach.
Notice that we’re not explicitly writing the bucket name or the ARN (Amazon Resource Name) in the policy argument. Instead, we’re referencing values from another resource Terraform manages for us: in this example, the `aws_s3_bucket.static_site_bucket` resource that creates our S3 bucket. Terraform resources expose attributes based on the current state of your infrastructure so you can refer to them elsewhere.
Here, we’re using two attributes from the S3 bucket:
- `aws_s3_bucket.static_site_bucket.id` refers to the name of the managed S3 bucket.
- `aws_s3_bucket.static_site_bucket.arn` refers to the Amazon Resource Name (ARN) of the managed S3 bucket.
When verifying your execution plan with `terraform plan` or applying changes with `terraform apply`, Terraform replaces these references with the values in its current state. Since we already created the S3 bucket, its information is in our current state. Let’s apply these changes to see how they appear:
```
❯ terraform apply
aws_s3_bucket.static_site_bucket: Refreshing state... [id=dennis-static-site]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket_policy.static_site_policy will be created
  + resource "aws_s3_bucket_policy" "static_site_policy" {
      + bucket = "dennis-static-site"
      + id     = (known after apply)
      + policy = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = [
                          + "s3:GetObject",
                        ]
                      + Effect    = "Allow"
                      + Principal = "*"
                      + Resource  = [
                          + "arn:aws:s3:::dennis-static-site/*",
                        ]
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket_policy.static_site_policy: Creating...
aws_s3_bucket_policy.static_site_policy: Creation complete after 1s [id=dennis-static-site]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
As you can see, Terraform set the bucket name for the resource block and included the ARN inside the policy. After accepting and applying these changes, anyone worldwide will have public read access to the objects in your S3 bucket.
Enabling website hosting on an Amazon S3 bucket
After setting up the S3 bucket policy to allow public read access, the final step is to enable website hosting on your bucket. With our current configuration, anyone can directly access any HTML files inside the bucket using any method for accessing S3 objects. However, this isn’t ideal for hosting a website: you’d have to point visitors directly to your index file (e.g., your home page), and there are no custom error pages when someone requests a file that doesn’t exist in your bucket.
Enabling website hosting on the S3 bucket handles these issues by setting up an endpoint that works as your static website’s URL and loads your index file automatically. It also lets you set up a custom error page for requests that don’t match any of your website’s files.
Like everything we’ve done so far, enabling website hosting on an Amazon S3 bucket can also be handled by Terraform. The `aws_s3_bucket_website_configuration` resource type from the AWS provider lets you switch this setting on for any of your account’s S3 buckets. Since Terraform already manages our S3 bucket, we can reference its attributes directly in our code, as we did when setting up the public read policy earlier.
In the `main.tf` file, let’s add the following code to enable website hosting on our Terraform-managed S3 bucket:
```hcl
resource "aws_s3_bucket_website_configuration" "static_site_configuration" {
  bucket = aws_s3_bucket.static_site_bucket.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "404.html"
  }
}
```
This new `resource` block configures the settings needed to enable website hosting on the S3 bucket. The `aws_s3_bucket_website_configuration` resource type requires the S3 bucket that will host your static site (via the `bucket` argument). Once again, we’re using `aws_s3_bucket.static_site_bucket.id` to get the name of the managed S3 bucket.
We’re also configuring our S3 bucket with two optional arguments. The `index_document` block includes the `suffix` argument to define the index document for any directory on the hosted website. For example, if the website URL is `http://dennis-static-site.s3-website-ap-northeast-1.amazonaws.com`, it will automatically serve the HTML file defined in `index_document` instead of requiring you to explicitly include the filename in the path, like `http://dennis-static-site.s3-website-ap-northeast-1.amazonaws.com/index.html`.
The second optional argument is `error_document`, a block that defines the error page shown when there’s an error on the site, typically a “404 Not Found” error from an invalid URL. The `key` argument sets which file to display in these scenarios instead of a generic S3 error page.
With this configuration in place, we can tell Terraform to apply these changes to enable website hosting using our existing S3 bucket:
```
❯ terraform apply
aws_s3_bucket.static_site_bucket: Refreshing state... [id=dennis-static-site]
aws_s3_bucket_policy.static_site_policy: Refreshing state... [id=dennis-static-site]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket_website_configuration.static_site_configuration will be created
  + resource "aws_s3_bucket_website_configuration" "static_site_configuration" {
      + bucket           = "dennis-static-site"
      + id               = (known after apply)
      + routing_rules    = (known after apply)
      + website_domain   = (known after apply)
      + website_endpoint = (known after apply)

      + error_document {
          + key = "404.html"
        }

      + index_document {
          + suffix = "index.html"
        }

      ... Omitting for brevity
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket_website_configuration.static_site_configuration: Creating...
aws_s3_bucket_website_configuration.static_site_configuration: Creation complete after 0s [id=dennis-static-site]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
Now that your S3 bucket is ready to serve your static website, you might be asking yourself, “Great, so what’s the URL of my site?” Website-enabled S3 buckets use the following URL format: `http://[bucket-name].s3-website-[region].amazonaws.com`, where `[bucket-name]` is the name of your S3 bucket and `[region]` is the bucket’s AWS region. Note that S3 website endpoints support HTTP only; if you need HTTPS, you can put a CloudFront distribution in front of the bucket. In the configuration for this article, we placed our S3 bucket in the `ap-northeast-1` region, so the URL is `http://dennis-static-site.s3-website-ap-northeast-1.amazonaws.com`.
You can also grab this information from Terraform’s state using the `terraform state` command:
```
❯ terraform state show aws_s3_bucket_website_configuration.static_site_configuration
# aws_s3_bucket_website_configuration.static_site_configuration:
resource "aws_s3_bucket_website_configuration" "static_site_configuration" {
    bucket           = "dennis-static-site"
    id               = "dennis-static-site"
    website_domain   = "s3-website-ap-northeast-1.amazonaws.com"
    website_endpoint = "dennis-static-site.s3-website-ap-northeast-1.amazonaws.com"

    error_document {
        key = "404.html"
    }

    index_document {
        suffix = "index.html"
    }
}
```
The `terraform state show` command displays the details of a resource managed by Terraform. The command shown above prints all the information for the `aws_s3_bucket_website_configuration.static_site_configuration` resource, including the `website_endpoint` attribute, which is the URL of your static website. With this information, you can start uploading your site files and accessing them from your website.
With this, you have everything you need to begin hosting files in your S3 bucket, all managed by Terraform, making it easy to modify your setup or delete everything with a single command when you no longer want to host a static site on S3. Add a few HTML files to the bucket, visit the website endpoint, and everything should work.
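As a quick sketch of that last step, you could create a minimal index and error page locally and push them to the bucket with the AWS CLI. The `site` directory name is my own choice, and the sync command assumes you have the AWS CLI installed and credentials configured:

```shell
# Create a minimal local site; the file names match the index_document
# and error_document settings configured earlier.
mkdir -p site
cat > site/index.html <<'EOF'
<!doctype html>
<html><head><title>My static site</title></head>
<body><h1>It works!</h1></body></html>
EOF
cat > site/404.html <<'EOF'
<!doctype html>
<html><head><title>Not found</title></head>
<body><h1>404 Not Found</h1></body></html>
EOF

# Upload everything to the bucket (uncomment once the AWS CLI is set up;
# the bucket name is the example from this article):
# aws s3 sync ./site s3://dennis-static-site
```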
The completed Terraform setup
Here’s how the completed Terraform file (`main.tf`) should look if you followed this guide:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.54.0"
    }
  }
}

resource "aws_s3_bucket" "static_site_bucket" {
  bucket = "dennis-static-site"
}

resource "aws_s3_bucket_policy" "static_site_policy" {
  bucket = aws_s3_bucket.static_site_bucket.id
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = [
          "s3:GetObject"
        ],
        Effect    = "Allow",
        Principal = "*",
        Resource = [
          "${aws_s3_bucket.static_site_bucket.arn}/*"
        ]
      }
    ]
  })
}

resource "aws_s3_bucket_website_configuration" "static_site_configuration" {
  bucket = aws_s3_bucket.static_site_bucket.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "404.html"
  }
}
```
Terraform has additional functionality to make scripts like these more robust and maintainable, such as using variables to dynamically set up your bucket name instead of hard-coding it in the file itself. However, this script is an introductory look at how you can leverage the power of infrastructure as code tools to manage your systems better.
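For instance, the hard-coded bucket name could become an input variable. A minimal sketch, where the variable name `bucket_name` is my own choice:

```hcl
variable "bucket_name" {
  description = "Globally unique name for the static site bucket"
  type        = string
  default     = "dennis-static-site"
}

# The bucket resource would then reference the variable instead of a
# hard-coded string:
resource "aws_s3_bucket" "static_site_bucket" {
  bucket = var.bucket_name
}
```

You could then override the name per environment with `-var="bucket_name=..."` on the command line or a `.tfvars` file.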
Summary
Amazon S3 is an excellent choice for hosting a static website thanks to its scalability, reliability, and cost-effectiveness. Although the setup is straightforward, it can be a tedious process, and if you’re not experienced with the AWS ecosystem, bucket permissions and policies can be confusing. It’s also troublesome to track how you set up your S3-backed website and how it has changed over time.
Fortunately, you can simplify the entire process using infrastructure as code, where tools like Terraform come into play. With Terraform, you can build and manage your systems through code. In this article, we used Terraform to create an Amazon S3 bucket, update its policies to allow public read access, and enable website hosting. Instead of dealing with the setup and management of your infrastructure, Terraform lets you focus on building your static website.
Are you looking for a DevOps expert to help you with AWS, Terraform, or other DevOps services? With 20 years of experience at startups and an AWS Certified DevOps Engineer, I can guide you through the complexities of modern infrastructure to get the most out of your investment. I’d love to help you build and maintain reliable, scalable, and secure systems, from infrastructure management and deployment to cloud architecture and monitoring. Contact me today to learn more about my services.
Also, if you’re hosting a static website on Amazon S3 because your AWS costs are out of control, I can help with that, too. Check out my free guide, Simple Yet Powerful Ways to Shrink Your AWS Expenses, and learn how you can take small steps today to cut down on your cloud expenses.