Hosting static websites is becoming more and more popular, and there is no doubt that many more websites will shift towards serverless. Why? The answer is simple: static website hosting is convenient. And we practice what we preach: we serve our entire website, sufle.io, serverlessly! Let's start by looking at the advantages of hosting static websites on Amazon S3. Then we'll walk through the steps to create our example static website: configuring our S3 bucket, provisioning a custom SSL certificate for our custom domain, and finally speeding up our static website with Amazon CloudFront.
First things first, hosting a static website is extremely simple: no specific programming language or framework is needed. All you need to host your website on Amazon S3 is your static assets. You don't have to worry about server management or maintenance: create your static files, then replicate them all around the world via a CDN for maximum speed. Static websites reduce the development time, effort, cost and expertise needed to serve your website.
Speaking of speed, your content is ready to be served the moment it is requested, since your website is not tied to a database or a templating engine. This drastically reduces the time to first byte (TTFB) and lets you achieve what you need to compete: maximum speed. Beyond simplicity and speed, there are also huge performance gains to be had from the reliability and scalability of the cloud. No database means no worrying about server health under unexpected traffic; you can easily scale your website without compromising performance. The static files and their replicas also increase reliability, so you don't have to worry about downtime or failures when something goes wrong.
Last but not least, static websites also improve your security posture, since there is no server to keep updated and patched against a constant stream of security issues.
In this blog post, we will walk through the basic steps to host our website on Amazon S3. The other services we will be using are Amazon Route 53, Amazon CloudFront and AWS Certificate Manager.
In this example, we want to serve our static website from a custom domain of our choice rather than from an auto-generated endpoint, which would be confusing and inconvenient for our visitors. To do this, we'll use Amazon Route 53, AWS's DNS service. We have an existing domain named sufle.cloud in our account, but if you are new to Route 53 you can simply buy a domain name there. You can also import your existing zone records into Route 53 and update the name server records at your current DNS provider to direct traffic for a domain you have already bought elsewhere. That said, we will continue by creating a subdomain named anyonecandoit.sufle.cloud for our example static website. Please note that the name of your subdomain (or of your root domain, if you will be serving from the root) must be exactly the same as your bucket name.
So, let’s go ahead and create our bucket. Remember, although the bucket view is global and you can see all your separate buckets within the same view, each bucket is created in a specific region of your choice.
For our example static website, I'll name my S3 bucket anyonecandoit.sufle.cloud (the same as my subdomain) and choose eu-west-1 (Ireland) as the region, since it is the closest region to my users. Besides the bucket name and region, we'll leave everything at its defaults for now.
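If you prefer the AWS CLI to the console, the same step looks roughly like this. This is a sketch that assumes the CLI is installed and configured with credentials that can create buckets; the bucket name and region are the ones from our example:

```shell
# Create the bucket in eu-west-1; outside us-east-1 the region must also
# be passed as a LocationConstraint.
aws s3api create-bucket \
  --bucket anyonecandoit.sufle.cloud \
  --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1
```
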
Now, it is time to upload our files to our S3 bucket. We need an index.html and an error.html file at minimum. I've also added a new-page.html and my assets to the bucket to add some flavor to our example static website. You can simply drag your chosen files into the upload area and hit Upload, leaving permissions and properties at their defaults for now. We'll handle the permissions in the following steps.
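The upload can also be done from the CLI. A sketch, assuming the files sit in the current directory (or in a local ./site folder for the sync variant):

```shell
# Upload the individual files; permissions stay at their defaults for now.
aws s3 cp index.html s3://anyonecandoit.sufle.cloud/
aws s3 cp error.html s3://anyonecandoit.sufle.cloud/
aws s3 cp new-page.html s3://anyonecandoit.sufle.cloud/

# Or push a whole local directory in one go:
aws s3 sync ./site s3://anyonecandoit.sufle.cloud/
```
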
Now, it is time to enable our S3 bucket's static website hosting option. Go to the Properties section at the top of your bucket view and choose Static Website Hosting. Don't forget to enter the names of your index.html and error.html files, and make sure you have selected the option: Use this bucket to host a website.
Now, you can see that the bucket hosting is enabled.
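The equivalent CLI command, as a sketch with our example bucket name:

```shell
# Enable static website hosting and set the index and error documents.
aws s3 website s3://anyonecandoit.sufle.cloud/ \
  --index-document index.html \
  --error-document error.html
```
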
Now, it is time to change the access permissions of our bucket, since we want to serve our static website to users all around the world. However, as you may have noticed when you first created your S3 bucket, buckets and objects are not public by default.
For example, when we try to access our index.html file using its object URL, we get an error message saying access is denied. Since the bucket is not public at all, we can't make the individual index.html file public through object-level actions either. So, we'll start by enabling public access to our bucket.
Select your bucket, click Edit public access settings, and uncheck "Block all public access". The console will ask you to confirm your choice in this step. Simply type confirm and go ahead.
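From the CLI, this corresponds to turning off the four public-access-block flags on the bucket. A sketch with our example bucket name:

```shell
# Disable all four "block public access" settings for the bucket.
aws s3api put-public-access-block \
  --bucket anyonecandoit.sufle.cloud \
  --public-access-block-configuration \
    BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
```
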
Before we turn to Amazon CloudFront to speed up our website, there is one last thing we should do. For security, we want to allow only HTTPS access to our static website. Since we are using a custom domain, we need a custom SSL certificate, which we can provision with AWS Certificate Manager. Go to AWS Certificate Manager and choose "Provision certificates". We'll request a public certificate for our static website. I'll type my domain name with a wildcard, "*." (an asterisk and a dot), in front of it, and also add my root domain so the certificate fully covers the domain. Please note that a wildcard SSL certificate is not a requirement here; I plan to use this domain in future test projects with other subdomains, so the wildcard certificate lets me reuse it for all of them. You can always create individual certificates for your subdomains if you prefer. One small but important detail: you must create your ACM certificates in the us-east-1 region to be able to use them with a CloudFront distribution.
In the next step, choose DNS validation as the validation method and confirm your validation.
Finally, in the last step of ACM, export your DNS configuration to a file and download it. Copy the record name for your domain, then go to your hosted zone in Route 53 and select "Create Record". (I've done this using the old console, so the steps might differ slightly.) The record name will be the one written in the CSV file you downloaded. Choose CNAME - Canonical Name as the record type. Leaving TTL at its default, copy the record value from the CSV file and paste it into the value field. Set the routing policy to simple and create the record set. We have now validated our DNS and become eligible for certificate issuance. For simplicity, you can also just click Create Record in Route 53 directly from ACM when you finish creating your certificate.
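The certificate request itself can also be issued from the CLI. A sketch, using the wildcard plus root-domain setup from our example; note the explicit us-east-1 region, which CloudFront requires:

```shell
# Request a public certificate in us-east-1 (required for CloudFront),
# covering all subdomains via the wildcard plus the root domain itself.
aws acm request-certificate \
  --region us-east-1 \
  --domain-name "*.sufle.cloud" \
  --subject-alternative-names sufle.cloud \
  --validation-method DNS
```
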
Your certificate will be approved in a short amount of time. In this step, we finally get to Amazon CloudFront to speed up our static website. Amazon CloudFront is AWS's CDN offering: it caches your website at edge locations all around the world and serves it much faster. This way, users fetch the cached content from the nearest edge location instead of requesting it from your S3 origin. Click Create Distribution, choose a web distribution, and select your bucket's endpoint in the Origin Domain Name dropdown menu.
We’ll also select Yes for restricting bucket access, since we don’t want our website visitors to reach our bucket directly. Using our existing identity, we’ll also select updating our S3 bucket policy to enable read permissions.
One last thing to do in the origin settings is defining a header and value for our web distribution. Define your custom origin header as Referer and type a value that only you know, to ensure that only you have direct access to your bucket. This way, we grant read access to users only through the CloudFront distribution, restricting direct access and protecting our bucket.
Continuing with the Default Cache Behaviour Settings section, we enable Redirect HTTP to HTTPS, because we only want secure access to our website.
For the Distribution Settings section, we type our chosen subdomain name, anyonecandoit.sufle.cloud for the CNAME area. Also, we will be using the Custom SSL Certificate that we have just created because we want to use our custom domain rather than the CloudFront domain name. Let's go ahead and select our custom SSL certificate.
Leaving everything else as default, we create our distribution. Now, we’ll wait until the distribution status is Deployed, which might take some time.
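For completeness, creating the distribution is also possible from the CLI, though the console is friendlier for this step. In this sketch, distribution-config.json is a hypothetical file holding the same settings we just chose in the console (origin, custom header, CNAME, certificate), and the distribution ID is a placeholder:

```shell
# Create the distribution from a JSON settings file.
aws cloudfront create-distribution \
  --distribution-config file://distribution-config.json

# Poll until the status changes from "InProgress" to "Deployed".
aws cloudfront get-distribution \
  --id YOUR_DISTRIBUTION_ID \
  --query 'Distribution.Status'
```
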
Now, go back to your bucket, and create a bucket policy based on your newly generated custom origin header and value in the CloudFront web distribution.
{
    "Version": "2012-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests originating from yoursubdomain.",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:GetObjectVersion"],
            "Resource": "arn:aws:s3:::yourbucket/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": "your key value"
                }
            }
        }
    ]
}
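Once you have filled in your bucket name and the secret Referer value, the policy above can be attached from the console or, as sketched here, from the CLI (assuming it is saved locally as policy.json):

```shell
# Attach the Referer-restricted read policy to the bucket.
aws s3api put-bucket-policy \
  --bucket anyonecandoit.sufle.cloud \
  --policy file://policy.json
```
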
Go to Amazon Route 53, select your zone and create a record set again. For the record name, type the subdomain name, anyonecandoit. Choose CNAME as the type and select No for Alias. We want our custom domain to be served by the CloudFront web distribution, so copy your CloudFront domain name and paste it into the value field of your new record set in Route 53. Leave TTL and Routing Policy at their defaults and save the record set.
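The same record can be created from the CLI with a change batch. In this sketch, the hosted zone ID and the CloudFront domain name (d1234abcd.cloudfront.net) are placeholders you would replace with your own values:

```shell
# Write the change batch: a CNAME pointing the subdomain at CloudFront.
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "anyonecandoit.sufle.cloud",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "d1234abcd.cloudfront.net"}]
    }
  }]
}
EOF

# Apply it to the hosted zone.
aws route53 change-resource-record-sets \
  --hosted-zone-id YOUR_ZONE_ID \
  --change-batch file://change-batch.json
```
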
Voila! Type your domain name into your browser and see your website. We are now serving our static website from our S3 bucket: https://anyonecandoit.sufle.cloud/. No servers, no extra development effort, almost no time!
To make our links look prettier, we can remove the page extensions; Amazon S3 supports that. However, please note that your index.html file has to keep its extension, and when uploading your extensionless HTML files, you should make sure their Content-Type metadata is set to text/html. That is how Amazon S3 recognizes them as HTML files.
Open your html file and remove extensions from your links.
<!--Remove .html from link-->
<a href="/new-page">Let's go to new page!</a>
Now go back to your S3 bucket and upload your updated files. Select your file, click Actions and Rename, and remove the .html extension.
Then, from the Actions tab again, select Metadata and make sure a Content-Type key with the value text/html exists. If not, add this key/value pair by adding new metadata.
Alternatively, you can remove the extensions locally, upload the already-extensionless files to your bucket, and then set their metadata; this also works without renaming each file in the bucket individually.
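With the CLI, the rename and the metadata can be handled in one step by setting the Content-Type explicitly at upload time. A sketch using the new-page.html file from our example:

```shell
# Drop the extension locally, then upload with an explicit Content-Type
# so S3 still serves the extensionless object as HTML.
cp new-page.html new-page
aws s3 cp new-page s3://anyonecandoit.sufle.cloud/ \
  --content-type text/html
```
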
That's all! You can go further and connect your statically generated website's (Gatsby, Hugo, Next.js, etc.) repository to AWS CodePipeline to build it, copy the output to your bucket, and automate this whole process, as we do for our website, sufle.io.
A fresh graduate specializing in marketing, Deniz is excited to learn and share her knowledge on business technologies and technology culture. With her experience at technology companies during her school years, she is always eager to learn more about how technology transforms businesses.