Add an application load balancer to Amazon EC2 using Terraform

A highly available application stands a better chance of attracting customers because they are assured of a consistent service. Load balancing is a cost-effective way to increase an application’s availability. In this note, I describe the steps to use Terraform to add an application load balancer in front of three EC2 instances hosted in three different availability zones in a region.

You may read about application load balancing at AWS-Docs. This use case draws on concepts related to Elastic Load Balancing, Amazon EC2, and Amazon VPC. In addition, I used Terraform to create and manage all the underlying resources, which simplified the effort.

In my previous note (create-a-web-server-on-amazon-ec2-instance-using-terraform-and-user-data), I described three Amazon EC2 instances that hosted a static web page and served it when someone requested an instance’s public DNS hostname. Each EC2 instance had its own public DNS hostname, so if an instance was deleted or became unavailable, requests to that particular hostname would fail. Placing a load balancer in front of the EC2 instances keeps the application available even if some of the instances go down.

If you are new to load balancing, I’d recommend this wiki page. In AWS, load balancing is supported via Elastic Load Balancing, which, per AWS-Docs, automatically distributes your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. Elastic Load Balancing supports the following load balancers: Application Load Balancers, Network Load Balancers, Gateway Load Balancers, and Classic Load Balancers.

I borrowed heavily from the concepts described in my previous note and built upon the available code. Previously, I used Terraform to provision an AWS VPC, subnets, a route table, an internet gateway, EC2 instances, etc.; you may read about that in the note linked above. In this use case, I added an application load balancer on top. To demonstrate how an application load balancer works, I hosted a static web page on three EC2 instances in three different availability zones; the page displayed some instance-specific information, namely the EC2 instance’s private IP address and availability zone. On each refresh of the load balancer’s DNS name, the request is routed to a different EC2 instance.
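For context, a rough sketch of one of those EC2 instances is below. The resource names, AMI variable, and user data script are illustrative (they assume an Amazon Linux AMI and subnets created with count), not the exact code from the previous note:

resource "aws_instance" "web_server" {
  count                       = 3
  ami                         = var.ami_id                          # illustrative variable holding an Amazon Linux AMI ID
  instance_type               = "t2.micro"
  subnet_id                   = aws_subnet.public[count.index].id   # assumes one public subnet per availability zone
  vpc_security_group_ids      = [aws_security_group.web_sg.id]      # illustrative security group resource name
  associate_public_ip_address = true

  # Build a simple index.html that shows this instance's private IP and availability zone
  user_data = <<-EOF
              #!/bin/bash
              yum install -y httpd
              PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
              AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
              echo "<h1>Served from $PRIVATE_IP in $AZ</h1>" > /var/www/html/index.html
              systemctl enable --now httpd
              EOF
}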
If you have the underlying VPC, subnets, security group, and EC2 instances ready, as I had, you can add a load balancer to the stack in a four-step process. Here’s the link to my GitHub repository with the complete code.

Note: In 2023, I automated this process using GitHub Actions. After going through all the steps in this note, if you want to learn how to automate the process, head over to ci-cd-with-terraform-and-github-actions.

Step 1: Create a target group
Per AWS-Docs, a target group is used to route requests to one or more registered targets. I specified the port and protocol on which the target group routes traffic to its targets.
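A minimal sketch of the target group resource is below; the resource names, target group name, and VPC reference are illustrative, not necessarily what is in the repository:

resource "aws_lb_target_group" "app_tg" {
  name     = "terraform-app-tg"       # illustrative target group name
  port     = 80                       # port on which targets receive traffic
  protocol = "HTTP"
  vpc_id   = aws_vpc.app_vpc.id       # assumes the VPC resource is named app_vpc
}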

Step 2: Attach the target group to the AWS instances
In this step, I attached the aws_lb_target_group to the three Amazon EC2 instances and specified the port number on which traffic would be routed.
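As a sketch, the attachment can be expressed with aws_lb_target_group_attachment. The example below assumes the three instances were created with count = 3 under an illustrative resource name web_server:

resource "aws_lb_target_group_attachment" "app_tg_attachment" {
  count            = 3
  target_group_arn = aws_lb_target_group.app_tg.arn
  target_id        = aws_instance.web_server[count.index].id   # assumes instances created with count = 3
  port             = 80                                        # port on which traffic is routed to the targets
}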
Step 3: Create the load balancer
Then I created the load balancer and attached a security group and a set of subnets.
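A sketch of the load balancer resource, again with illustrative names for the load balancer, security group, and subnets:

resource "aws_lb" "app_lb" {
  name               = "terraform-app-lb"                  # illustrative load balancer name
  internal           = false                               # internet-facing
  load_balancer_type = "application"
  security_groups    = [aws_security_group.web_sg.id]      # illustrative security group resource name
  subnets            = aws_subnet.public[*].id             # assumes subnets created with count, one per AZ
}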
Step 4: Create a listener
Finally, I attached the load balancer to the target group. Per AWS-Docs, a listener is a process that checks for connection requests, using the protocol and port that you configure. The rules that you define for a listener determine how the load balancer routes requests to its registered targets.
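A sketch of the listener, forwarding all HTTP traffic on port 80 to the target group from step 1 (names are illustrative):

resource "aws_lb_listener" "app_listener" {
  load_balancer_arn = aws_lb.app_lb.arn
  port              = 80
  protocol          = "HTTP"

  # Forward every request to the target group created in step 1
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app_tg.arn
  }
}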
With all the above code in place, I ran the usual terraform apply (after initializing the working directory with terraform init). Once Terraform provisioned the resources, it printed the load balancer’s DNS name, courtesy of the output block in the output.tf file.
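That output block is along these lines (the output name here is illustrative):

output "lb_dns_name" {
  description = "DNS name of the application load balancer"
  value       = aws_lb.app_lb.dns_name
}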
I typed that DNS name into my web browser, and the request was routed to the index.html page of one of the EC2 instances. With each browser refresh, the request was routed to a different EC2 instance. And that is how an application load balancer works. Other concepts are at play here, such as health checks and session stickiness, which I will discuss in a subsequent post.

If you followed along with this note, I hope you found something useful. Go ahead, fork the GitHub repository, and give it a try. Let me know if you have any questions or suggestions.

Please note that I did not discuss the security aspects of managing the secret and access keys in this note. There are two methods I am comfortable with: storing the values in a terraform.tfvars file, or passing them on the command line to the terraform command. Depending on your use case and security posture, you can decide which option is secure and manageable for you.
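For the first option, a terraform.tfvars file along these lines works; the variable names are illustrative and must match the variables declared in your configuration. For the second option, the same values can be passed with terraform apply -var="access_key=...". Either way, keep the credentials out of version control.

# terraform.tfvars — keep this file out of version control
access_key = "YOUR_ACCESS_KEY_ID"       # illustrative variable names
secret_key = "YOUR_SECRET_ACCESS_KEY"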
