Creating a Highly-Available Web Infrastructure using Terraform, Auto Scaling Groups and Security Groups

Abidoye Joshua mayowa
Jun 4, 2023


In this project tutorial, I use Terraform to build an Auto Scaling group that spans two subnets, ensuring high availability and fault tolerance. I will also create a security group and bootstrap an Apache web server so the website is readily accessible to clients. Finally, I will create an S3 bucket to serve as a remote backend, ensuring reliable infrastructure management.

Project scenario:

An e-commerce company needs to handle a surge in traffic during the holiday season. The company wants to ensure that its website remains available and responsive to customers even during high-traffic periods.

The project can be used to launch an Auto Scaling group that spans two subnets in the default VPC, ensuring high availability and fault tolerance. The Auto Scaling group will automatically scale up or down based on traffic, ensuring that the website remains responsive to customers at all times.

To ensure that the instances in the Auto Scaling group are secure, a security group is created that allows traffic from the internet and associates it with the instances. Additionally, a script is included in the user data to launch an Apache web server, ensuring that the website is available to customers.

To ensure that the Auto Scaling group has the appropriate capacity, it is set to have a minimum of two instances and a maximum of five. This ensures that the website can handle a surge in traffic during the holiday season while also minimizing costs during periods of low traffic.

To verify that everything is working correctly, the public IP addresses of the two instances are checked. One of the instances is manually terminated to verify that another one spins up to meet the minimum requirement of two instances.

Finally, to ensure that the infrastructure is reliable and can be easily managed, an S3 bucket is created and set as the remote backend for Terraform. With bucket versioning enabled, the state file is versioned and can be rolled back if necessary.


  1. Launch an Auto Scaling group that spans 2 subnets in your default VPC.
  2. Create a security group that allows traffic from the internet and associates it with the Auto Scaling group instances.
  3. Include a script in your user data to launch an Apache web server. The Auto Scaling group should have a min of 2 and a max of 5.
  4. To verify everything is working, check the public IP addresses of the two instances. Manually terminate one of the instances to verify that another one spins up to meet the minimum requirement of 2 instances.
  5. Create an S3 bucket and set it as your remote backend.

I have compiled a list of the files that I created using my IDE. The shell script file named script.sh comprises a set of commands written in Bash; it will handle the installation of the Apache web server on our AWS instances.

In the main.tf file, you specify the infrastructure that Terraform will handle, using resource blocks to define each resource that Terraform will manage.

The backend.tf file tells Terraform where to store the state file it uses to manage your infrastructure resources.

The "variables.tf" file lets users customize the infrastructure code with input variables; it contains a block defining each variable.

Step 1: Write the code for the main.tf file

provider "aws" {
  region = var.region
}

# Security Group
resource "aws_security_group" "terraform_sg" {
  name        = "allow_http"
  description = "Allow inbound HTTP, HTTPS, and SSH traffic"
  vpc_id      = aws_vpc.my_vpc.id # ensure this is the same VPC as the subnets

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # allow HTTP from anywhere
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # allow HTTPS from anywhere
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # consider restricting SSH to your own IP
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"] # allow all outbound traffic
  }
}

# EC2 Configuration
resource "aws_launch_configuration" "Apache_Bootstrap" {
  image_id                    = var.image_id # Amazon Linux 2 AMI
  instance_type               = var.instance_type
  security_groups             = [aws_security_group.terraform_sg.id]
  associate_public_ip_address = true
  user_data                   = file("script.sh")
}

# Auto Scaling Group
resource "aws_autoscaling_group" "terraform_autoscaling_group" {
  vpc_zone_identifier  = [aws_subnet.subnet1.id, aws_subnet.subnet2.id]
  launch_configuration = aws_launch_configuration.Apache_Bootstrap.name
  min_size             = var.min_size
  max_size             = var.max_size
  desired_capacity     = var.desired_capacity

  tag {
    key                 = "Name"
    value               = "tf_asg_group"
    propagate_at_launch = true
  }
}

# VPC and Subnets
resource "aws_vpc" "my_vpc" {
  cidr_block = var.cidr_block
}

resource "aws_subnet" "subnet1" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.1.0/24" # example CIDR block for subnet1
  availability_zone = "us-east-1a"  # separate AZs for high availability
}

resource "aws_subnet" "subnet2" {
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = "10.0.2.0/24" # example CIDR block for subnet2
  availability_zone = "us-east-1b"
}

# Internet Gateway
resource "aws_internet_gateway" "terraform_asg_gateway" {
  vpc_id = aws_vpc.my_vpc.id
}

# Route Table
resource "aws_route_table" "terraform_asg_rt" {
  vpc_id = aws_vpc.my_vpc.id

  route {
    cidr_block = "0.0.0.0/0" # send all outbound traffic through the gateway
    gateway_id = aws_internet_gateway.terraform_asg_gateway.id
  }
}

# Subnet Association
resource "aws_route_table_association" "terraform_asg_subnet_association" {
  subnet_id      = aws_subnet.subnet1.id
  route_table_id = aws_route_table.terraform_asg_rt.id
}

resource "aws_route_table_association" "terraform_asg_subnet_association_2" {
  subnet_id      = aws_subnet.subnet2.id
  route_table_id = aws_route_table.terraform_asg_rt.id
}
The file uses an AWS launch configuration to configure instances at launch, with variables to make the code more flexible and reusable. The minimum size (2), maximum size (5), and desired capacity (2) are defined in the variables.tf file.

To establish new VPC subnets, the code above creates a VPC with two subnets; because they route through an internet gateway, these are public subnets.

We need to create a security group to allow internet traffic to reach our Auto Scaling group instances. Using AWS Security Group, we’ll set it up in the default VPC and enable inbound traffic on port 80 from a specified CIDR block. This will ensure that any IP address on the internet can access the resource associated with the security group.

variable "region" {
  description = "The region to deploy the resources"
  default     = "us-east-1"
}

variable "image_id" {
  description = "The image id for the launch configuration"
  default     = "ami-0bef6cc322bfff646"
}

variable "instance_type" {
  description = "The instance type for the launch configuration"
  default     = "t2.micro"
}

variable "desired_capacity" {
  description = "The desired capacity for the Auto Scaling group"
  default     = 2
}

variable "max_size" {
  description = "The maximum size of the Auto Scaling group"
  default     = 5
}

variable "min_size" {
  description = "The minimum size of the Auto Scaling group"
  default     = 2
}

variable "bucket_name" {
  description = "The name of the S3 bucket"
  default     = "my-terraform-backend-bucket-luit"
}

variable "key_name" {
  description = "The pem key for this week's project"
  default     = "newkey.pem"
}

variable "cidr_block" {
  type        = string
  description = "Variable for VPC CIDR block"
  default     = "10.0.0.0/16" # must contain the subnet CIDR blocks used in main.tf
}

Step 2: Include the user data script to launch Apache web server

In the script file named "script.sh", add the necessary commands to install and start the Apache web server:
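The original script was shown as an image in the post; a minimal sketch for Amazon Linux 2 (the AMI family used in the launch configuration) would look like this. The test-page line is my own addition so you can tell the instances apart; the author's actual script.sh may differ.

```shell
#!/bin/bash
# Install and start the Apache web server on Amazon Linux 2
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
# Optional: write a simple test page that identifies this instance
echo "<h1>Hello from $(hostname -f)</h1>" > /var/www/html/index.html
```

Because this runs as EC2 user data, it executes once as root at first boot, so no sudo is needed.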

To include the script in the aws_launch_configuration resource, you just need to add it to the user_data field. You can reference the script file using the file() function, which reads its contents into the user data section.

Step 3: Create an S3 bucket and set it as your remote backend

Use AWS S3 as a remote backend in Terraform to store state files for resource management. During “terraform apply,” the state file is generated and updated automatically.

To create a bucket, go to the Amazon S3 Dashboard, choose a unique name, select your region, and disable the ACLs.

Go ahead and click that “Create bucket” button, and you should see the Successfully created bucket prompt on the top of your screen.

Now, let's go back to your IDE. Open the backend.tf file and modify the code to configure your newly created S3 bucket as the backend.
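A minimal backend configuration might look like the following sketch; the bucket name matches the one in variables.tf, and the key and region are assumptions you should adjust to your own setup.

```hcl
# backend.tf — assumes the bucket name "my-terraform-backend-bucket-luit"
terraform {
  backend "s3" {
    bucket = "my-terraform-backend-bucket-luit"
    key    = "terraform.tfstate"
    region = "us-east-1" # must match the region where the bucket was created
  }
}
```

Note that a backend block cannot reference variables, so the bucket name and region must be hard-coded here; after adding it, rerun "terraform init" so Terraform can migrate the state to S3.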

After configuring all your code, run "terraform init" in your working directory to download all necessary provider plugins.

Next, run "terraform fmt" to tidy up your code and enhance its presentation.

Validate all configuration files by running "terraform validate".

To review an execution plan illustrating Terraform's actions when implementing changes to the infrastructure, run "terraform plan".

Finally, apply the changes to the infrastructure as defined in your configuration files by running "terraform apply".
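Taken together, the steps above are the standard Terraform command sequence, run from the working directory:

```shell
terraform init      # download provider plugins and initialize the backend
terraform fmt       # format the configuration files
terraform validate  # check the configuration for errors
terraform plan      # preview the changes Terraform will make
terraform apply     # apply the changes (type "yes" to confirm)
```
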

Step 4: Verify everything is working

Once you have created your Auto Scaling group and launched the instances, it is crucial to ensure that everything is functioning properly. To achieve this, head over to the EC2 Dashboard on the AWS console. Two fresh instances should be visible, currently in the initialization phase. Give them a minute or two to complete the initialization process and successfully pass their status checks.

After the instances have been launched, select one of them and obtain the Public IPv4 address by copying it.

Paste it into your web browser with “http://” in front of it to check if the Apache web server was successfully installed.

Once you've confirmed that everything is functioning properly, you ought to be able to view the Apache test page. Be certain to also verify the other instance.


Now it's time to verify the proper functioning of the Auto Scaling group. Let's head back to the EC2 instance Dashboard and terminate one of the instances.

Wait for a minute or two and refresh the instance page. You should see another instance pop up, and it will take a moment to finish initializing.

It appears that the Auto Scaling group is performing as anticipated!

Go to the EC2 Dashboard, click "Auto Scaling groups", select the group, and open the "Activity" tab to watch this happen.

The history should show that an instance was launched in response to an unhealthy instance needing to be replaced.

To verify the backend configuration, go to the S3 Dashboard and check that a "terraform.tfstate" file is present in the bucket you created.

To view the complete Terraform state file in your browser, simply select the file and click the "Open" option.

Step 5: Clean up

To clean up your infrastructure, run "terraform destroy" in your terminal after completing necessary tests and verifications.

Type “yes” to confirm the action, and Terraform will destroy all the resources.


I would like to express my gratitude for accompanying me on this Terraform tutorial. Let’s continue learning together as I release more Terraform tutorials in the following weeks. Don’t forget to check back every week!

Check out my GitHub link for this tutorial here. See you next time!


