Discover the process of implementing a two-tier architecture with the help of AWS and Terraform Cloud.

Abidoye Joshua Mayowa
10 min read · Jun 9, 2023

Greetings, and thank you for joining us for the 22nd week of the LevelUp In Tech Bootcamp! Our current project revolves around AWS and Terraform, two of the most beloved tools on my journey as a cloud engineer. If you, too, find joy in infrastructure and task automation, then this piece of writing will surely interest you.

What is Terraform & How does it work?

Terraform automates the process of creating and managing resources on any cloud platform, making it one of the most popular Infrastructure as Code tools available.

  • Write: As a user, you have the ability to designate resources from various cloud providers and services. An illustration of this is setting up a configuration for the deployment of an application on virtual machines within a Virtual Private Cloud (VPC) network, complete with security groups and a load balancer.
  • Plan: Terraform plans infrastructure changes based on your configuration.
  • Apply: Once approved, Terraform executes the intended actions in the appropriate sequence while considering any dependencies between resources.
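As a minimal sketch of the "write" step, a Terraform configuration is just declarative blocks in a .tf file. The resource below is purely illustrative (the bucket name is a made-up placeholder); `terraform plan` would preview it and `terraform apply` would create it:

```hcl
# Illustrative minimal configuration: one provider and one resource.
provider "aws" {
  region = "us-east-1"
}

# "terraform plan" previews this bucket; "terraform apply" creates it.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket-12345" # placeholder; bucket names must be globally unique
}
```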

What is a two-tier architecture?

A two-tier architecture splits an application into two layers. The first is the web tier, which includes the web server and the user interface; it interacts with users and processes their input. The second is the database tier, which processes and stores data. The two tiers communicate over a network to exchange data. This architecture suits small to medium-sized applications that do not require a large infrastructure.

Objectives:

  1. Create a highly available two-tier AWS architecture containing the following:

a. Custom VPC with:

  • 2 Public Subnets for the Web Server Tier
  • 2 Private Subnets for the RDS Tier
  • A public route table
  • A private route table

b. An EC2 instance running an Apache web server in each public subnet of the web tier, with the necessary security groups configured.

c. One RDS MySQL instance (micro) in the private RDS subnets with appropriate security groups.

  2. Deploy this using Terraform Cloud as a CI/CD tool to check your build.

  3. Push your code to GitHub and include the link in your write-up.

Prerequisites:

  • An AWS Admin Account with Access Key and Secret Access Key
  • VS Code or AWS Cloud9 as an IDE environment.
  • A free Terraform Cloud Account.
  • A GitHub Account.
  • Familiarity with Linux and Git commands.

Step 1: Setting up an IDE and Terraform Cloud

To complete this project, we’ll use Visual Studio Code for coding and formatting. Our task involves crafting five files: main.tf, variables.tf, providers.tf, database.tf, and apache.sh.

We will use Terraform Cloud as our remote backend. It keeps track of our state, allowing Terraform to know which resources it is managing and creating.

To log in, enter the following command:

terraform login

Terraform will ask you to confirm that it should generate an API token. Enter “yes.”

To set up remote access to Terraform Cloud, kindly open it and create a token.

Give the token a description.

Copy the token into the terminal and press “enter.”

If done correctly, you should see the Terraform logo in your terminal along with directions on how to use Terraform Cloud; they are well worth reading.

Create an organization in Terraform Cloud by clicking “create an organization” on the organization page.

Create a workspace after successfully creating your organization. It will help you deploy resources quickly in various environments.

Select the “CLI-driven workflow” and create the workspace by following the prompts.

To let Terraform create AWS resources, generate an access key and secret access key from the “Security credentials” tab in IAM.

After acquiring your access keys, proceed to the “Variable Sets” tab within your organization.

Then click the “Create variable set” button.

Enter the necessary information and add the variables for AWS to Terraform Cloud.

Then click “create variable set.”

Alright, let's move on to the next task at hand.

Step 2: Configuring the providers.tf and main.tf files

Terraform providers let teams manage their infrastructure consistently from a single configuration file. Naming and versioning the providers, together with setting up a remote backend, ensure the desired state is tracked and easy to manage through Terraform.

provider "aws" {
  region = "us-east-1"
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "two_tier_architecture"

    workspaces {
      name = "devops"
    }
  }
}

In the code below, we’ll create a main.tf file holding the network configuration: the VPC, subnets, route tables, security groups, and two instances for the web tier.

We will create a custom VPC that uses the CIDR block from our variables file.

The internet gateway is created and attached to the custom VPC by including the vpc_id. This will provide internet access for the custom VPC.

Copy the following code into the main.tf file and save it.

#VPC
resource "aws_vpc" "vpc" {
  cidr_block = var.vpc_cidr

  tags = {
    Name = "vpc"
  }
}

#INTERNET GATEWAY
resource "aws_internet_gateway" "internet_gateway" {
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = "igw"
  }
}
#SUBNETS

data "aws_availability_zones" "available_az" {}

resource "aws_subnet" "public_subnet1" {
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = var.subnet1_cidr
  map_public_ip_on_launch = true
  availability_zone       = data.aws_availability_zones.available_az.names[0]

  tags = {
    Name = "public_subnet1"
  }
}

resource "aws_subnet" "public_subnet2" {
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = var.subnet2_cidr
  map_public_ip_on_launch = true
  availability_zone       = data.aws_availability_zones.available_az.names[1]

  tags = {
    Name = "public_subnet2"
  }
}

# Private subnets should not auto-assign public IPs.
resource "aws_subnet" "private_subnet1" {
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = var.subnet3_cidr
  map_public_ip_on_launch = false
  availability_zone       = data.aws_availability_zones.available_az.names[0]

  tags = {
    Name = "private_subnet1"
  }
}

resource "aws_subnet" "private_subnet2" {
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = var.subnet4_cidr
  map_public_ip_on_launch = false
  availability_zone       = data.aws_availability_zones.available_az.names[1]

  tags = {
    Name = "private_subnet2"
  }
}

#ROUTE TABLES
resource "aws_route_table" "public_route_table" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block = var.route_table_cidr
    gateway_id = aws_internet_gateway.internet_gateway.id
  }

  tags = {
    Name = "public_route_table"
  }
}

resource "aws_route_table_association" "public_subnet1_association" {
  subnet_id      = aws_subnet.public_subnet1.id
  route_table_id = aws_route_table.public_route_table.id
}

resource "aws_route_table_association" "public_subnet2_association" {
  subnet_id      = aws_subnet.public_subnet2.id
  route_table_id = aws_route_table.public_route_table.id
}


resource "aws_instance" "instance1" {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.web_server_sg.id]
  user_data              = file("apache.sh")
  subnet_id              = aws_subnet.public_subnet1.id

  tags = {
    Name = "instance webtier1"
  }
}

resource "aws_instance" "instance2" {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.web_server_sg.id]
  user_data              = file("apache.sh")
  subnet_id              = aws_subnet.public_subnet2.id

  tags = {
    Name = "instance webtier2"
  }
}
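The original assignment asks for an Auto Scaling Group, while this walkthrough launches the two instances directly. If you want to match the assignment more closely, a hedged sketch using a launch template and an ASG spanning both public subnets could look like the following (the resource names `web_template` and `web_asg` are illustrative; it reuses the variables and security group defined elsewhere in this project):

```hcl
resource "aws_launch_template" "web_template" {
  name_prefix            = "web-"
  image_id               = var.ami_id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.web_server_sg.id]
  user_data              = filebase64("apache.sh") # launch templates expect base64-encoded user data
}

resource "aws_autoscaling_group" "web_asg" {
  desired_capacity    = 2
  min_size            = 2
  max_size            = 3
  vpc_zone_identifier = [aws_subnet.public_subnet1.id, aws_subnet.public_subnet2.id]

  launch_template {
    id      = aws_launch_template.web_template.id
    version = "$Latest"
  }
}
```

With this in place, the two standalone `aws_instance` resources would be removed, since the ASG takes over launching the web servers.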

Step 3: apache.sh and variables.tf file configuration

To bootstrap the Apache web server, the user data script needs to go in the apache.sh file. Simply copy the code below and paste it into apache.sh.

#!/bin/bash
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
echo "<html><body><h1>This is Joshua Abidoye Week22 Project Tier 2 </h1></body></html>" > /var/www/html/index.html

To enhance the reusability of our configuration and facilitate the deployment of our resources, we need to create some variables. Thus, let’s create a new file named “variables.tf” and replicate the code below in it.

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "subnet1_cidr" {
  type        = string
  description = "Subnet 1 cidr block"
  default     = "10.0.1.0/24"
}

variable "subnet2_cidr" {
  type        = string
  description = "Subnet 2 cidr block"
  default     = "10.0.2.0/24"
}

variable "subnet3_cidr" {
  type        = string
  description = "Subnet 3 cidr block"
  default     = "10.0.3.0/24"
}

variable "subnet4_cidr" {
  type        = string
  description = "Subnet 4 cidr block"
  default     = "10.0.4.0/24"
}

variable "route_table_cidr" {
  type        = string
  description = "cidr block for public route table"
  default     = "0.0.0.0/0"
}

variable "ami_id" {
  type        = string
  description = "AMI ID"
  default     = "ami-09988af04120b3591"
}

variable "instance_type" {
  type        = string
  description = "The instance type"
  default     = "t2.micro"
}

We’ll use Amazon Linux 2 and our custom VPC to launch EC2 instances in two public subnets across different AZs for high availability. The two private subnets sit in the same pair of AZs.
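Because every variable has a default, nothing more is required; but the defaults can be overridden without editing variables.tf. For example, a terraform.tfvars file (picked up automatically by plan and apply; the values below are illustrative) might look like:

```hcl
# terraform.tfvars -- overrides the defaults in variables.tf
vpc_cidr      = "10.1.0.0/16"
instance_type = "t3.micro"
```

Equivalently, a single value can be overridden on the command line with `terraform apply -var="instance_type=t3.micro"`.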

Step 4: The database.tf file configuration

In today's demonstration, we will use an RDS database. Amazon Relational Database Service (RDS) is a managed relational database service provided by AWS. This database will process and store the data coming from the web tier.

The database file, "database.tf," contains a resource block that creates an RDS MySQL instance and a subnet group for it. The "allocated_storage" parameter is set to 10, the minimum value, which keeps the instance within the free-tier plan.

# Subnet group spanning the two private subnets for the RDS instance
resource "aws_db_subnet_group" "subnet_group" {
  name       = "subnet_group"
  subnet_ids = [aws_subnet.private_subnet2.id, aws_subnet.private_subnet1.id]

  tags = {
    Name = "My Database subnet group"
  }
}


resource "aws_db_instance" "instance_db" {
  allocated_storage      = 10
  db_name                = "instance_db"
  engine                 = "mysql"
  engine_version         = "5.7"
  instance_class         = "db.t2.micro"
  username               = "project22"
  password               = "project22"
  skip_final_snapshot    = true
  publicly_accessible    = false
  vpc_security_group_ids = [aws_security_group.database_sg.id]
  availability_zone      = data.aws_availability_zones.available_az.names[0]
  db_subnet_group_name   = aws_db_subnet_group.subnet_group.id
  port                   = "3306"
}
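One caveat: the database username and password above are hardcoded in plain text. A common pattern, sketched below, moves them into sensitive input variables so they stay out of version control (the variable names `db_username` and `db_password` are illustrative; their values would come from terraform.tfvars or Terraform Cloud workspace variables):

```hcl
# Sensitive credentials kept out of the committed configuration.
variable "db_username" {
  type      = string
  sensitive = true
}

variable "db_password" {
  type      = string
  sensitive = true
}

# In aws_db_instance, you would then reference:
#   username = var.db_username
#   password = var.db_password
```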

#security group for the web tier instances
#allowing http and ssh
resource "aws_security_group" "web_server_sg" {
  name        = "web_server_sg"
  description = "Allow traffic"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

#security group for the database, allowing MySQL traffic
#only from the web tier security group
resource "aws_security_group" "database_sg" {
  name        = "mydatabase_sg"
  description = "Allow traffic"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.web_server_sg.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Step 5: Deploy the Infrastructure

Before we deploy the infrastructure, we must initialize the configuration files. The terraform init command initializes a working directory containing Terraform configuration files. It is the first command to run after writing a new Terraform configuration or cloning an existing one from version control, and it is safe to run multiple times.

terraform init

If you look at the “States” tab on the left of the workspace page, you can see that Terraform Cloud is now tracking the state.

We'll trigger runs in Terraform Cloud from the CLI in VS Code.

Click “Start run.” You’ll be presented with a full plan of all the resources that will be applied to your AWS account.

Check your AWS resources in the console to ensure they were created successfully.

Step 6: Verify Our Infrastructure

Please go to the EC2 dashboard and locate the recently created instances.

As you can see, our two EC2 instances were created.

We will now see if we can grab the public IP of each Instance and put it into a browser.

If done successfully, you should see the following page.

First Instance
Second Instance

Now let’s check to verify we have our RDS Database as well.

Install the MariaDB client on a web tier instance to check the database connection.

To install MariaDB, enter the following command:

sudo yum install mariadb

Access the database endpoint from the web tier EC2 instance.

Enter the following command to log into the database:

mysql -h <Endpoint> -P 3306 -u <username> -p

We have successfully logged into the RDS server from our web tier. Congratulations!

Step 7: Cleanup

Ready to tidy up and wipe out all the infrastructure resources we’ve put in place? Here’s the command you need to run:

terraform destroy -auto-approve

After successfully deleting the resources, it is recommended to return to the console to confirm their deletion.

Conclusion

I would like to express my gratitude for accompanying me on this Terraform tutorial. Let’s continue learning together as I release more Terraform tutorials in the following weeks. Don’t forget to check back every week!

Check out the code for this tutorial on my GitHub: https://github.com/Abidoye95/terraform-project-22. See you next time!


Abidoye Joshua Mayowa

DevOps Engineer. I'm interested in collaborating with anyone interested in cloud engineering or cloud DevOps.