Deploying a 3-Tier Architecture on AWS using Docker Swarm
Introduction
What is a Docker Swarm?
This container orchestration tool allows you to manage a cluster of Docker nodes and deploy and scale your applications across them.
Each machine that joins the swarm becomes a node in the cluster, and containers are scheduled onto those nodes as replicated service tasks.
Docker Swarm Basic Terminologies
Docker Swarm cluster
A Docker Swarm cluster is a group of Docker nodes that work together to provide a highly available and scalable platform for deploying and running applications using Docker containers.
Docker Swarm node
A node in Docker Swarm refers to a physical or virtual machine that is part of a Docker Swarm cluster. It can be either a Manager node or a Worker node.
Docker stack deploy
This command deploys a stack (a group of related services defined in a Compose file) to a Docker Swarm cluster.
Manager node
This is a node that manages the Swarm cluster and coordinates the tasks that run on worker nodes.
Worker node
This is a node that runs tasks and services as directed by the Manager node.
Docker service
A service defines the tasks to run on the swarm's nodes; the docker service command is used to create, inspect, and scale services in a Docker Swarm cluster.
Objectives:
Using AWS, create a Docker Swarm that consists of one manager and three worker nodes.
Verify the cluster is working by deploying the following tiered architecture:
- a service based on the Redis docker image with 4 replicas
- a service based on the Apache docker image with 10 replicas
- a service based on the Postgres docker image with 1 replica
Prerequisites
- AWS account
- Basic knowledge and understanding of Docker
- Basic Linux command line knowledge
- Docker Hub account
Step 1: Set up Amazon EC2 Instance Environment
For a refresher on how to create EC2 instances, click here to view my “How to Launch an EC2 Instance with Apache Server using AWS CLI” article.
Create swarm security group.
First, we need to set up our EC2 environment by opening the ports Docker Swarm uses. We will need two security groups: one for the swarm manager and one for the swarm workers.
Create swarm manager security group.
Head to the Amazon VPC console, click Security Groups, click Create security group, then add the following inbound rules: TCP 2377 (cluster management traffic), TCP 7946 and UDP 7946 (node-to-node communication), UDP 4789 (overlay network traffic), and protocol 50 (ESP, used when the overlay network is encrypted).
Create swarm worker security group.
Follow the same steps used to create the manager security group, but add the following inbound rules instead: TCP 7946, UDP 7946, UDP 4789, and protocol 50 (ESP).
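If you prefer the AWS CLI to the console, the port rules can be sketched as a loop. This is a dry run that only prints the calls; the security group id and CIDR below are hypothetical placeholders, so substitute your own values (and drop the leading echo) before running it for real.

```shell
# Dry run: print one authorize-security-group-ingress call per Swarm port.
# SG_ID and the CIDR are hypothetical placeholders -- substitute your own.
SG_ID="sg-0123456789abcdef0"
rules="tcp:2377 tcp:7946 udp:7946 udp:4789"
for rule in $rules; do
  proto=${rule%:*}   # part before the colon, e.g. "tcp"
  port=${rule#*:}    # part after the colon, e.g. "2377"
  echo aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol "$proto" --port "$port" --cidr 10.0.0.0/16
done
```

The worker group uses the same loop minus the TCP 2377 rule; protocol 50 (ESP) has no port number, so it is easiest to add in the console.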
Create manager instance
Launch a new EC2 instance. Under the Advanced details section, scroll down to User data and paste the script below into the text box. This bootstraps the instance to install, enable, and start Docker at launch, saving us from configuring it manually later. In the Summary section on the right, set the Number of instances to 1, then click Launch instance.
#!/bin/bash
#Update all yum packages
sudo yum update -y
#Install Docker
sudo yum install -y docker
#Enable Docker
sudo systemctl enable docker.service
#Start Docker
sudo systemctl start docker.service
#Install Docker Compose
sudo yum install -y docker-compose
Create worker instance
Repeat the above steps to create the worker nodes.
In the Summary section on the right, change the Number of instances to 3.
Once created, head back to the EC2 console and rename your worker nodes 1–3. You will also need a key pair to SSH into the instances; if you did not create one when launching, create one now.
Step 2: Connect To Nodes and Verify Docker Installation
Let’s ssh into our manager node instance to see if it’s running.
Select your manager node — Under actions, click connect — select the SSH client tab — copy the SSH command displayed under “Example.”
From the image above, we can see that we could ssh into our manager node instance using our key pair.
After connecting to our ec2, let’s confirm docker has been successfully installed on our manager node instance by running the command below:
docker --version
Next, run the “exit” command to log out of the manager node instance. Then repeat this process: SSH into each of the three worker nodes and confirm that Docker has been successfully installed on them.
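Rather than opening one terminal per worker, the check can be scripted. The key path and worker IPs below are hypothetical placeholders; as written, the sketch only prints the SSH commands (remove the echo to actually run them).

```shell
# Dry run: print the docker-version check for each worker node.
# KEY and the worker IPs are hypothetical placeholders.
KEY="$HOME/mykeypair.pem"
workers="3.91.10.1 3.91.10.2 3.91.10.3"
for ip in $workers; do
  echo ssh -i "$KEY" "ec2-user@$ip" "docker --version"
done
```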
Step 3: Create Swarm and Assign Manager/Worker roles
To create the swarm, SSH back into the manager node. Run the “sudo su” command to switch to root privileges so that we don't need to prefix every command with sudo.
To initialize the swarm, run the following command (if your instance has more than one network interface, add the --advertise-addr <private-ip> flag):
docker swarm init
The output shows that our node “is now a manager”.
The next step is to copy the “docker swarm join” command from the output. Open three more terminals, SSH into your three worker nodes, then paste and run the docker swarm join command in each. Make sure you have first run “sudo su” for root privileges. If you lose the join command, you can reprint it on the manager with “docker swarm join-token worker”.
Your output should read, “This node joined a swarm as a worker.”
Head back to our manager node and run the following command to verify that the swarm has been set up:
docker node ls
The node with the asterisk (*) next to it is the node you are currently connected to; its MANAGER STATUS column reads Leader, marking it as the manager node.
Step 4: Create Services
Create Redis Service with x4 Replicas:
We will be working from the swarm manager from here on, since our worker nodes have now been added to the swarm.
To create our Redis service, we can use the official image from Docker Hub. Run the following command:
docker service create --name redis --replicas 4 redis
Create Apache Service with x10 Replicas:
To create our Apache service, run the following command:
docker service create --name apache --replicas 10 httpd:latest
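With ten replicas across our four nodes (one manager plus three workers; manager nodes also run tasks by default), the scheduler spreads tasks roughly evenly. Actual placement depends on node resources and availability, but a rough sketch of an even split looks like this:

```shell
# Rough sketch: spread 10 apache replicas over 4 nodes as evenly as possible.
replicas=10
nodes=4
summary=""
for i in $(seq 1 "$nodes"); do
  # the first (replicas % nodes) nodes each get one extra task
  extra=$(( i <= replicas % nodes ? 1 : 0 ))
  per_node=$(( replicas / nodes + extra ))
  summary="$summary node$i:$per_node"
  echo "node$i runs $per_node apache tasks"
done
```

Running “docker service ps apache” shows the real placement chosen by the scheduler.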
Create Postgres Service with x1 Replica:
To create our Postgres service, we will create a docker compose file.
From your swarm_manager_node, create a new directory with the “mkdir” command. Then change into that directory with the “cd” command.
We will use the built-in Vim text editor to create and edit our compose file. Run the following command:
vim docker-compose.yml
Once inside the editor, press “i” for the insert command, paste in the following code, then press “esc” to exit insert mode and finally type and enter “:wq” to save your file and exit the text editor.
version: '3.8'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    deploy:
      replicas: 1
      restart_policy:
        condition: any
(Note: the plain-Compose restart: always option is ignored by docker stack deploy; the deploy block above is the swarm equivalent and pins the service to one replica.)
Run the following command to deploy our compose file as a stack (note that docker stack deploy prefixes service names with the stack name, so the service will show up as postgres_db):
docker stack deploy -c docker-compose.yml postgres
Run the following command to verify that all your services are up and running:
docker service ls
As we can see from the image above, we have all our desired services and their replicas.
To manage the cluster and view the tasks of a service, run the following command with the service name:
docker service ps <service_name>
Step 5: Clean Up Our Environment
Run the following command on each node to release it from the swarm:
docker swarm leave --force
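It's cleaner to remove the services and stack before the nodes leave. A dry-run sketch of the full teardown order, run from the manager (remove the echo to execute):

```shell
# Dry run: print the teardown commands in the order they should run.
for cmd in "docker service rm redis apache" \
           "docker stack rm postgres" \
           "docker swarm leave --force"; do
  echo "$cmd"
done
```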
Finally, exit all the instances and head to the EC2 console to terminate them.
Great! That’s the end of the project.
We have successfully deployed a 3-tier architecture on AWS using Docker Swarm. Thank you for taking the time to read this write-up. I will be posting more hands-on projects, so join me next time.
If you enjoy cloud engineering content like this and would like to see more, follow me at https://www.linkedin.com/in/joshaby/.