Configure Reverse Proxy using Ansible on AWS

Configuring Reverse Proxy on AWS using Ansible Role and getting the IP of Instance Dynamically

MishanRG
16 min read · May 31, 2021


Hello everyone, I hope you are all doing well. Here I am with another blog, on configuring a reverse proxy on AWS and getting the instance IPs dynamically into the inventory.

New to Ansible? Want to know more about it?

Here are some of my blogs you can check out about Ansible and its Configuration.

So let’s get started with our task.

To connect with AWS, we can use the AWS UI, the AWS CLI, the API, or an SDK. Here we will connect using the Python SDK called “boto3”. To connect to AWS, we need to authenticate ourselves, so we will use AWS keys to log in. We can create the keys from an IAM user with programmatic access, which gives us an access key ID and a secret access key.

Ansible Vault

We will be doing this practical on RedHat 8 Linux, which will be our controller node. First, create a workspace to keep all the files in one place, which makes them easier to manage. Here, I created a directory called “playbook-rponAWS” to store all the required files.

Now we have downloaded the keys. They are sensitive, and we cannot show or give them to anyone, so we will use Ansible Vault. The vault keeps our files encrypted so that no one can see their content. To create a vault, we can use the command:

ansible-vault create <fileName>

Here I have created a file called credential.yml and given it a password. The internal content of the file looks like this.

We need to give access_key and secret_key, which we got from AWS.
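A minimal sketch of what credential.yml holds (the values below are placeholders, not real keys):

access_key: AKIAXXXXXXXXXXXXXXXX
secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

These two variable names must match whatever the playbook later references as “{{ access_key }}” and “{{ secret_key }}”.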

Ansible Configuration File

We also need a .pem key from AWS, which lets us log in or SSH into the instances. We could create one with Ansible, but I already have an old key, so I will use it for all of my instances: every instance we are going to create will use this key for SSH.

I have transferred the key in this same workspace with the name “awsKeyApril.pem”.

We have to create an “ansible.cfg” file, where we configure what Ansible should do and how and where it should run.

I have created a file called “ansible.cfg” in the directory and filled it with the configuration described below. Let me break it down. First, I have given private_key_file, pointing to the key I downloaded from AWS, which sits in the same location. Then remote_user is the user who will run our scripts on the managed node. I have disabled host_key_checking to make the run less interactive, and the same goes for ask_pass. Here I have used Ansible Roles, so the roles_path is also mentioned (we will create the roles after some time). Finally, I have used privilege escalation so that our tasks become the root user and get root power on the managed node.
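A sketch of such an ansible.cfg with the settings just described (the exact user name, key path, and roles path here are assumptions):

[defaults]
# key downloaded from AWS (assumed relative path)
private_key_file = ./awsKeyApril.pem
# default user on Amazon Linux AMIs (assumed)
remote_user = ec2-user
host_key_checking = false
ask_pass = false
# roles will be created inside this workspace (assumed)
roles_path = ./

[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false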

With this, our Ansible is configured to run for this workspace playbook.

Play Book for Back End Servers

Now we will create a playbook to run our task. We will create a file “ec2.yml” in our working directory and write the code inside it. I have put the code of the playbook below, and I will explain each part.

First, I have used localhost as the host because this play runs on the local system. Then I have used vars_files to pull in the credential.yml file we created to store the access keys.

- hosts: localhost
  gather_facts: no
  vars_files:
    - credential.yml

The first task of the play installs boto3 on our system, which is what lets Ansible connect with AWS.

  tasks:
    - name: "boto3 installation"
      pip:
        name: "boto3"
        state: present

The second task creates a Security Group for our backend web servers. Here I have used the module “ec2_group”, which creates a Security Group. I have given the region and passed the access key and secret key as variables. Then I have defined the rules for the Security Group to allow SSH (port 22) and HTTP (port 80), which we need to connect to the system and to reach the web service from outside. For egress, I have allowed all IPs and ports.

- name: "creating security group for Web Server(Backend Server)"       
ec2_group:
name: backendSecurity
description: "Security Group made for backend webserver"
region: "ap-south-1"
aws_access_key: "{{ access_key }}"
aws_secret_key: "{{ secret_key }}"
rules:
- proto: tcp
from_port: 22
to_port: 22
cidr_ip: 0.0.0.0/0
- proto: tcp
from_port: 80
to_port: 80
cidr_ip: 0.0.0.0/0
rules_egress:
- proto: "all"
cidr_ip: 0.0.0.0/0

Our next task is to check whether we already have an instance running, so we won't launch the same instance again and again. This makes the play idempotent: it won't launch an instance if one is already running. Here we have used a command to look for any instance with a particular tag and stored the output in the “query_private_ip” register. We will use this in the next task as a when condition, which means that if this command returned output, we will not run the task that creates instances.

    - name: Query for existing instance
      command: >
        aws ec2 describe-instances --region ap-south-1
        --filters Name=tag:Name,Values=backendServer
        --query "Reservations[].Instances[].PrivateIpAddress"
        --output text
      register: query_private_ip

Our next task creates the AWS EC2 instances; it has a condition deciding when it should run, and it uses the Security Group we created above. We have used the module called ec2, which launches EC2 instances, and set all the required keywords per our requirements. The count keyword is also there: it launches that many instances with the same configuration, i.e., 3 in this case.

And this task runs only when query_private_ip.stdout == '' , i.e., when the query above returned nothing. We have also stored the output of this task, or we can say its facts, in the keyword “backend” using register.

- name: "launching ec2 instance for backend webserver"
ec2:
key_name: awsKeyApril
instance_type: t2.micro
image: ami-0e306788ff2473ccb
wait: true
group: backendSecurity
count: 3
vpc_subnet_id: subnet-d5626bbd
assign_public_ip: yes
region: ap-south-1
state: present
aws_access_key: "{{ access_key }}"
aws_secret_key: "{{ secret_key }}"
instance_tags:
Name: backendServer
when: query_private_ip.stdout == ''
register: backend

Now 3 instances will be created in our AWS account, and we need to add these instances to a host group so we can run our further tasks on them. For this, we have used the module “add_host”, which takes the public IP of each instance and stores it in a group called “backendServer”. The loop keyword iterates over all the launched instances, and the item variable holds the current instance on each iteration. Here, the when keyword is used again, checking the same condition as above.

- name: "Adding our instances launched above --> to the backend 
server group"
add_host:
hostname: "{{ item.public_ip }}"
groupname: backendServer
loop: "{{ backend.instances }}" #this we are getting from above
"register"
when: query_private_ip.stdout == ''

We sometimes need to wait for our instances to finish launching in AWS, because it can take some seconds. For this, we have 2 ways: either we pause the playbook for some time before the next task, or we check whether SSH is reachable on the instance before we proceed. If SSH is up, we can go on with the next task.

- name: "Pausing the Playbook for 10 sec so that EC2 instance can 
start(NOT RECOMMENDED IF USING BELOW TASK)"
pause:
seconds: 10
when: query_private_ip.stdout == ''
- name: "Waiting for system to get ready and allow us to SSH"
wait_for:
host: "{{ item.public_dns_name }}"
port: 22
state: started
loop: "{{ backend.instances }}"
when: query_private_ip.stdout == ''

This was the end of the play for localhost. We have installed boto3 on the local system, created a Security Group for the backend servers, launched the backend servers, and added them to a host group named “backendServer”.

Role For Back End Servers (Web Servers)

When we work with Ansible, we have to create and manage many files at once. There can be many tasks inside, and some playbooks become very long and hard to debug or maintain later. So Ansible has the concept of ROLES. Roles help us organize Ansible tasks and their supporting files. A role is basically a folder with subfolders for tasks, variables, templates, and handlers. This is a great way to manage Ansible projects.

Here we will now create a role to manage our Back End Servers. These servers have to host a website, so basically they will be web servers.

To create a role, we can use the command:

ansible-galaxy init <roleName>

Here I have created 2 roles. First I tried to create a role called “webserverLB” inside our main directory, but as I already had a role with that name, it gave me an error. For now, we will be working on the role “lbonAWS”. This role will configure our backend servers as web servers.

Let’s get inside the role directory “lbonAWS.”

Here we have a lot of folders. Each folder stores the YAML code for a specific purpose. When we write a playbook, we use many tasks, templates, variables, and other files, so a role stores them separately in dedicated folders, which helps with management; we then add the required content inside the required folders.
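For reference, ansible-galaxy init creates this standard layout inside the new role directory:

lbonAWS/
├── defaults/
├── files/
├── handlers/
├── meta/
├── tasks/
├── templates/
├── tests/
└── vars/

Most of these folders come with their own main.yml, which Ansible picks up automatically.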

First, we go into the tasks directory ($ cd tasks). We can see that there is one file, main.yml; inside this file we will write the content of our tasks. We don't need any extra setup; we can write all the tasks right here.

I have started with the first task, where I install httpd and PHP on the system. Then I copy the content of the file “index.php” from my local system to “/var/www/html/index.php” on the managed node using the copy module.

The content of the file was:

The PHP code gets the IP address of the system and prints it out.
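A minimal index.php along those lines, assuming PHP's standard $_SERVER superglobal is what was used, could be:

<?php
  // print the IP address of the server that handled this request
  echo "Served from: " . $_SERVER['SERVER_ADDR'];
?>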

Then, as the last task, I have started the httpd service.

So this was the whole job of the role: install httpd and PHP, put the content in place, and start the service, so that our web server is ready. Put together, it looks like the sketch below.
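A sketch of the role's tasks/main.yml matching that description (module arguments are assumptions where the original screenshots are not shown):

# lbonAWS/tasks/main.yml
- name: "installing httpd and php"
  package:
    name:
      - httpd
      - php
    state: present

- name: "copying web page content"
  copy:
    src: index.php        # looked up in the role's files/ directory
    dest: /var/www/html/index.php

- name: "starting httpd service"
  service:
    name: httpd
    state: started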

Notice that I did not have to state which hosts the role runs on, because roles are written to be reusable by anyone. We can upload roles to Ansible Galaxy or download and use them as per our needs. They are like custom-built, well-organized playbooks.

Role For Front End Servers (Load Balancer)

In the same way, we now need to create the role that will configure our front-end server as a load balancer. Here we first go inside the role folder “flbonAWS”. Inside, we go to the tasks folder, and in main.yml we start our code.

Our first task installs the haproxy software, which provides the reverse proxy on the system. Then we have a configuration file that we push into haproxy's configuration folder: we are replacing haproxy's configuration file with one from our system. The file name is haproxy.cfg, and its location is /etc/haproxy/haproxy.cfg.

Here in the code, we have used a template, and it updates our config file. Whenever the configuration file is updated, we have to restart the service, so I have used the concept of handlers here. The notify keyword notifies the handler named “restarting loadbalancer” when this task reports a change; otherwise, it won't trigger the handler. The final task starts the service. A sketch of these tasks follows.
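A sketch of flbonAWS/tasks/main.yml along those lines (again, exact module arguments are assumptions):

# flbonAWS/tasks/main.yml
- name: "installing haproxy"
  package:
    name: haproxy
    state: present

- name: "updating haproxy configuration file from our template"
  template:
    src: haproxy.cfg      # looked up in the role's templates/ directory
    dest: /etc/haproxy/haproxy.cfg
  notify: restarting loadbalancer   # handler runs only if this task changed something

- name: "starting haproxy service"
  service:
    name: haproxy
    state: started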

If we were using handlers in a simple playbook, we would have written them at the end of the same file, but this is a role, and a role has a dedicated location for handler code. We go back to the role directory, where we have a folder called handlers; inside it is the main.yml file.

There we mention this code.
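A sketch of that flbonAWS/handlers/main.yml; the handler's name must match the string used with notify in the task above:

- name: restarting loadbalancer
  service:
    name: haproxy
    state: restarted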

In the same way, we have also used a template file in that task; the template file is stored in the folder called templates in the same role directory.

So the templates folder's haproxy.cfg file contains the following.

Let me explain this file. It defines the port that the front-end server will listen on and then the list of backend servers across which the front-end server will balance.

In the backend section of the file, we have used a Jinja loop that prints every IP available in the backendServer host group we built while creating the EC2 instances. I have used port 80, because our web servers listen on that port: our load balancer should bind to its own front-end port and send traffic to those IPs on port 80 whenever someone hits the front-end server. A sketch of that template section follows.
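A sketch of the relevant frontend/backend portion of templates/haproxy.cfg (the rest of the stock haproxy.cfg stays as shipped; the server label and exact Jinja expressions are assumptions consistent with the backendServer host group built earlier):

frontend main
    bind *:8080
    default_backend app

backend app
    balance roundrobin
{% for host in groups['backendServer'] %}
    server web{{ loop.index }} {{ host }}:80 check
{% endfor %}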

The image of the whole process is mentioned below:

Main PlayBook Again

Play Book for Back End Servers

Now we have created roles for frontend and backend configuration. Let’s get to our playbook again. We have installed boto3 in the local system, created a Security Group for back-end webservers, created instances for Web Servers, and saved the IP in the host group.

Now we have to run, on the servers we created for the back end, the role we created for the webserver configuration. So we can add another play in our same ec2.yml file.

- hosts: backendServer
  gather_facts: no
  tasks:
    - name: running role for webserver
      include_role:
        name: lbonAWS

We have the role “lbonAWS”, which we made to install httpd, add the content, and start the web server; that is the role we use here. We have used backendServer as the hosts, so the task runs on the systems in that group. The task itself just applies the role, which is why we used include_role.

You may wonder why we created a role for such a small thing. But trust me, if our task were bigger and we had to do many other configurations and use many variables, the role would be very useful, and it could even be used by other people, who could just change what they want and run the playbook.

For now, our systems are configured with a web server, and the content is also there. We now have to add these web servers to the load balancer, which will balance the load and distribute user traffic across all these systems.

Play Book to create Front End Server

Now, after this, we will again create a Security Group for our frontend load balancer server.

- hosts: localhost
  gather_facts: no
  vars_files:
    - credential.yml
  tasks:
    - name: "Creating security group for LoadBalancer Server(Frontend Server)"
      ec2_group:
        name: frontendSecurity
        description: "Security Group made for frontend webserver"
        region: ap-south-1
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: 8080
            to_port: 8080
            cidr_ip: 0.0.0.0/0
        rules_egress:
          - proto: all
            cidr_ip: 0.0.0.0/0

Let's go back to our code in ec2.yml. Above is the code where I have used the same task and module to create a Security Group named “frontendSecurity”, where the allowed ports are 22 for SSH and 8080, which our haproxy listens on; for egress, we have allowed everything.

Our other task is to create an EC2 instance for our front-end load balancer and attach the above Security Group.

    - name: Query for existing instance
      command: >
        aws ec2 describe-instances --region ap-south-1
        --filters Name=tag:Name,Values=frontendServer
        --query "Reservations[].Instances[].PrivateIpAddress"
        --output text
      register: query_private_ip

    - name: "Launching ec2 instance for frontend loadbalancer"
      ec2:
        key_name: awsKeyApril
        instance_type: t2.micro
        image: ami-0e306788ff2473ccb
        wait: true
        group: frontendSecurity
        count: 1
        vpc_subnet_id: subnet-d5626bbd
        assign_public_ip: yes
        region: ap-south-1
        state: present
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        instance_tags:
          Name: frontendServer
      when: query_private_ip.stdout == ''
      register: frontend

At first, we checked whether the instance already exists, as before, using the tag name and value; if there is no instance with that name, we launch ours. Here count is 1, as we need only 1 instance this time.

Then, we added the IP to a host group with the “add_host” module and named the group frontendServer; this group holds the instance we just launched. We can see the code below.

- name: "Adding new instance to the host group frontendServer"
add_host:
hostname: "{{ item.public_ip }}"
groupname: frontendServer
loop: "{{ frontend.instances }}"
when: query_private_ip.stdout == ''

We can then again pause, or check whether SSH is up on the instance we just created.

- name: "Pausing the Playbook for 10 sec so that EC2 instance can start(NOT RECOMMENDED IF USING BELOW TASK)"
pause:
seconds: 10
when: query_private_ip.stdout == ''
- name: Wait for SSH to come up
wait_for:
host: "{{ item.public_dns_name }}"
port: 22
state: started
loop: "{{ frontend.instances }}"
when: query_private_ip.stdout == ''

Play For Front End Server (LoadBalancer)

Now we have a Security Group and instance created for the front-end load balancer too.

We have created the instance; now we need to configure it, too, so we will use the role we created. The role installs haproxy, pushes our haproxy.cfg template, starts the haproxy service, and even has a handler to restart haproxy only when the configuration actually changes.

- hosts: frontendServer
  gather_facts: no
  tasks:
    - name: "Running role for Loadbalancer"
      include_role:
        name: flbonAWS

So we created another play for the “frontendServer” host group, our recently created EC2 server, and we have used the role flbonAWS, which we created before.

Running PlayBook

Finally, our playbook is ready. We can see the workspace status in tree format.
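Roughly, the workspace tree at this point would be (assuming the roles were created inside the workspace, as roles_path suggests):

playbook-rponAWS/
├── ansible.cfg
├── awsKeyApril.pem
├── credential.yml
├── ec2.yml
├── lbonAWS/
├── flbonAWS/
└── webserverLB/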

We have the content now. Let’s run our playbook. Before running it, we can check in AWS whether any instances are running.

No instances are running. Now let’s run the playbook.

We also have to use --ask-vault-pass, because our credential file is secured with the vault; when we run the playbook, it will ask us for the vault password. We can see the same in the above image.
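So the playbook is launched like this:

ansible-playbook ec2.yml --ask-vault-pass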

When the playbook runs, we can see the output below:

Now, when the playbook runs, it creates 2 Security Groups and 4 Instances.

Let’s check our AWS console (we are checking this while the playbook is still in the middle of its run, so we can see the backend web servers being launched first).

In the above image, we can see 3 backend servers are live and undergoing their status checks. Then in the below image, we can see our frontendServer is also live and running.

And we also have 2 security groups created. We can see that in the below image.

Checking the Web Servers

During the check, we can see that we connected to the front-end system on its IP at port 8080, and it forwarded us to 3 different systems on 3 different refreshes. This means that the load balancer is balancing the load across the 3 backend servers. We can see the IPs because each webserver serves the index.php file with the PHP code that prints its IP.

This proves that our setup is done and working fine.

Conclusion

We have successfully launched 4 systems in AWS: 3 as webservers and 1 as a loadbalancer that connects to the 3 and routes user traffic to them. We have used Ansible Vault to keep our credential file safe, and we have also used roles to manage our code.

We could even have used more variables to make the playbook more dynamic, and used roles for the whole play, like creating the instances. The more dynamic concepts we use, the better the PlayBook becomes.

You can find the whole code in my GitHub, i.e., the below-mentioned link.

I hope I have explained everything, and if you have any doubts or suggestions, you can comment on this blog or contact me on my LinkedIn.

Thank you for staying until the end of the blog, and please do suggest some ideas for improvement. Your suggestions will really motivate me.

