High Availability Architecture with AWS CLI Using Command Prompt

MishanRG
9 min read · Nov 15, 2020


Configuring a web server on an AWS EC2 instance and setting up a Content Delivery Network using CloudFront

Hello everyone, here I am back again with another blog. This blog is about creating a high availability architecture. We will be doing the following tasks:

  • Launch an AWS EC2 instance with an EBS volume.
  • Configure a web server in the same EC2 instance.
  • Make the web server root folder ( /var/www/html ) persistent by putting it on the EBS volume.
  • Store a static object (image) in an S3 bucket.
  • Set up a Content Delivery Network using CloudFront with the S3 bucket as the origin domain.
  • Place the CloudFront URL in the web app code for security and low latency.

We will be doing all the listed tasks in the AWS Cloud, like launching an EC2 instance and attaching an EBS volume. If you are not comfortable with the AWS Cloud, you can check my previous blogs on the cloud and its various services here.

So, let’s get started with our work…

Launch an AWS EC2 instance with an EBS Volume

First, we will create an AWS EC2 instance using the AWS CLI on the Windows Command Prompt. As we can see in the image below, we don't have any running instances yet.

Now we will open our Command Prompt and check whether the AWS CLI is available. If you don't have the AWS CLI, you can download it from here.

After installing, we can use the following command to check whether the AWS CLI has been installed on the system:

aws --version

And then, to launch an instance, we can use the command:

aws ec2 run-instances --image-id </> --count </> --instance-type </> --key-name </> --security-group-ids </>
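For reference, a filled-in version might look like the following; the AMI ID, key pair name, and security group ID here are placeholders for illustration, not the actual values used in this setup:

aws ec2 run-instances --image-id ami-0123456789abcdef0 --count 1 --instance-type t2.micro --key-name mykey --security-group-ids sg-0123456789abcdef0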

Now that we have launched an EC2 instance from the CLI, let's check the AWS web console. We can see that our new instance has been launched.

Now we also have to add an EBS volume. So from the CLI, we create an EBS volume and attach it to our EC2 instance using its ID with the command:

Note: Always create an EBS volume in the same availability zone where the EC2 instance is created, as EBS is a zonal service.

aws ec2 create-volume --availability-zone </> --size </>
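For example, assuming the instance was launched in ap-south-1a and we want a 1 GiB volume (both values are only for illustration):

aws ec2 create-volume --availability-zone ap-south-1a --size 1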

I created two volumes of the same size from the CLI, and we can check them in the console.

Now to attach the volume, we have to use the command:

aws ec2 attach-volume --volume-id </> --instance-id </> --device </>
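A filled-in example, with a hypothetical volume ID, instance ID, and device name, might look like this:

aws ec2 attach-volume --volume-id vol-0abcd1234ef567890 --instance-id i-0abcd1234ef567890 --device /dev/xvdf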

And we attached both volumes; in the instance details we can now see two block devices.

Now we will connect to our instance using the SSH protocol and, with sudo power, run the date command on that instance, as seen in the image below.

ssh -l ec2-user -i <key file> <IP of instance> sudo <any command goes here>
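For example, with a hypothetical public IP and key file, running the date command remotely would look like this:

ssh -l ec2-user -i mykey.pem 13.233.10.25 sudo date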

Configure a Web Server in the same EC2 instance

We will now configure a web server on our EC2 instance. We install the Apache httpd package with the following command and then confirm the installation:

ssh -l ec2-user -i <pem key> <Public IP> sudo yum install httpd

Now that we have installed httpd, when we try to start the httpd service we cannot, due to SELinux security: the Apache web server is not allowed to execute a settings file. So we have to disable SELinux (a temporary solution) using the command shown in the picture:
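The screenshot is not reproduced here, but a common temporary way to do this is to switch SELinux into permissive mode with setenforce (the IP and key file below are placeholders):

ssh -l ec2-user -i mykey.pem 13.233.10.25 sudo setenforce 0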

Now we will start the httpd (web server) service using the command:

ssh -l ec2-user -i <pem key> <IP> sudo systemctl start httpd

We can see in the image below how we started our server:

When we install the web server, it creates a folder called “/var/www/html” (the document root), from which the server serves the HTML files.

Making the Web Server root folder ( /var/www/html ) persistent by adding it to EBS

Now, as per the task requirement, we have to make the document root directory permanent/persistent, and we can do that by storing it on an EBS volume. If our OS crashes, the program will crash, but the data will be safe, and data is the most important thing for a company. EBS is persistent storage, so we store our web server data on EBS.

We have created two EBS volumes, of which we will be using one. Now we have to partition the EBS device we created in order to use it, and we can do that from the CLI.

You can refer to my previous blog for disk partition here.

In the above images, we partitioned and formatted the storage device, and now we have to mount it to use it. We usually create a directory in the root folder and mount the device there, but here we will mount the storage on our server directory, i.e. “/var/www/html”. We can do that with the command:

ssh -l ec2-user -i <pem key> <IP> sudo mount <device name> <mount location, which is /var/www/html here>
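As a rough sketch of the whole sequence, run from the SSH session on the instance (the device name /dev/xvdf is an assumption; use the one shown for your attached volume):

sudo fdisk /dev/xvdf                   # create a new primary partition interactively (n, p, w)
sudo mkfs.ext4 /dev/xvdf1              # format the new partition with the ext4 filesystem
sudo mount /dev/xvdf1 /var/www/html    # mount it on the web server document root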

Our EBS volume has been configured, and it has been used by our web server to store all its files.

We can check whether our web server is running by typing our instance's public IP in the browser; it will take us to the Apache Web Server test page.

As we can see in the above image, our WebServer is configured, and it is live.

Now, we have to write some code for the server to display. We write our code inside the /var/www/html directory: an HTML file, “index.html”, with a name and an image tag inside, along the lines of the sketch below.
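A minimal index.html might look like this; the heading text is up to you, and the image src is left as a placeholder for now (it will be filled with the S3, and later the CloudFront, URL):

<html>
  <body>
    <h1>MishanRG</h1>
    <!-- placeholder src; it will be replaced with the object URL -->
    <img src="URL HERE" alt="image">
  </body>
</html>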

And we save the file using :wq in command mode.

Storing static object(image) in the S3 bucket

As per the task requirement, we have to store the image we will show on the website in an S3 bucket and access it from there. Since S3 has a global namespace, we can create a bucket in any region and use it from anywhere.

As we can see from the web console, there is no bucket at this time. Now, using the AWS CLI, we will create a bucket and upload to it an image that is located on our local system.

We can use the command:

aws s3api create-bucket --bucket <bucket name> --region <any region> --create-bucket-configuration LocationConstraint=<location>
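A concrete example, using a hypothetical bucket name and the Mumbai region used later in this blog, would be:

aws s3api create-bucket --bucket mishanrg-webserver-bucket --region ap-south-1 --create-bucket-configuration LocationConstraint=ap-south-1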

Now, in the below image, we can see that our bucket has been created with the given name.

We will now add our image from local storage to the bucket using the command:

aws s3 sync "<location of folder>" s3://<bucketname>

The command prints each transfer as it runs, e.g. upload: <folder/filename> to s3://<bucketname>/<filename>
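For example, syncing a local folder (the folder path and bucket name here are placeholders; assume the folder contains a single image, myimage.jpg):

aws s3 sync "C:\Users\mishan\Pictures\web" s3://mishanrg-webserver-bucket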

When we check the web console, we can see that the image has been uploaded to our bucket, as seen in the below image.

By default, the object is private. To make it public so it can be accessed by users through our web server, we can use the command:

aws s3api put-object-acl --bucket <bucketname> --key <filename> --acl public-read
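With the hypothetical names from above, that would be:

aws s3api put-object-acl --bucket mishanrg-webserver-bucket --key myimage.jpg --acl public-read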

Now that the image is public, when we click on the image file we can see its URL. We copy that URL and paste it into the src attribute of the image tag ( <img src="URL HERE"> ) inside our code.

When we access the web server now, we can see the image, as shown in the image below.

We have successfully launched our web server, which serves the image object from the S3 bucket.

Setting up a Content Delivery Network using CloudFront with the S3 bucket as the origin domain

We have launched our S3 bucket in ap-south-1, which is Mumbai, India. Now, when a person from the USA tries to access our image, it will take a bit of time for them to load it.

Note: Even a delay of a second or a millisecond should be taken care of at the industry level, as a company won't compromise on its clients' user experience.

So we use a distributed network service called a Content Delivery Network (CDN), which helps us deliver our content to any location so that it is easy and fast for the client to access. In AWS, this is provided by a service called CloudFront.

CloudFront in AWS helps us set up a CDN: a globally distributed network of proxy servers that cache content, such as web videos or other bulky media, closer to consumers, thus improving access speed for downloading the content. When our file is first accessed from any location, the CloudFront service makes a copy of it and stores it as a cache in a nearby edge location. When a user from that area accesses the file again, they don't need to go all the way to Mumbai to get it; they can get it from the edge location, which gives low latency and a good user experience.

We can set up a CDN using the command:

aws cloudfront create-distribution --origin-domain-name <S3 bucket domain name>
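With the hypothetical bucket from earlier, the command would look roughly like this (the origin domain is the bucket's S3 domain name):

aws cloudfront create-distribution --origin-domain-name mishanrg-webserver-bucket.s3.amazonaws.com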

As seen in the below image, our CloudFront distribution has been created; we can see the distribution's domain name there, and the origin is the S3 bucket (the link is there).

As per the task requirement, we have set up a Content Delivery Network using CloudFront with the S3 bucket object as the origin domain.

Placing the CloudFront URL in the web app code for security and low latency

In this task, we will use the CloudFront URL as our web server's image source to get low latency, and because we can restrict access to the CloudFront distribution to only the countries and areas we need, it also becomes more secure. As seen in the image below, I have changed the source of our image in the HTML code, along the lines of the snippet that follows.
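With a hypothetical distribution domain, the image tag in index.html would change to something like:

<img src="https://d1234abcd5678e.cloudfront.net/myimage.jpg" alt="image">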

And now, we can access the website using the IP as before.

We will see the same website as before, but when the website is accessed from a new place for the first time, it will take time to load; after that, anyone who accesses it from that area will get the image with low latency.

CONCLUSION

We have completed all the tasks we planned at the start: configuring a web server, setting the web server document root folder on EBS, getting the image object from the S3 bucket, and using a CDN to deliver the object file.

I hope I have explained everything in detail, bit by bit. If you have any doubts or suggestions, you can comment on this blog or contact me on my LinkedIn.

Thank you for staying till the end of the blog, and please do suggest some ideas for improvement. Your suggestions will really motivate me.

