Hey guys, I hope you are all doing well.
This blog is about how we can customize the networking in our system to suit our requirements. This is Part 2; you can check out Part 1 in my previous blog.
Here we will have three systems: System A, B, and C. We will create a network rule so that System A can ping B and C (and vice versa), but Systems B and C cannot ping each other. Our goal is to understand how two systems (B and C) can each ping a third system (A) and vice versa, yet cannot ping each other. …
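One way to sketch this setup is with Linux route commands. This is a minimal sketch only: the addresses (A = 192.168.1.1, B = 192.168.1.2, C = 192.168.1.3, all on 192.168.1.0/24) and the interface name enp0s3 are assumptions for illustration, and the commands need root privileges.

```shell
# On System B: drop the broad connected-network route and keep only a
# host route to A, so B has no route to C.
ip route del 192.168.1.0/24 dev enp0s3
ip route add 192.168.1.1/32 dev enp0s3

# On System C: the mirror image, so C can also reach only A.
ip route del 192.168.1.0/24 dev enp0s3
ip route add 192.168.1.1/32 dev enp0s3

# On System A: the default connected-network route already covers B and C,
# so A can ping both, and both can send replies back to A.
ping -c 2 192.168.1.2   # from A: succeeds
ping -c 2 192.168.1.3   # from B: fails, no route to C
```

Because a ping reply needs a working route in both directions, removing B's and C's routes to each other (while keeping their routes to A) gives exactly the A↔B, A↔C, B↮C behavior described above.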
Hello Everyone, I hope you are all doing well.
Before starting the blog, a big thanks to Linux World India for arranging such a great session with some great personalities.
Yesterday (Dec 28, 2020), I got a chance to attend a webinar on an Automation with Ansible industry use-case demo organized by Linux World India. The webinar was delivered by none other than Red Hat India Principal Instructor Sreejith Anujan and Red Hat Director, APAC, Mr. Arun Eapen. As a technical student, I was happy and cheerful to attend a session with two of the IT industry's renowned personalities. …
Greetings Everyone!!! Merry Christmas and a Happy New Year!! I hope you are all doing well.
In this blog, I will explain what Kubernetes is, why we use it, and how the industry has benefited from it. So let’s get started…
Many of us never imagined, and many are still unaware, that we can launch an OS with a single command in just a few seconds. Yes, this is possible thanks to container technology.
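To ground that claim, here is a minimal sketch using Docker; the image name and tag (centos:7) and the container name are just example choices.

```shell
# One command pulls the image (if needed), starts an isolated CentOS
# userspace, and drops us into its shell, typically in seconds.
docker run -it --name myos centos:7 /bin/bash

# Inside the container it looks like a fresh OS install:
#   cat /etc/os-release
```

The speed comes from the fact that a container shares the host's kernel and only ships the userspace, unlike a virtual machine that must boot a full OS.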
Greetings Everyone!!! I hope you are all having a good day.
“The world is one Big Data Problem” ~ Andrew McAfee
Here in this blog, we will discuss how a Hadoop cluster is created, how we transfer data into the cluster, and how we retrieve it. Lastly, we will see what happens when a Data Node (a system holding data) crashes while we are retrieving data. So, let’s get started…
For the Hadoop cluster creation part, please refer to my previous blog, where I have explained how to create a Hadoop cluster consisting of a Name Node, Data Nodes, and a Client Node.
Our second goal is to understand how the Hadoop file system works. When the client uploads data, it contacts the Name Node (NN). The NN holds information about the Data Nodes and their availability and guides the client, which then uploads the data to the Data Nodes. In my other blog, I have explained in detail how the data flows and busted some myths about data transfer, so please do check that blog for a better understanding of this one. …
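The flow just described can be observed with standard HDFS commands. A minimal sketch, run from the client node; the file names and HDFS paths are examples.

```shell
# Upload a file from the client. Behind the scenes, the client first asks
# the Name Node which Data Nodes to write to, then streams the blocks
# directly to those Data Nodes.
hdfs dfs -put data.txt /user/demo/data.txt

# Ask the Name Node which Data Nodes actually hold each block of the file.
hdfs fsck /user/demo/data.txt -files -blocks -locations
```

The `fsck` output lists each block together with the Data Node addresses storing its replicas, confirming that the Name Node only holds metadata while the Data Nodes hold the data itself.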
Hello Everyone, I hope you all are doing fine.
In this blog, we will learn how data gets uploaded to the Data Nodes in a Hadoop cluster. Many posts, blogs, and articles state that Hadoop uploads data in parallel, while others state that it uploads serially, so let's find out, with proof, how Hadoop actually uploads data to the Data Nodes. So let’s get started…
NOTE: Please refer to the above link for a clearer understanding of Hadoop clusters and their formation.
First, we created a Hadoop cluster with 1 Name Node, 4 Data Nodes, and 1 Client. …
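One way to gather the promised proof is to watch the network traffic on the client while an upload runs. This is a sketch under assumptions: the Data Node transfer port is the Hadoop 2.x default (50010), the client's interface is eth0, and the file name is an example.

```shell
# In one terminal on the client, capture traffic to the Data Nodes'
# block-transfer port.
tcpdump -i eth0 port 50010

# In another terminal, upload a file large enough to span several blocks.
hdfs dfs -put big.dat /big.dat
```

If the capture shows the client streaming to one Data Node at a time for each block (with replication then flowing from Data Node to Data Node), the upload is serial from the client's point of view; simultaneous streams to several Data Nodes would indicate parallelism.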
How to customize networking so that we can ping Facebook but not Google?
Welcome, Everyone!!! I hope you are having a great day.
This blog is about how we can customize the rules in our network routing table so that we connect only to the network IP ranges we want. So let's get started…
First, let's understand something about the vocabulary we will be using.
IP address: An IP address is a number that uniquely identifies a device in the world. Mobile phones, computers, routers: every device has a unique IP address. An IPv4 address is made up of four 8-bit numbers called octets. …
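Since an IPv4 address is just four octets separated by dots, we can pull them apart right in the shell; the address below is an arbitrary example.

```shell
# An arbitrary example address.
ip_addr="192.168.43.17"
# Split the dotted-quad into its four 8-bit octets (each between 0 and 255).
IFS=. read -r o1 o2 o3 o4 <<< "$ip_addr"
echo "octets: $o1 $o2 $o3 $o4"
# → octets: 192 168 43 17
```

The first octets identify the network part of the address and the rest the host part, which is exactly what routing table rules match on.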
Hello Geeks. I guess your curiosity about Ansible and automation has brought you here. So, let's get started with some interesting facts about what Ansible is, what automation is, and their use cases.
Nowadays, many big companies in the market carry out lots and lots of processing, deployment, and configuration within their systems and infrastructure, and the requirements and configurations need to change continuously. Making these changes manually is a very lengthy process: it takes a lot of time and human resources, and clients do not accept such delays. As a solution, we have “Automation.”
Automation is a process or application created to minimize the effort of deploying configurations and making any required changes in a system. It saves time and human resources, and it is more reliable, efficient, and faster than a manual process. Various automation tools, such as Jenkins, Nagios, Docker, Kubernetes, etc., are available in the industry. Each tool has its own features, and each can be used in its own field. …
Hello Geeks. Let’s learn something new today.
Today we will learn how to create a Logical Volume on a Data Node's storage and contribute it to a Hadoop cluster, making the Data Node elastic in nature.
Some basic terms we need to understand:
Logical Volume Management: LVM is a way of managing a storage system that provides a method of allocating storage devices that is more flexible than conventional partitioning schemes. It is elastic by nature, which means we can increase and decrease the storage size as required without losing data.
Elasticity: Elasticity refers to a storage system's ability to adapt to changing workloads by allocating and deallocating resources as each application requires. …
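The elasticity just described is exactly what the LVM command set provides. A minimal sketch; the disk name (/dev/sdb), volume group and volume names, sizes, and mount point are assumptions for illustration, and the commands need root privileges.

```shell
# Turn a raw disk into a physical volume and pool it into a volume group.
pvcreate /dev/sdb
vgcreate hadoop_vg /dev/sdb

# Carve out a logical volume, format it, and mount it as Data Node storage.
lvcreate --size 10G --name dn_lv hadoop_vg
mkfs.ext4 /dev/hadoop_vg/dn_lv
mount /dev/hadoop_vg/dn_lv /dn

# Later, grow the volume online: no unmounting, no data loss.
lvextend --size +5G /dev/hadoop_vg/dn_lv
resize2fs /dev/hadoop_vg/dn_lv
```

Pointing the Data Node's storage directory at such a mount means the capacity the Data Node contributes to the cluster can be resized on demand.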
Hello Geeks, I hope you are here to learn about web servers and Docker, so let's get started with the blog…
In this blog, we will explain how to launch a container inside a cloud instance and configure a web server inside that same container.
The task we are going to complete:
Here is basic information about the technology we will be using in the blog.
Container Technology: Container technology is a method of packaging an application so it can run with isolated dependencies. Containers have fundamentally altered how software is developed today because they compartmentalize a computer system. …
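As a sketch of the web-server-in-a-container idea, a server and its dependencies can run as one isolated unit; the httpd image and the port numbers below are example choices.

```shell
# Start the Apache httpd image in the background, mapping the host's
# port 8080 to the container's port 80.
docker run -d --name webserver -p 8080:80 httpd

# The server and all its dependencies live inside the container;
# from the host (or cloud instance), it is reachable via the mapped port.
curl http://localhost:8080/
```

The same commands work unchanged inside a cloud instance, which is what makes the container approach portable across environments.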
Configuring a Web Server in an AWS EC2 Instance and Setting Up a Content Delivery Network Using CloudFront
Hello Everyone, here I am back again with another blog. This blog is about creating a high-availability architecture; we will be doing the tasks listed below: