How Hadoop Works and Handles Failures

How Hadoop works and how it handles data node failure during file transfer.

Greetings everyone! I hope you are all having a good day.

“The world is one Big Data Problem” ~ Andrew McAfee

In this blog, we will discuss how a Hadoop cluster is created, how we transfer data into the cluster and retrieve it, and finally what happens when a data node (the system holding the data) crashes while we are retrieving the data. So, let’s get started…

Reading File and Crashing Data Node

Now we have a Hadoop cluster with 1 Name Node, 4 Data Nodes, and 1 Client. As per our previous blog, we uploaded a file from the Client into the cluster, and we found that the file gets written to the Data Nodes serially, through a replication pipeline (the serialism concept): the Client sends each block to the first Data Node, which forwards it to the next.
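The serial (pipelined) replication above can be sketched as a toy simulation. This is an illustrative model, not real HDFS code; the `replicate_block` function and the dictionary-based data nodes are invented for the example:

```python
# Toy model of HDFS's pipelined (serial) block replication.
# The Client sends a block to the first Data Node only; that node
# stores it and forwards it to the next node in the pipeline.

def replicate_block(block: bytes, pipeline: list[dict]) -> None:
    """Push `block` through the data-node pipeline serially."""
    for node in pipeline:            # DN1 -> DN2 -> DN3 ...
        node["blocks"].append(block) # store locally, then pass downstream

data_nodes = [{"name": f"dn{i}", "blocks": []} for i in range(1, 5)]
replicate_block(b"testhadoop.txt contents", data_nodes)

for node in data_nodes:
    print(node["name"], "stores", len(node["blocks"]), "block(s)")
```

Each data node ends up with one copy of the block, even though the Client only talked to the first node in the chain.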

# hadoop fs -cat /<filename>

Video Link:

System 1 (Client): read the file from the cluster.

# hadoop fs -cat /testhadoop.txt

System 2 (Data Node): stop the data node daemon mid-read to simulate a crash.

# hadoop-daemon.sh stop datanode


In the end, we saw that while the Client is reading data from a Data Node, if that Data Node crashes, the Client connects to another Data Node holding a replica of the same block and continues collecting the data. We also saw that when the Data Node crashed, the Client went back to the Name Node to collect the location of another Data Node holding the data.
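That read-failover behavior can be sketched as a small simulation. Again, this is a toy model, not the real HDFS client: the `DataNodeDown` exception, the node dictionaries, and `read_with_failover` are names invented for the example; in real HDFS the Name Node hands the client the list of replica locations and the client retries the next one automatically.

```python
# Toy model of a client handling a Data Node crash during a read.

class DataNodeDown(Exception):
    """Raised when a data node does not respond."""

def read_block(node: dict) -> bytes:
    if not node["alive"]:
        raise DataNodeDown(node["name"])
    return node["block"]

def read_with_failover(replica_locations: list[dict]) -> bytes:
    # The name node gives the client every data node holding a replica;
    # the client tries them in order until one responds.
    for node in replica_locations:
        try:
            return read_block(node)
        except DataNodeDown:
            continue                 # crashed node: try the next replica
    raise RuntimeError("all replicas unreachable")

replicas = [
    {"name": "dn1", "alive": False, "block": b"testhadoop.txt data"},  # crashed
    {"name": "dn2", "alive": True,  "block": b"testhadoop.txt data"},
]
print(read_with_failover(replicas))  # dn1 fails, so dn2 serves the block
```

The read succeeds even though the first replica is down, which is exactly what we observed when we stopped the data node daemon mid-read.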

