Datanode does not start correctly

You can try the following method:

Copy the datanode clusterID from the error message, for example CID-8bf63244-0510-4db6-a949-8f74b50f2be9,

and run the following command from the HADOOP_HOME/bin directory:

./hdfs namenode -format -clusterId CID-8bf63244-0510-4db6-a949-8f74b50f2be9

This formats the namenode with the datanode's clusterID, so the two IDs match again.
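
If you are not sure what the datanode clusterID is, it is recorded in the VERSION file under the DataNode's data directory. A minimal way to read it, assuming the data directory is /home/hadoop/dfs/data (adjust to whatever dfs.datanode.data.dir points to on your machine):

# print the clusterID the DataNode currently holds
grep clusterID /home/hadoop/dfs/data/current/VERSION
# example output: clusterID=CID-8bf63244-0510-4db6-a949-8f74b50f2be9

Use that value as the -clusterId argument in the format command above.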


Whenever you get the error below while trying to start a DataNode (DN) on a slave machine:

java.io.IOException: Incompatible clusterIDs in /home/hadoop/dfs/data: namenode clusterID= ****; datanode clusterID = ****

This happens because, after you set up your cluster, you reformatted your NameNode (NN) for whatever reason. The DNs on the slaves still reference the old NN's clusterID.

To resolve this, simply delete and recreate the data folder on that machine in the local Linux filesystem, namely /home/hadoop/dfs/data.

Restarting the DN daemon on that machine will recreate the data/ folder's contents and resolve the problem.
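
As a rough sketch of that fix, assuming the data directory is /home/hadoop/dfs/data and HADOOP_HOME points at your Hadoop installation, you would run something like this on the affected slave:

# stop the DataNode, wipe its data directory, and start it again
# so it picks up the current NameNode clusterID
$HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode
rm -rf /home/hadoop/dfs/data
mkdir -p /home/hadoop/dfs/data
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode

Keep in mind this deletes every block replica stored on that DataNode; HDFS will re-replicate them from other nodes as long as the replication factor allows.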


You must do as follows (a quick verification check is sketched after the list):

  • bin/stop-all.sh
  • rm -Rf /home/prassanna/usr/local/hadoop/yarn_data/hdfs/*
  • bin/hadoop namenode -format
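
Note that bin/hadoop namenode -format wipes the NameNode metadata, so all data in HDFS is lost. After restarting the cluster you can verify that both daemons agree on the clusterID; a rough check, assuming the namenode and datanode directories live under the yarn_data/hdfs path above (the exact subdirectory names depend on your dfs.namenode.name.dir and dfs.datanode.data.dir settings):

# both commands should print the same clusterID
grep clusterID /home/prassanna/usr/local/hadoop/yarn_data/hdfs/namenode/current/VERSION
grep clusterID /home/prassanna/usr/local/hadoop/yarn_data/hdfs/datanode/current/VERSION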

I had the same problem until I found the answer on this website.

Tags: Hadoop, Hadoop2