Found this in your DataNode’s log file?
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: org.apache.hadoop.util.DiskChecker$DiskErrorException: Invalid value for validVolsRequired : 0 , Current valid volumes: 1
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.<init>(FSDataset.java:983)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:418)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
This means dfs.datanode.failed.volumes.tolerated is set to 1 (or more) in the DataNode’s hdfs-site.xml, while the node has only one data volume configured. The DataNode requires at least (configured volumes minus tolerated failures) valid volumes, which here works out to 0, and it refuses to start with that value. Delete the option (the default is 0), or set it to a value smaller than the number of configured data directories, then restart the DataNode.
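For reference, this is roughly what the offending property looks like in hdfs-site.xml (a minimal sketch; your file may carry a different value):

<!-- hdfs-site.xml: either remove this <property> block entirely (it defaults to 0), -->
<!-- or keep it only if its value is smaller than the number of data directories -->
<!-- configured for the DataNode. -->
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>0</value>
</property>

Note that on a DataNode with several disks the option is genuinely useful, since it lets the daemon keep running when one disk dies; it only breaks startup when it is greater than or equal to the number of configured volumes.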