hadoop 2.8.3 UnregisteredNodeException
阿新 • Posted: 2019-02-17
Background: The Hadoop cluster was already up and running. I wanted to test adding a node dynamically, so I cloned an existing node's virtual machine to serve as the new node.
Symptom: Running jps on the new node showed only NodeManager; no DataNode process was present.
Investigation: Restarting the whole cluster produced the same symptom, which ruled out the dynamic-add procedure itself. Opening the master log (logs/hadoop-username-namenode-hadoop-master.log) revealed the following error:
org.apache.hadoop.hdfs.protocol.UnregisteredNodeException: Data node DatanodeRegistration(192.168.128.134:50010, datanodeUuid=8ed33ae6-bd0b-4031-9e1c-6390d2f431a0, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=CID-8beeaa57-7bdc-445f-973f-d9b23199a333;nsid=1129364615;c=1522465770780) is attempting to report storage ID 8ed33ae6-bd0b-4031-9e1c-6390d2f431a0. Node 192.168.128.132:50010 is expected to serve this storage.
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanode(DatanodeManager.java:509)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1967)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$1.call(NameNodeRpcServer.java:1434)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer$1.call(NameNodeRpcServer.java:1431)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:4020)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:3999)
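The NameNode rejects the new node's block report because both machines present the same datanodeUuid, which the DataNode persists in the VERSION file under its data directory. You can confirm the duplication before touching anything, a minimal sketch assuming dfs.datanode.data.dir is /opt/hadoop/tmp/dfs/data (adjust to your own hdfs-site.xml):

```shell
#!/bin/sh
# Assumed data dir; check dfs.datanode.data.dir in hdfs-site.xml for yours.
VERSION_FILE=/opt/hadoop/tmp/dfs/data/current/VERSION

# The DataNode's identity is persisted here as a datanodeUuid=... line.
grep '^datanodeUuid=' "$VERSION_FILE"
```

Run this on both the original node and the clone; if the two lines match, the clone is carrying the original node's identity.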
The log shows that the two machines are using the same datanodeUuid, because cloning the VM also copied the original node's DataNode storage. Fix: stop the Hadoop cluster, delete the DataNode data directory under HDFS on the newly added node, then restart the cluster. The cloned node's DataNode will re-initialize its storage with a fresh UUID and register successfully.
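The fix can be sketched as the script below. The paths are assumptions (HADOOP_HOME and the data dir from hdfs-site.xml vary by installation), so treat this as an outline rather than a copy-paste recipe:

```shell
#!/bin/sh
# 1. Stop the cluster from the master node (standard Hadoop 2.x scripts).
"$HADOOP_HOME/sbin/stop-yarn.sh"
"$HADOOP_HOME/sbin/stop-dfs.sh"

# 2. On the cloned node ONLY, remove the copied DataNode storage so it is
#    re-initialized with a new datanodeUuid on the next start.
#    Assumed dfs.datanode.data.dir -- adjust to your hdfs-site.xml.
rm -rf /opt/hadoop/tmp/dfs/data

# 3. Restart the cluster; the new node's DataNode registers cleanly.
"$HADOOP_HOME/sbin/start-dfs.sh"
"$HADOOP_HOME/sbin/start-yarn.sh"
```

Only delete the data directory on the new node; removing it on an existing node would discard that node's block replicas.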