service to hadoop01/hadoop01:8020 Datanode denied communication with namenode because the host is not in the include-list
阿新 • Posted: 2021-06-28
Error reported when adding a new DataNode to Hadoop:
2021-06-28 16:02:12,489 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-433041383-192.168.10.249-1494331993586 (Datanode Uuid 1e3d6ada-e61d-46fd-840b-1de724dd4aa0) service to yz-tpl-hadoop-10-251/192.168.10.251:8020 beginning handshake with NN
2021-06-28 16:02:12,492 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-433041383-192.168.10.249-1494331993586 (Datanode Uuid 1e3d6ada-e61d-46fd-840b-1de724dd4aa0) service to yz-tpl-hadoop-10-251/192.168.10.251:8020 Datanode denied communication with namenode because the host is not in the include-list: DatanodeRegistration(192.168.10.252:50010, datanodeUuid=1e3d6ada-e61d-46fd-840b-1de724dd4aa0, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-2d472729-3f74-4306-91f4-359b84bf2e26;nsid=652116751;c=0)
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:876)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4529)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1279)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:95)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28539)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
The error says initialization failed because the new node is not in the include-list file. Check the hdfs-site.xml configuration file to find which include file is specified, then verify whether the new node appears in that file. If it is missing, add it, and then refresh the DataNode list on the namenode;
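The include file is configured in hdfs-site.xml via the dfs.hosts property; the path below is only an example, your cluster may use a different file:

```xml
<!-- hdfs-site.xml: dfs.hosts points at the datanode include list
     (example path; check your own cluster's configuration) -->
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.include</value>
</property>
```

Each allowed DataNode hostname (or IP) goes on its own line in that file on the namenode. If dfs.hosts is not set at all, the namenode accepts any datanode, so this error only appears when an include list is in use.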
Command: hdfs dfsadmin -fs hdfs://xx.xxx.x.xxx:8020 -refreshNodes // where xx.xxx.x.xxx is the IP of the active/standby namenode (run it against whichever namenode reported the error, or whichever one has the stale configuration);
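Before running refreshNodes, it can help to confirm mechanically that the new host really is in the include file referenced by hdfs-site.xml. A minimal sketch in Python (file paths and hostnames here are hypothetical):

```python
# Sketch: read dfs.hosts from hdfs-site.xml and check that a given
# host is listed in the include file, one hostname/IP per line.
import xml.etree.ElementTree as ET


def include_file_path(hdfs_site_xml):
    """Return the value of the dfs.hosts property, or None if unset."""
    root = ET.parse(hdfs_site_xml).getroot()
    for prop in root.iter("property"):
        if prop.findtext("name") == "dfs.hosts":
            return prop.findtext("value")
    return None


def host_in_include_list(include_file, host):
    """True if host appears as a non-empty line in the include file."""
    with open(include_file) as f:
        return host in {line.strip() for line in f if line.strip()}
```

If host_in_include_list returns False for the failing datanode (192.168.10.252 in the log above), append the host to the file and only then run the refreshNodes command on the namenode.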
Reference:
https://blog.csdn.net/qq_34477362/article/details/84584381