Diagnosing an unreachable Hadoop port 9000
After finishing the Hadoop setup, port 9000 could not be reached: even a telnet to port 9000 from the namenode itself failed, so the datanodes could not register with the namenode.
Checking the port with netstat -ano showed that port 9000 was bound in IPv6 format. Disabling IPv6 and rebooting the machine fixed it.
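The check and fix above can be sketched as shell commands. This is a sketch, not the exact commands from the note: the IP address comes from the log below, the sysctl calls are one common way to disable IPv6 on Linux (the note only says "disable IPv6 and reboot"), and the preferIPv4Stack line is an alternative JVM-level workaround often used instead of disabling IPv6 system-wide.

```shell
# Inspect how port 9000 is bound; a "tcp6" entry with ":::9000"
# indicates the namenode is listening on IPv6 only.
netstat -ano | grep 9000

# Verify reachability from the namenode itself (this failed originally).
telnet 192.168.70.115 9000

# One way to disable IPv6 on Linux (assumption: sysctl is available;
# a reboot or persisting these in /etc/sysctl.conf makes it stick).
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Alternative: keep IPv6 but make the JVM prefer IPv4 sockets,
# e.g. by adding this to hadoop-env.sh (hypothetical placement).
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
```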
Below is the log from a datanode that could not reach the namenode:
2013-08-28 17:51:31,202 INFO org.apache.hadoop.ipc.RPC: Server at hadoop1/192.168.70.115:9000 not available yet, Zzzzz...
2013-08-28 17:51:33,224 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/192.168.70.115:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-08-28 17:51:34,241 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/192.168.70.115:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-08-28 17:51:35,253 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/192.168.70.115:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-08-28 17:51:36,262 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/192.168.70.115:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-08-28 17:51:37,278 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/192.168.70.115:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-08-28 17:51:38,294 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/192.168.70.115:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-08-28 17:51:59,483 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/192.168.70.115:9000. Already tried 0 time(s); maxRetries=45
2013-08-28 17:52:12,533 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/192.168.70.115:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-08-28 17:52:16,566 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/192.168.70.115:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-08-28 17:52:20,603 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/192.168.70.115:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-08-28 17:52:24,636 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop1/192.168.70.115:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-08-28 17:52:27,664 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to hadoop1/192.168.70.115:9000 failed on local exception: java.net.NoRouteToHostException: No route to host
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150)
at org.apache.hadoop.ipc.Client.call(Client.java:1118)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at $Proxy5.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:414)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:392)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:374)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:453)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:335)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:300)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)