Hadoop environment reports "failed on connection exception"
Posted by 阿新 on 2019-02-10
ls: Call From slaver1/127.0.0.1 to master:9000 failed on connection exception: java.net.ConnectException: Connection refused;
For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
While setting up the distributed environment, I found that running the following command on a DataNode:
hdfs dfs -ls /
produced the error:
ls: Call From slaver1/127.0.0.1 to master:9000 failed on connection exception: java.net.ConnectException: Connection refused;
For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
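Before touching any Hadoop configuration, it helps to confirm whether the NameNode port is reachable at all from the DataNode. A minimal diagnostic sketch, assuming the hostname master and port 9000 from the error message, and using bash's /dev/tcp pseudo-device (a bashism, not plain POSIX sh):

```shell
#!/usr/bin/env bash
# Succeeds if a TCP connection to host:port can be opened.
# Relies on bash's /dev/tcp redirection (assumption: bash is available).
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# On the real cluster you would run:  check_port master 9000
# Demo against a local port that is almost certainly closed:
if check_port 127.0.0.1 1; then
  echo "port open"
else
  echo "connection refused or unreachable"
fi
```

If the connection is refused even though the NameNode process is running, the daemon is likely bound to a loopback address rather than the real interface, which is exactly what a bad /etc/hosts entry causes.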
The last line of the http://wiki.apache.org/hadoop/ConnectionRefused page says:
None of these are Hadoop problems, they are host, network and firewall configuration issues. As it is your cluster, only you can find out and track down the problem.
In other words, if all of your configuration is correct, the problem can only be the host, the network, or the firewall. In the same help document I found this hint: Check that there isn't an entry for your hostname mapped to 127.0.0.1 or 127.0.1.1 in /etc/hosts (Ubuntu is notorious for this)
So I opened the hosts file on master to check whether the hostname was mapped to 127.0.0.1.
Sure enough, such an entry was there, so I commented that line out.
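The edit above can be sketched on a throwaway copy of the file. The sample entries and the /tmp path are assumptions for illustration; on the real cluster you would edit /etc/hosts itself (as root) and keep only the real-IP mapping for master:

```shell
# Sample hosts file reproducing the problem (contents are an assumption):
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1    localhost
127.0.1.1    master
192.168.1.10 master
EOF

# Comment out the loopback mapping that Ubuntu tends to add for the
# hostname, so "master" resolves only to the real cluster IP:
sed -i 's/^127\.0\.1\.1[[:space:]]\{1,\}master/# &/' /tmp/hosts.sample

cat /tmp/hosts.sample
```

After the edit, only the 192.168.1.10 line still resolves master, so the NameNode binds to the real interface instead of loopback.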
Then I restarted the HDFS services and ran the command again: this time it succeeded. Here is the master node's info (screenshot):
And here is the slaver1 node's info (screenshot):
Finally, I leave you with the last line of that document: As it is your cluster, only you can find out and track down the problem.
Good luck, everyone!