
Hadoop 2.7.6 Pseudo-Distributed Mode Configuration

1. The goal of this article is to set up Hadoop 2.7.6 in pseudo-distributed mode on a single Linux machine.

2. In the hadoop-2.7.6/etc/hadoop directory, modify the following configuration files (if a file does not exist, create it yourself):

2.1. core-site.xml:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
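
If the directory named in hadoop.tmp.dir does not exist yet, it can be created up front; the path below simply mirrors the value above, so adjust it to your own layout:

mkdir -p /home/hadoop/hadoop/tmp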

2.2. hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
  </property>
</configuration>
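
Likewise, the NameNode and DataNode directories referenced above can be created ahead of time (again, adjust the paths if your layout differs):

mkdir -p /home/hadoop/hadoopinfra/hdfs/namenode
mkdir -p /home/hadoop/hadoopinfra/hdfs/datanode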

2.3. mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
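
Note that the Hadoop 2.7.6 distribution ships only mapred-site.xml.template in etc/hadoop, so the file is usually created by copying the template first:

cd hadoop-2.7.6/etc/hadoop
cp mapred-site.xml.template mapred-site.xml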

2.4. yarn-site.xml:
<configuration>
<!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
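
The mapreduce_shuffle value registers the shuffle auxiliary service that MapReduce jobs rely on when running on YARN. Some setups also name the handler class explicitly; the extra property below is optional and only a suggestion, to be placed inside the same <configuration> block:

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>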

3. Set up passwordless SSH access to localhost:
Generate a key pair with an empty passphrase:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
Append the public key to authorized_keys to enable passwordless login:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
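
If ssh still asks for a password afterwards, check the permissions on the key file; it is also worth confirming the passwordless login works before starting Hadoop:

chmod 0600 ~/.ssh/authorized_keys
ssh localhost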

4. Start the Hadoop processes:
1. Format the Hadoop file system:
hadoop namenode -format

2. Start the Hadoop daemons:

cd hadoop/sbin

./start-all.sh
(If you hit an interactive prompt during startup, answer yes.)
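
Once the scripts finish, jps should list the NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager processes; assuming the default Hadoop 2.x ports, the NameNode web UI is at http://localhost:50070 and the ResourceManager UI at http://localhost:8088:

jps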

5. Problems encountered when running the hadoop fs -copyFromLocal command:

Problem 1:

$ hadoop fs -copyFromLocal ./test.txt hdfs://localhost/test.txt
18/05/14 14:31:36 WARN hdfs.DFSClient: DataStreamer Exception

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /test.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.

......

Solution (a common cause is that the DataNode's stored clusterID no longer matches the NameNode after an earlier format, so its storage directory has to be cleared before formatting again):
1. rm -rf ~/hadoopinfra/hdfs/datanode/current
2. Reformat: hadoop namenode -format
3. Check with jps:
9195 DataNode
20514 Jps
10261 ResourceManager
10505 NodeManager
9693 SecondaryNameNode
8848 NameNode

The DataNode now shows up, and the problem is solved.
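
With the DataNode back, the original copy should succeed; re-running the command and listing the HDFS root is a quick check (the file name matches the earlier example):

hadoop fs -copyFromLocal ./test.txt hdfs://localhost/test.txt
hadoop fs -ls /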

Problem 2:

Starting with sh hadoop-2.7.6/sbin/start-all.sh reports an error:

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh

hadoop/hadoop-2.7.6/sbin/start-all.sh: 111: /home/yang/hadoop/hadoop-2.7.6/sbin/../libexec/hadoop-config.sh: Syntax error: word unexpected (expecting ")")

Solution:

Do not run the startup script with sh. On many distributions /bin/sh points to dash, which cannot parse the bash-specific syntax in hadoop-config.sh, hence the "word unexpected (expecting ')')" error. Instead, change into the sbin directory where the script lives and run it directly:

./start-all.sh
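
Since the script itself warns that start-all.sh is deprecated, an equivalent option from the same sbin directory is to start HDFS and YARN separately:

./start-dfs.sh
./start-yarn.sh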