Setting up Hadoop with YARN High Availability (HA)
阿新 • Published: 2018-12-23
1. Edit the configuration files. The specific changes are:
mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
Edit yarn-site.xml:
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>node13</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>node14</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node12:2181,node13:2181,node14:2181</value>
  </property>
</configuration>
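Optionally, if you also want running applications to survive a ResourceManager failover, RM state recovery can be enabled as well. This is not part of the original setup above; it is a hedged, optional addition using the standard YARN properties, storing RM state in the same ZooKeeper ensemble:

```xml
<!-- Optional (assumed addition, not in the original walkthrough):
     persist RM state in ZooKeeper so jobs survive an RM failover. -->
<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
```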
Distribute both configuration files to the other nodes:
[root@node13 hadoop]# scp mapred-site.xml yarn-site.xml node14:`pwd`
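If more than one node needs the files, a loop saves typing one scp per host. This is a minimal sketch: the node list and the `HADOOP_CONF` path are assumptions to adjust for your cluster, and the `echo` makes it a dry run (remove it to actually copy):

```shell
# Dry-run sketch: print the scp command for each target node.
# HADOOP_CONF and the host list are assumptions -- adjust to your install.
HADOOP_CONF=${HADOOP_CONF:-/opt/hadoop/etc/hadoop}
for host in node12 node14; do
  echo scp "$HADOOP_CONF/mapred-site.xml" "$HADOOP_CONF/yarn-site.xml" "$host:$HADOOP_CONF"
done
```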
Once that is done, no formatting step is needed (unlike the HDFS setup); just start YARN directly:
[root@node13 ~]# start-yarn.sh
Then check the processes with jps. If the ResourceManager did not start, start it manually on each of the two RM nodes:
[root@node13 ~]# yarn-daemon.sh start resourcemanager
[root@node14 ~]# yarn-daemon.sh start resourcemanager
To test failover, use jps on node14 to find the ResourceManager's process ID and kill it: kill -9 <pid>
Check that the standby ResourceManager has taken over as active, then restart the killed one:
yarn-daemon.sh start resourcemanager
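A quick way to confirm which ResourceManager is active before and after the kill is `yarn rmadmin -getServiceState`, using the rm-ids rm1/rm2 defined in yarn-site.xml above. These commands need a running cluster, so this sketch only prints them:

```shell
# Dry-run sketch: print the state-check command for each RM id.
# rm1/rm2 come from yarn.resourcemanager.ha.rm-ids in yarn-site.xml.
for id in rm1 rm2; do
  cmd="yarn rmadmin -getServiceState $id"
  echo "$cmd"
done
```

One of the two should report `active` and the other `standby`; after killing the active RM, the roles should swap.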
Finally, verify the cluster with a few basic HDFS commands:
hdfs dfs -get /data/wc/output/* ./
hdfs dfs -ls /data/wc/output
hdfs dfs -put ./test.txt /user/root
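The /data/wc/output path above suggests a wordcount job was used as the end-to-end check. A hedged sketch of that flow, submitting the bundled wordcount example so YARN has to schedule a real job: the jar and HDFS paths are assumptions, and each command is echoed as a dry run (remove the echos to execute on the cluster):

```shell
# Dry-run sketch: wordcount round trip to exercise the HA ResourceManagers.
# Input/output paths and the examples-jar path are assumptions.
IN=/data/wc/input
OUT=/data/wc/output
echo hdfs dfs -mkdir -p "$IN"
echo hdfs dfs -put ./test.txt "$IN"
echo hadoop jar '$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar' wordcount "$IN" "$OUT"
echo hdfs dfs -cat "$OUT/part-r-00000"
```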