Hadoop High Availability
On each of the other nodes, install the NFS client, create the hadoop user with the same UID (800) so file ownership matches over NFS, and mount the shared home directory from 172.25.40.1:
[[email protected] ~]# yum install -y nfs-utils
[[email protected] ~]# /etc/init.d/rpcbind start
[[email protected] ~]# useradd -u 800 hadoop
[[email protected] ~]# mount 172.25.40.1:/home/hadoop/ /home/hadoop
Mount the shared directory on the three nodes in the same way:
mount 172.25.40.1:/home/hadoop/ /home/hadoop/
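Before going further it is worth confirming that each node really sees the shared directory; a quick check, assuming the export on 172.25.40.1 is already in place:
[[email protected] ~]# showmount -e 172.25.40.1
[[email protected] ~]# df -h /home/hadoop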
[[email protected] ~]# su - hadoop
[[email protected] ~]$ tar zxf zookeeper-3.4.9.tar.gz
[[email protected] ~]$ cd zookeeper-3.4.9
[[email protected] zookeeper-3.4.9]$ cd conf/
[[email protected] conf]$ cp zoo_sample.cfg zoo.cfg
[[email protected] conf]$ vim zoo.cfg
server.1=172.25.40.2:2888:3888
server.2=172.25.40.3:2888:3888
server.3=172.25.40.4:2888:3888
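For reference, the remaining settings can stay at their zoo_sample.cfg defaults; the lines that matter here are the data directory (where the myid file created below must live) and the client port. A sketch of the resulting zoo.cfg:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=172.25.40.2:2888:3888
server.2=172.25.40.3:2888:3888
server.3=172.25.40.4:2888:3888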
[[email protected] conf]$ mkdir /tmp/zookeeper
[[email protected] conf]$ cd /tmp/zookeeper/
[[email protected] zookeeper]$ echo 1 > myid
[[email protected] ~]# su - hadoop
[[email protected] ~]$ mkdir /tmp/zookeeper
[[email protected] ~]$ cd /tmp/zookeeper/
[[email protected] zookeeper]$ echo 2 > myid
[[email protected] ~]# su - hadoop
[[email protected] ~]$ mkdir /tmp/zookeeper
[[email protected] ~]$ cd /tmp/zookeeper/
[[email protected] zookeeper]$ echo 3 > myid
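Each node's myid must match its server.N line in zoo.cfg. Instead of typing it by hand on every node, a hypothetical helper can derive the id from the last octet of the node's IP, assuming the 172.25.40.(N+1) addressing used above:
ip=$(hostname -i)                    # assumes this returns the node's single IPv4 address
mkdir -p /tmp/zookeeper
echo $(( ${ip##*.} - 1 )) > /tmp/zookeeper/myid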
Start the service on each node:
[[email protected] zookeeper]$ cd
[[email protected] ~]$ cd zookeeper-3.4.9
[[email protected] zookeeper-3.4.9]$ cd bin/
[[email protected] bin]$ ./zkServer.sh start
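Once all three nodes are up, each one's role can be checked; one node should report leader and the other two follower:
[[email protected] bin]$ ./zkServer.sh status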
Connect to ZooKeeper:
[[email protected] bin]$ ./zkCli.sh
[zk: localhost:2181(CONNECTED) 0] get /zookeeper/quota
Hadoop configuration
Edit the core-site.xml file:
[[email protected] ~]$ rm -fr /tmp/*
[[email protected] ~]$ cd hadoop
[[email protected] hadoop]$ cd etc/hadoop/
[[email protected] hadoop]$ vim core-site.xml
<configuration>
    <!-- Set the default filesystem to the HDFS nameservice "masters" -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://masters</value>
    </property>
    <!-- Specify the ZooKeeper quorum addresses -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>172.25.40.2:2181,172.25.40.3:2181,172.25.40.4:2181</value>
    </property>
</configuration>
Edit the hdfs-site.xml file:
[[email protected] hadoop]$ vim hdfs-site.xml
<configuration>
    <!-- Set the HDFS nameservice to "masters", matching core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>masters</value>
    </property>
    <!-- The "masters" nameservice contains two NameNodes, h1 and h2 (the names are arbitrary) -->
    <property>
        <name>dfs.ha.namenodes.masters</name>
        <value>h1,h2</value>
    </property>
    <!-- RPC address of NameNode h1 -->
    <property>
        <name>dfs.namenode.rpc-address.masters.h1</name>
        <value>172.25.40.1:9000</value>
    </property>
    <!-- HTTP address of NameNode h1 -->
    <property>
        <name>dfs.namenode.http-address.masters.h1</name>
        <value>172.25.40.1:50070</value>
    </property>
    <!-- RPC address of NameNode h2 -->
    <property>
        <name>dfs.namenode.rpc-address.masters.h2</name>
        <value>172.25.40.5:9000</value>
    </property>
    <!-- HTTP address of NameNode h2 -->
    <property>
        <name>dfs.namenode.http-address.masters.h2</name>
        <value>172.25.40.5:50070</value>
    </property>
    <!-- Where the NameNode edit log is stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://172.25.40.2:8485;172.25.40.3:8485;172.25.40.4:8485/masters</value>
    </property>
    <!-- Where each JournalNode stores its data on local disk -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/tmp/journaldata</value>
    </property>
    <!-- Enable automatic failover on NameNode failure -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Proxy provider that clients use to find the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.masters</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
        sshfence
        shell(/bin/true)
        </value>
    </property>
    <!-- sshfence requires passwordless SSH; point it at the private key -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- Timeout for the sshfence mechanism, in milliseconds -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
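The sshfence method above requires the hadoop user to SSH between the NameNodes without a password. Because /home/hadoop is shared over NFS, generating one key pair and authorizing it once covers every node; a minimal sketch:
[[email protected] ~]$ ssh-keygen -t rsa        # accept the defaults, empty passphrase
[[email protected] ~]$ ssh-copy-id 172.25.40.1  # authorized_keys lands in the shared home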
[[email protected] hadoop]$ vim slaves
[[email protected] hadoop]$ cat slaves
172.25.40.2
172.25.40.3
172.25.40.4
On the three DNs (server2, 3, 4), start journalnode in turn (the first time HDFS is started, the journalnodes must be started first):
[[email protected] bin]$ cd
[[email protected] ~]$ cd hadoop
[[email protected] hadoop]$ sbin/hadoop-daemon.sh start journalnode
[[email protected] hadoop]$ jps
2881 Jps
1849 QuorumPeerMain
2832 JournalNode
Format the HDFS cluster (run once, on the h1 NameNode):
[[email protected] ~]$ cd hadoop
[[email protected] hadoop]$ bin/hdfs namenode -format
Copy the newly formatted NameNode metadata to the standby NameNode (h2), then format ZooKeeper (initialize the HA state):
[[email protected] hadoop]$ scp -r /tmp/hadoop-hadoop/ 172.25.40.5:/tmp/
[[email protected] hadoop]$ bin/hdfs zkfc -formatZK
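If the format succeeded, ZooKeeper now holds a /hadoop-ha/masters znode, which can be verified from any ZooKeeper node:
[[email protected] bin]$ ./zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /hadoop-ha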
Start the HDFS cluster:
[[email protected] hadoop]$ sbin/start-dfs.sh
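When start-dfs.sh returns, jps on each NameNode should show a NameNode and a DFSZKFailoverController process (plus JournalNode and DataNode on server2, 3 and 4), and the active/standby roles can be confirmed with haadmin:
[[email protected] hadoop]$ jps
[[email protected] hadoop]$ bin/hdfs haadmin -getServiceState h1
[[email protected] hadoop]$ bin/hdfs haadmin -getServiceState h2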