Setting Up a Hadoop HA Cluster
The difficulty of building an HA cluster lies mainly in writing the configuration files. Be careful, be careful, be careful!
In HA mode there is no Secondary NameNode: the standby NameNode performs the checkpointing instead.
Cluster node role planning (7 nodes)
------------------
server01 namenode zkfc
server02 namenode zkfc
server03 resourcemanager
server04 resourcemanager
server05 datanode nodemanager zookeeper journal node
server06 datanode nodemanager zookeeper journal node
server07 datanode nodemanager zookeeper journal node
------------------
Cluster node role planning (3 nodes)
------------------
server01 namenode resourcemanager zkfc nodemanager datanode zookeeper journal node
server02 namenode resourcemanager zkfc nodemanager datanode zookeeper journal node
server03 datanode nodemanager zookeeper journal node
------------------
1. Change the Linux hostname
2. Change the IP address
3. Map hostnames to IPs in /etc/hosts
4. Disable the firewall
5. Set up passwordless SSH login
6. Install the JDK and configure the environment variables
7. Synchronize time across the cluster (a minimal sketch of steps 3, 4, and 7 follows this list)
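For steps 3, 4, and 7, a minimal sketch, assuming CentOS 7 hosts and the hadoop00-hadoop07 hostnames used later in this guide; the IP addresses below are placeholders, substitute your own:
#Map hostnames to IPs on every node (example addresses, adjust to your network)
cat >> /etc/hosts <<EOF
192.168.1.200 hadoop00
192.168.1.201 hadoop01
192.168.1.202 hadoop02
192.168.1.203 hadoop03
192.168.1.204 hadoop04
192.168.1.205 hadoop05
192.168.1.206 hadoop06
192.168.1.207 hadoop07
EOF
#Disable the firewall (CentOS 7; on CentOS 6 use: service iptables stop)
systemctl stop firewalld
systemctl disable firewalld
#One-shot time sync against a public NTP server (run on every node, or point at a local NTP server)
ntpdate us.pool.ntp.org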
Installation steps:
1. Install and configure the ZooKeeper cluster
1.1 Extract
tar -zxvf zookeeper-3.4.5.tar.gz -C /home/hadoop/app/
1.2 Edit the configuration
cd /home/hadoop/app/zookeeper-3.4.5/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
Change:
dataDir=/home/hadoop/app/zookeeper-3.4.5/tmp
Append at the end:
server.1=hadoop05:2888:3888
server.2=hadoop06:2888:3888
server.3=hadoop07:2888:3888
Save and exit.
Then create a tmp directory and write this node's id into the myid file:
mkdir /home/hadoop/app/zookeeper-3.4.5/tmp
echo 1 > /home/hadoop/app/zookeeper-3.4.5/tmp/myid
1.3 Copy the configured ZooKeeper to the other nodes (first create the target directory on hadoop06 and hadoop07: mkdir -p /home/hadoop/app)
scp -r /home/hadoop/app/zookeeper-3.4.5/ hadoop06:/home/hadoop/app/
scp -r /home/hadoop/app/zookeeper-3.4.5/ hadoop07:/home/hadoop/app/
Note: change the contents of /home/hadoop/app/zookeeper-3.4.5/tmp/myid on hadoop06 and hadoop07 accordingly (a quick check across all three nodes follows below):
hadoop06:
echo 2 > /home/hadoop/app/zookeeper-3.4.5/tmp/myid
hadoop07:
echo 3 > /home/hadoop/app/zookeeper-3.4.5/tmp/myid
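A quick sanity check, assuming passwordless SSH (prerequisite step 5) is already in place: each node should report an id matching its server.N line in zoo.cfg.
for h in hadoop05 hadoop06 hadoop07; do
  echo -n "$h: "
  ssh $h cat /home/hadoop/app/zookeeper-3.4.5/tmp/myid
done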
2. Install and configure the Hadoop cluster
2.1 Extract
tar -zxvf hadoop-2.6.4.tar.gz -C /home/hadoop/app/
2.2 Configure HDFS (in Hadoop 2.x, all configuration files are under the $HADOOP_HOME/etc/hadoop directory)
#Add hadoop to the environment variables
vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_55
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
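Reload the profile so the new variables take effect in the current shell:
source /etc/profile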
#All of Hadoop 2.x's configuration files live under $HADOOP_HOME/etc/hadoop
cd /home/hadoop/app/hadoop-2.6.4/etc/hadoop
2.2.1 Edit hadoop-env.sh
export JAVA_HOME=/home/hadoop/app/jdk1.7.0_55
2.2.2 Edit core-site.xml
<configuration>
    <!-- The cluster name is specified here; it must match the nameservice configured in hdfs-site.xml -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cluster1</value>
    </property>
    <!-- Base directory where the NameNode, DataNode, JournalNode, etc. store their data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/hadoop-2.6.4/tmp</value>
    </property>
    <!-- Addresses and ports of the ZooKeeper ensemble. Note: the node count must be odd and at least three -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
    </property>
</configuration>
2.2.3 Edit hdfs-site.xml
<configuration>
    <!-- Name the HDFS nameservice cluster1; must match core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>cluster1</value>
    </property>
    <!-- cluster1 has two NameNodes: nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.cluster1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.cluster1.nn1</name>
        <value>hadoop00:9000</value>
    </property>
    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.cluster1.nn1</name>
        <value>hadoop00:50070</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.cluster1.nn2</name>
        <value>hadoop01:9000</value>
    </property>
    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.cluster1.nn2</name>
        <value>hadoop01:50070</value>
    </property>
    <!-- Where the NameNode's edits metadata is stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop05:8485;hadoop06:8485;hadoop07:8485/cluster1</value>
    </property>
    <!-- Where each JournalNode keeps its data on local disk -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/app/hdpdata/journaldata</value>
    </property>
    <!-- Enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- The class clients use to locate the active NameNode during failover -->
    <property>
        <name>dfs.client.failover.proxy.provider.cluster1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods; to list several, put one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <!-- The sshfence method requires passwordless SSH -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- Timeout for the sshfence method -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
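Before starting anything, you can sanity-check these files with hdfs getconf (expected values shown as comments):
hdfs getconf -confKey fs.defaultFS                            #hdfs://cluster1
hdfs getconf -confKey dfs.ha.namenodes.cluster1               #nn1,nn2
hdfs getconf -confKey dfs.namenode.rpc-address.cluster1.nn1   #hadoop00:9000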
2.2.4 Edit mapred-site.xml (if it does not exist yet, copy it from mapred-site.xml.template)
<configuration>
    <!-- Use YARN as the MapReduce framework -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
2.2.5 Edit yarn-site.xml
<configuration>
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- Cluster id for the RM pair -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- Logical names of the two RMs -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- Host of each RM -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop02</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop03</value>
    </property>
    <!-- ZooKeeper ensemble address -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop05:2181,hadoop06:2181,hadoop07:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
2.2.6 Edit slaves
(slaves lists the worker nodes. HDFS is started from hadoop00 and YARN from hadoop02, so the slaves file on hadoop00 identifies the DataNodes and the slaves file on hadoop02 identifies the NodeManagers.)
hadoop05
hadoop06
hadoop07
2.2.7 Distribute the installation package
#Distribute the package to the other machines (repeat for each of the remaining nodes)
scp -r /home/hadoop/app/hadoop-2.6.4 root@hadoop06:/home/hadoop/app/
2.2.8 Configure passwordless SSH login
#First configure passwordless login from hadoop00 to hadoop01, hadoop02, hadoop03, hadoop04, hadoop05, hadoop06, and hadoop07
#Generate a key pair on hadoop00
ssh-keygen -t rsa
#Copy the public key to every node, ****including this one****
ssh-copy-id hadoop00
ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03
ssh-copy-id hadoop04
ssh-copy-id hadoop05
ssh-copy-id hadoop06
ssh-copy-id hadoop07
#Note: passwordless SSH must also be configured between the two NameNodes, because sshfence logs in remotely to kill the old active NameNode during failover (a quick connectivity check follows below)
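A quick way to confirm the keys landed everywhere; each host should print its hostname without prompting for a password:
for h in hadoop00 hadoop01 hadoop02 hadoop03 hadoop04 hadoop05 hadoop06 hadoop07; do
  ssh -o BatchMode=yes $h hostname
done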
###Note: follow the steps below strictly, in order!
2.5 Start the ZooKeeper cluster
(start ZooKeeper on hadoop05, hadoop06, and hadoop07; a loop that does this over SSH follows below)
bin/zkServer.sh start
#Check the status: one leader, two followers
bin/zkServer.sh status
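Instead of logging in to each node, a minimal sketch that starts and checks all three over SSH, assuming the same install path everywhere and that JAVA_HOME is visible to non-interactive shells:
for h in hadoop05 hadoop06 hadoop07; do
  ssh $h "/home/hadoop/app/zookeeper-3.4.5/bin/zkServer.sh start"
done
for h in hadoop05 hadoop06 hadoop07; do
  echo -n "$h: "
  ssh $h "/home/hadoop/app/zookeeper-3.4.5/bin/zkServer.sh status" 2>/dev/null | grep Mode
done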
2.6 Manually start the JournalNodes
(run on hadoop05, hadoop06, and hadoop07)
hadoop-daemon.sh start journalnode
#Run jps to verify that a JournalNode process now appears on hadoop05, hadoop06, and hadoop07
2.7 Format the NameNode
#Run on hadoop00:
hdfs namenode -format
#Formatting creates the initial HDFS metadata under the directory set by hadoop.tmp.dir in core-site.xml
Copy everything under the hadoop.tmp.dir directory to the machine hosting the other NameNode:
scp -r tmp/ hadoop01:/home/hadoop/app/hadoop-2.6.4/
##Alternatively (recommended): run hdfs namenode -bootstrapStandby on hadoop01 instead of copying the directory by hand
2.8 Format ZKFC (run it once, on the NameNode that will be active, i.e. hadoop00)
hdfs zkfc -formatZK
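To confirm the format worked, check ZooKeeper; by default zkfc creates its lock znode under /hadoop-ha:
/home/hadoop/app/zookeeper-3.4.5/bin/zkCli.sh -server hadoop05:2181
#inside the zkCli shell:
ls /hadoop-ha        #expect [cluster1]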
2.9 Start HDFS (run on hadoop00)
start-dfs.sh
2.10 Start YARN
start-yarn.sh
#The standby ResourceManager must still be started manually on the other RM node (hadoop03):
yarn-daemon.sh start resourcemanager
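You can then confirm the two ResourceManagers' HA states (one should be active, the other standby):
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2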
At this point hadoop-2.6.4 is fully configured, and you can check it in a browser:
http://hadoop00:50070
NameNode 'hadoop00:9000' (active)
http://hadoop01:50070
NameNode 'hadoop01:9000' (standby)
Verify HDFS HA
First upload a file to HDFS:
hadoop fs -put /etc/profile /profile
hadoop fs -ls /
Then kill the active NameNode:
kill -9 <pid of NN>
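One way to find the PID, assuming the JDK's jps tool is on the PATH:
jps | grep NameNode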
Browse to: http://hadoop01:50070
NameNode 'hadoop01:9000' (active)
The NameNode on hadoop01 has now become active.
Run the command again:
hadoop fs -ls /
-rw-r--r-- 3 root supergroup 1926 2014-02-06 15:36 /profile
The file uploaded earlier is still there!
Manually restart the NameNode that was killed:
hadoop-daemon.sh start namenode
Browse to: http://hadoop00:50070
NameNode 'hadoop00:9000' (standby)
Verify YARN:
Run the WordCount demo that ships with Hadoop:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /profile /out
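When the job finishes, the counts are written to /out; each reducer produces a part-r-NNNNN file:
hadoop fs -cat /out/part-r-00000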
OK, all done!
Some commands for checking the cluster's working state (a small script bundling them follows this list):
hdfs dfsadmin -report #Show status information for each HDFS node
bin/hdfs haadmin -getServiceState nn1 #Get the HA state of one NameNode
sbin/hadoop-daemon.sh start namenode #Start a single NameNode process
./hadoop-daemon.sh start zkfc #Start a single zkfc process
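A minimal health-check sketch bundling the commands above (the nn1/nn2 and rm1/rm2 ids match the configs earlier in this guide):
#!/bin/bash
#Print the HA state of both NameNodes and both ResourceManagers,
#then the first lines of the HDFS capacity report
for nn in nn1 nn2; do
  echo -n "NameNode $nn: "
  hdfs haadmin -getServiceState $nn
done
for rm in rm1 rm2; do
  echo -n "ResourceManager $rm: "
  yarn rmadmin -getServiceState $rm
done
hdfs dfsadmin -report | head -n 6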