
Hadoop HA and HBase HA


Make sure the system clocks on all servers are kept in sync.

I. Hadoop HA

HDFS HA

All of the Hadoop configuration files live under /root/hd/hadoop-2.8.4/etc/hadoop.

1. core-site.xml
<configuration>
    <property>
         <name>fs.defaultFS</name>
         <value>hdfs://mycluster</value>
    </property>
    <property>
         <name>ha.zookeeper.quorum</name>
         <value>hsiehchou123:2181,hsiehchou124:2181</value>
    </property>
    <property>
         <name>hadoop.tmp.dir</name>
         <value>/root/hd/hadoop-2.8.4/tmp</value>
    </property>
</configuration>
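A quick way to confirm that clients actually pick up these settings (a sketch; it assumes the hadoop binaries are on the PATH of the node you run it on) is `hdfs getconf`:

```shell
# Print the resolved value of fs.defaultFS from the active configuration.
# With the core-site.xml above it should report hdfs://mycluster.
hdfs getconf -confKey fs.defaultFS
```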
2. hdfs-site.xml
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>hsiehchou121:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>hsiehchou122:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>hsiehchou121:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>hsiehchou122:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hsiehchou123:8485;hsiehchou124:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_dsa</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>

NameNodes are normally deployed as a pair of 2; JournalNodes (the qjournal quorum) are normally deployed on 3 machines, since the quorum needs a majority of nodes alive to accept edits. I only had four machines to begin with, so I assigned just two JournalNodes here (meaning both must stay up).
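Once the cluster is running, the HA state each NameNode ended up in can be checked with `hdfs haadmin` (a sketch; `nn1`/`nn2` are the IDs defined in dfs.ha.namenodes.mycluster above):

```shell
# Report which NameNode is active and which is standby.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# A manual failover can be triggered with:
#   hdfs haadmin -failover nn1 nn2
# but note this is refused while dfs.ha.automatic-failover.enabled is true;
# with automatic failover, the ZKFC daemons manage transitions instead.
```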

3. yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarncluster</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hsiehchou121</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hsiehchou122</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hsiehchou123,hsiehchou124</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>32768</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>32768</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>24</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/tmp/yarn-logs</value>
    </property>
</configuration>
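The same kind of state check exists for the two ResourceManagers (a sketch, assuming the yarn binary is on PATH; `rm1`/`rm2` are the IDs from yarn.resourcemanager.ha.rm-ids above):

```shell
# Report the HA state of each ResourceManager (active / standby).
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```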

scp -r hadoop/ hsiehchou122:/root/hd/hadoop-2.8.4/etc
scp -r hadoop/ hsiehchou123:/root/hd/hadoop-2.8.4/etc
scp -r hadoop/ hsiehchou124:/root/hd/hadoop-2.8.4/etc

Once configured, distribute the files to all nodes and start ZooKeeper first; then
start-all.sh brings up everything.
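On a brand-new cluster the HA metadata has to be initialized once before start-all.sh will work. A typical first-time sequence for Hadoop 2.8.x looks like the following (a sketch; which host each step runs on follows the node assignments above, adjust to your own):

```shell
# 1. On each JournalNode host (here hsiehchou123 and hsiehchou124):
hadoop-daemon.sh start journalnode

# 2. On the first NameNode (nn1, hsiehchou121): format HDFS and start it.
hdfs namenode -format
hadoop-daemon.sh start namenode

# 3. On the second NameNode (nn2, hsiehchou122): copy nn1's metadata
#    instead of formatting again.
hdfs namenode -bootstrapStandby

# 4. On either NameNode, once: create the failover znode in ZooKeeper.
hdfs zkfc -formatZK

# 5. After this one-time setup, the whole stack starts with:
start-all.sh
```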

II. HBase HA

Edit the configuration files, distribute them to all nodes, and start HBase.
Note: to run two HMasters, the second one must be started by hand.

Note: when installing HBase, the release must match the Hadoop version.

hbase-2.1.4 pairs with hadoop 2.8.4

As a rule, copy Hadoop's core-site.xml and hdfs-site.xml into HBase's conf directory, so HBase can resolve the mycluster nameservice.

Edit hbase-env.sh
Edit hbase-site.xml

1. hbase-env.sh

export JAVA_HOME=/root/hd/jdk1.8.0_192

export HBASE_MANAGES_ZK=false
This disables HBase's bundled ZooKeeper so the cluster's own ZooKeeper ensemble is used instead.

2. hbase-site.xml
<configuration>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://mycluster/hbase</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>hsiehchou123,hsiehchou124</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>zookeeper.session.timeout</name>
        <value>120000</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.tickTime</name>
        <value>6000</value>
    </property>
</configuration>

Start HBase
The standby HMaster has to be started separately on the other server:
./hbase-daemon.sh start master
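To confirm both masters came up, a quick check (a sketch; it assumes jps and the hbase binary are on PATH of the hosts involved):

```shell
# On the backup-master host: confirm the HMaster JVM is running.
jps | grep HMaster

# From any node with HBase installed: non-interactive cluster status,
# which also reports the number of backup masters.
echo "status" | hbase shell -n
```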

The master status page is then available at:
http://192.168.116.122:16010/master-status
