HBase Study (2): Standalone and Highly Available Fully Distributed Installation and Deployment
阿新 • Published: 2019-01-22
HBase version: 2.0.4. For the Hadoop compatibility matrix, see http://hbase.apache.org/book.html#hadoop
My Hadoop version is 3.1.
1. Standalone HBase
1.1 Extract the archive
tar xf hbase-2.0.4-bin.tar.gz -C /opt/

1.2 Configure environment variables
Edit /etc/profile:

export HBASE_HOME=/opt/hbase-2.0.4
export PATH=$PATH:$HBASE_HOME/bin

Then apply the changes:
source /etc/profile
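As a quick sanity check, the sketch below (assuming the paths configured above) confirms the variables took effect after sourcing the profile:

```shell
# Verify the profile changes: HBASE_HOME is set and its bin dir is on PATH
export HBASE_HOME=/opt/hbase-2.0.4
export PATH=$PATH:$HBASE_HOME/bin
echo "$HBASE_HOME"
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "HBASE_HOME/bin is on PATH" ;;
  *)                     echo "HBASE_HOME/bin is missing from PATH" ;;
esac
```

On a correctly configured node this also means `hbase` and `start-hbase.sh` resolve from any directory.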
1.3 Configure hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_191-amd64
export HBASE_MANAGES_ZK=false
1.4 Configure hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/testuser/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/testuser/zookeeper</value>
  </property>
</configuration>
1.5 Start HBase
start-hbase.sh
An error appeared on startup:

running master, logging to /opt/hbase-2.0.4/logs/hbase-root-master-node04.out
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-2.0.4/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

The cause is two conflicting log4j binding jars on the classpath; sidelining one of them fixes it:

mv /opt/hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar /opt/hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar.bak
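If you are unsure which jars clash, a `find` over both lib directories lists the bindings. The sketch below simulates the two directories under /tmp so it runs anywhere; on a real install, point `find` at $HBASE_HOME/lib and $HADOOP_HOME/share/hadoop/common/lib instead:

```shell
# Count slf4j-log4j12 bindings across the two lib directories. The demo dirs
# under /tmp stand in for the real HBase and Hadoop lib directories.
DEMO=/tmp/cp-demo
mkdir -p "$DEMO/hbase-lib" "$DEMO/hadoop-lib"
touch "$DEMO/hbase-lib/slf4j-log4j12-1.7.25.jar"
touch "$DEMO/hadoop-lib/slf4j-log4j12-1.7.25.jar"
find "$DEMO" -name 'slf4j-log4j12-*.jar' | wc -l   # more than 1 means a conflict
```

Whichever copy you keep, renaming the other with a `.bak` suffix (as above) is reversible, unlike deleting it.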
Note on HBASE_MANAGES_ZK in hbase-env.sh: the default is to use HBase's bundled ZooKeeper instance. In a fully distributed environment with an external ZooKeeper ensemble, this must be set to false; if it is left at true, HBase starts and stops its own ZK instance along with the cluster.
Enter the HBase shell:

hbase shell
2. Highly Available Fully Distributed Deployment
2.1 Node role assignment
| Node   | namenode01 | namenode02 | zk | datanode | zkfc | journalnode | Hmaster | Hregionserver |
|--------|------------|------------|----|----------|------|-------------|---------|---------------|
| node01 | √          |            |    |          | √    | √           | √       |               |
| node02 |            | √          | √  | √        | √    | √           | √       | √             |
| node03 |            |            | √  | √        |      | √           |         | √             |
| node04 |            |            | √  | √        |      |             |         | √             |
2.2 Configure environment variables in /etc/profile
Configure this on every node, and do not forget to source the file afterwards.
export HBASE_HOME=/opt/hbase-2.0.4
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HIVE_HOME/bin:$HBASE_HOME/bin
2.3 Configure hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_191-amd64
export HBASE_MANAGES_ZK=false
2.3 Configure hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://mycluster:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node02,node03,node04</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.master.maxclockskew</name>
    <value>150000</value>
  </property>
</configuration>
2.4 Configure regionservers
[root@node01 conf]# cat regionservers
node02
node03
node04
2.5 Configure backup-masters
(Note: this file does not exist by default; create and edit it yourself.)

[root@node01 conf]# cat backup-masters
node02
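The two worker files above can also be generated from the shell. This sketch writes them under /tmp so it is self-contained; on node01 the real target directory is $HBASE_HOME/conf:

```shell
# Generate regionservers and backup-masters (demo target under /tmp;
# swap in $HBASE_HOME/conf on the real master node)
CONF=/tmp/hbase-conf-demo
mkdir -p "$CONF"
printf '%s\n' node02 node03 node04 > "$CONF/regionservers"
printf '%s\n' node02 > "$CONF/backup-masters"
cat "$CONF/regionservers"
```

Each file is simply one hostname per line, which is why `printf '%s\n'` is enough to build them.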
2.6 Copy hdfs-site.xml
Copy hdfs-site.xml into HBase's conf directory:

[root@node01 conf]# cp /opt/hadoop-3.1.1/etc/hadoop/hdfs-site.xml /opt/hbase-2.0.4/conf/

2.7 Distribute HBase to the other nodes
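A dry-run sketch of the distribution step (hostnames and target path assumed from the role table above; the `echo` prints each command instead of executing scp, so remove it to really copy):

```shell
# Dry run: print the scp command for each worker node instead of running it
for host in node02 node03 node04; do
  echo scp -r /opt/hbase-2.0.4 "root@${host}:/opt/"
done
```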
The distribution itself is routine (copy the HBase directory to each node), so the details are omitted here.

2.8 Some problems encountered during startup
Add configuration based on what the logs report.

Error 1:

java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures, but the underlying filesystem does not support doing so. Please check the config value of 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount that can provide it.

Fix: add the following to hbase-site.xml:

<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>
Error 2:

regionserver.HRegionServer: STOPPED: Unhandled: org.apache.hadoop.hbase.ClockOutOfSyncException: Server node02,16020,1548146100212 has been rejected; Reported time is too far out of sync with master. Time difference of 122815ms > max allowed of 30000ms

Fix: add the following to hbase-site.xml. Note that raising the allowed skew is a workaround; the proper long-term fix is to synchronize the nodes' clocks, e.g. with NTP.

<property>
  <name>hbase.master.maxclockskew</name>
  <value>150000</value>
</property>
2.9 Start the HBase cluster
[root@node01 conf]# start-hbase.sh
running master, logging to /opt/hbase-2.0.4/logs/hbase-root-master-node01.out
node02: running regionserver, logging to /opt/hbase-2.0.4/bin/../logs/hbase-root-regionserver-node02.out
node04: running regionserver, logging to /opt/hbase-2.0.4/bin/../logs/hbase-root-regionserver-node04.out
node03: running regionserver, logging to /opt/hbase-2.0.4/bin/../logs/hbase-root-regionserver-node03.out
node02: running master, logging to /opt/hbase-2.0.4/bin/../logs/hbase-root-master-node02.out
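Beyond the startup log, `jps` on each node should list the expected daemons: HMaster on node01 and node02, HRegionServer on node02 through node04. The sketch below parses sample jps-style output instead of calling a live `jps`, just to show the check; replace the sample with real `jps` output on a cluster node:

```shell
# Check a jps-style listing for the expected HBase daemons (sample output
# stands in for a live `jps` call so this sketch runs anywhere)
sample='2001 HMaster
2087 HRegionServer
2365 Jps'
for daemon in HMaster HRegionServer; do
  printf '%s\n' "$sample" | grep -q "$daemon" && echo "$daemon is running"
done
```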
2.10 Verification
Check HBase through the web portal, which uses port 16010. Visiting node01:16010 in a browser shows the expected status page.
Then try the hbase shell.
2.11 Using the hbase shell
Create a table named test with a column family cf:

hbase(main):004:0> create 'test','cf'
Created table test
Took 4.6111 seconds
=> Hbase::Table - test
Put a value into the cf column family, with column name `name` and value `xiaoming`:
hbase(main):005:0> put 'test','111','cf:name','xiaoming'
Took 1.8724 seconds
Data put into a table is not written to HDFS immediately: HBase first buffers writes in memory, and only spills them to disk once memory usage crosses a threshold.
To force an immediate write to disk, use the flush command:
hbase(main):007:0> flush 'test'
Took 2.6511 seconds
2.12 Verifying high availability
Manually kill the HMaster process on node01 and check whether the active master fails over to node02.
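One way to do that is to find the PID with `jps` and kill it. This is a dry-run sketch: sample jps output stands in for the real command, and `echo` prints the kill command instead of running it; on node01 you would pipe live `jps` output through the same `awk` filter:

```shell
# Locate the HMaster PID from jps-style output and build the kill command
sample='12345 HMaster
12400 Jps'
pid=$(printf '%s\n' "$sample" | awk '/HMaster/ {print $1}')
echo "kill -9 $pid"   # prints the command; remove echo to actually kill
```

After killing the process, the master web UI on node02:16010 should show node02 as the active master.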