
HBase Installation and Configuration

Understanding HBase

HBase Features

  • Large capacity: a single table can hold tens of billions of rows and millions of columns; the supported scale is elastic in both dimensions of the data matrix
  • Multi-version: every cell can store multiple versions of its value
  • Sparsity: empty columns consume no storage, so tables can be designed to be extremely sparse
  • Strong read/write consistency: not an "eventually consistent" store, which makes it well suited to fast aggregation
  • Automatic sharding: data is spread across the cluster in Regions, which split and rebalance automatically as the row count grows
  • Hadoop/HDFS integration: works with HDFS out of the box with no complicated glue; highly scalable, since adding DataNodes adds storage
  • Rich yet concise and efficient APIs: HBase can be accessed via Thrift/REST APIs, the Java API, and more
  • Block cache and Bloom filters for efficient query optimization
  • Operational tooling: a built-in web UI for administration, plus JMX metrics for monitoring
  • High reliability: the WAL mechanism ensures that writes are not lost when the cluster fails, giving strong fault tolerance; in CAP terms, HBase chooses CP
  • Column-oriented storage with access control, independent retrieval, and dynamic column addition; because data is stored by column, queries that touch only a few fields read far less data
  • High performance: random access and real-time reads/writes over massive data. Writes: the underlying LSM data structure and sorted rowkey layout give HBase very high write throughput. Reads: region splitting, the rowkey index, and caching give solid random-read performance at scale, with rowkey lookups reaching millisecond latency

In summary, HBase is a highly reliable, high-performance, column-oriented, scalable distributed database, and an open-source implementation of Google's Bigtable. It is mainly used to store loosely structured (unstructured and semi-structured) data. HBase is designed to handle very large tables: by scaling horizontally across clusters of commodity machines, it can serve tables with more than 10 billion rows and millions of columns. See the official documentation for more details.

Download

Note: this guide was written against Hadoop 2.9.2; watch out for version compatibility.

下載地址:https://mirrors.tuna.tsinghua.edu.cn/apache/hbase/2.2.6/


Installation

After downloading, extract the archive to /home/apps/ and rename the extracted directory to hbase:

[root@XAA01 dev]# tar -zxvf hbase-2.2.6-bin.tar.gz
[root@XAA01 dev]# mkdir -p /home/apps
[root@XAA01 dev]# mv hbase-2.2.6 hbase
[root@XAA01 dev]# mv hbase /home/apps/
[root@XAA01 dev]# cd /home/apps/
[root@XAA01 apps]# ll
total 12
drwxr-xr-x. 7 root   root   4096 Jan 21 17:13 hbase
drwxr-xr-x. 9 centos centos 4096 Dec 19  2017 sqoop
drwxr-xr-x. 8 root   root   4096 Jan 15 20:58 zookeeper

Configuration

HBase requires editing two files: hbase-env.sh and hbase-site.xml.

Edit hbase-env.sh in the conf directory:

[root@XAA01 apps]# cd hbase/conf
[root@XAA01 conf]# vi hbase-env.sh
#!/usr/bin/env bash
#
#/**
# * Licensed to the Apache Software Foundation (ASF) under one
# * or more contributor license agreements.  See the NOTICE file
# * distributed with this work for additional information
# * regarding copyright ownership.  The ASF licenses this file
# * to you under the Apache License, Version 2.0 (the
# * "License"); you may not use this file except in compliance
# * with the License.  You may obtain a copy of the License at
# *
# *     http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */

# Set environment variables here.

# This script sets variables multiple times over the course of starting an hbase process,
# so try to keep things idempotent unless you want to take an even deeper look
# into the startup scripts (bin/hbase, etc.)

# The java implementation to use.  Java 1.8+ required.
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_221

# Extra Java CLASSPATH elements.  Optional.
# The maximum amount of heap to use. Default is left to JVM default.
# export HBASE_HEAPSIZE=1G

# Uncomment below if you intend to use off heap cache. For example, to allocate 8G of
# offheap, set the value to "8G".
# export HBASE_OFFHEAPSIZE=1G

# Extra Java runtime options.
# Below are what we set by default.  May only work with SUN JVM.
# For more on why as well as other possible settings,
# see http://hbase.apache.org/book.html#performance
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"

# This enables basic gc logging to the .out file.
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .

# Uncomment one of the below three options to enable java garbage collection logging for the client processes.

# This enables basic gc logging to the .out file.
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .

# See the package documentation for org.apache.hadoop.hbase.io.hfile for other configurations
# needed setting up off-heap block caching.

# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
# NOTE: HBase provides an alternative JMX implementation to fix the random ports issue, please see JMX
# section in HBase Reference Guide for instructions.

# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
# export HBASE_REST_OPTS="$HBASE_REST_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10105"

# File naming hosts on which HRegionServers will run.  $HBASE_HOME/conf/regionservers by default.
# export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers

# Uncomment and adjust to keep all the Region Server pages mapped to be memory resident
#HBASE_REGIONSERVER_MLOCK=true
#HBASE_REGIONSERVER_UID="hbase"

# File naming hosts on which backup HMaster will run.  $HBASE_HOME/conf/backup-masters by default.
# export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters

# Extra ssh options.  Empty by default.
# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"

# Where log files are stored.  $HBASE_HOME/logs by default.
# export HBASE_LOG_DIR=${HBASE_HOME}/logs

# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"
# export HBASE_REST_OPTS="$HBASE_REST_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8074"

# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HBASE_NICENESS=10

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage it's own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=false

# The default log rolling policy is RFA, where the log file is rolled as per the size defined for the
# RFA appender. Please refer to the log4j.properties file to see more details on this appender.
# In case one needs to do log rolling on a date change, one should set the environment property
# HBASE_ROOT_LOGGER to "<DESIRED_LOG LEVEL>,DRFA".
# For example:
# HBASE_ROOT_LOGGER=INFO,DRFA
# The reason for changing default to RFA is to avoid the boundary case of filling out disk space as
# DRFA doesn't put any cap on the log size. Please refer to HBase-5655 for more context.

# Tell HBase whether it should include Hadoop's lib when start up,
# the default value is false,means that includes Hadoop's lib.
# export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true"

The two lines to change in hbase-env.sh:

export HBASE_MANAGES_ZK=false                   (HBase bundles its own ZooKeeper; setting this to false disables it so the external ZooKeeper ensemble is used instead)
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_221    (JDK installation path)
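These two edits can also be applied non-interactively, which helps when provisioning several nodes. A minimal sketch using grep/echo, demonstrated on a scratch copy of the file; point ENV_FILE at your real $HBASE_HOME/conf/hbase-env.sh to apply the change for real (the paths are the ones used in this guide):

```shell
# Demonstrated on a scratch copy; set ENV_FILE to your real
# $HBASE_HOME/conf/hbase-env.sh to modify the actual file.
SCRATCH=$(mktemp -d)
ENV_FILE="$SCRATCH/hbase-env.sh"
printf '# Set environment variables here.\n' > "$ENV_FILE"

# Append each setting only if it is not already present (idempotent).
grep -q '^export JAVA_HOME=' "$ENV_FILE" || \
  echo 'export JAVA_HOME=/usr/local/jdk/jdk1.8.0_221' >> "$ENV_FILE"
grep -q '^export HBASE_MANAGES_ZK=' "$ENV_FILE" || \
  echo 'export HBASE_MANAGES_ZK=false' >> "$ENV_FILE"

tail -n 2 "$ENV_FILE"
```

Because each line is appended only when missing, re-running the script on an already-configured node is harmless.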

Edit hbase-site.xml:

[root@XAA01 conf]# vi hbase-site.xml

Add the following properties inside the <configuration> element:

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
</property>
<!-- Not needed in standalone mode; set to true for a multi-node cluster -->
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<!-- ZooKeeper client port -->
<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
</property>
<!-- HDFS replication factor -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<!-- ZooKeeper ensemble nodes; a single-node setup needs only one entry -->
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>master:2181,slave1:2181,slave2:2181</value>
</property>
<!-- HBase master port; defaults to 60000 -->
<property>
    <name>hbase.master.port</name>
    <value>60000</value>
</property>
<!-- ZooKeeper dataDir (must match dataDir in your ZooKeeper config) -->
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/apps/zookeeper/data</value>
</property>
<!-- ZooKeeper dataLogDir (must match dataLogDir in your ZooKeeper config) -->
<property>
    <name>hbase.zookeeper.property.dataLogDir</name>
    <value>/home/apps/zookeeper/logs</value>
</property>
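A malformed hbase-site.xml is a common cause of startup failure, so it is worth confirming that the file parses before starting HBase. A minimal sketch using Python's standard-library XML parser, demonstrated on a scratch file; point SITE_FILE at your real conf/hbase-site.xml to check the actual configuration:

```shell
# Demonstrated on a scratch file; set SITE_FILE to your real
# conf/hbase-site.xml to validate the actual configuration.
SCRATCH=$(mktemp -d)
SITE_FILE="$SCRATCH/hbase-site.xml"
cat > "$SITE_FILE" <<'EOF'
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>
</configuration>
EOF

# Parse the file; a non-zero exit status means the XML is malformed.
python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$SITE_FILE" \
  && echo "hbase-site.xml is well-formed"
```

This catches the classic mistakes (a property pasted outside <configuration>, an unclosed tag) before they surface as cryptic stack traces in the HBase logs.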

Edit the environment variables:

[root@XAA01 conf]# vi /etc/profile

Add:

export HBASE_HOME=/home/apps/hbase
export PATH=$PATH:$HBASE_HOME/bin

Apply the changes immediately:

[root@XAA01 conf]# source /etc/profile
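After sourcing the profile, a quick sanity check confirms that the HBase bin directory actually landed on PATH. The exports are repeated below so the snippet is self-contained (normally they come from /etc/profile); /home/apps/hbase is the install directory used in this guide:

```shell
# Repeat the exports so this check is self-contained; normally they are
# picked up from /etc/profile after `source /etc/profile`.
export HBASE_HOME=/home/apps/hbase
export PATH=$PATH:$HBASE_HOME/bin

# Confirm the bin directory made it onto PATH.
echo "$PATH" | grep -q "$HBASE_HOME/bin" && echo "HBASE_HOME/bin is on PATH"
```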

Startup

Note: Hadoop and ZooKeeper must be started before starting HBase.

[root@XAA01 ~]# start-hbase.sh
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/apps/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
running master, logging to /home/apps/hbase/logs/hbase-root-master-XAA01.out
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/apps/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
: running regionserver, logging to /home/apps/hbase/logs/hbase-root-regionserver-XAA01.out
: SLF4J: Class path contains multiple SLF4J bindings.
: SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
: SLF4J: Found binding in [jar:file:/home/apps/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
: SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[root@XAA01 ~]# 

Check whether the startup succeeded:

[root@XAA01 ~]# jps
4050 NameNode
12804 QuorumPeerMain
9141 HRegionServer
4503 SecondaryNameNode
4248 DataNode
9736 Jps
8985 HMaster
4892 NodeManager
4751 ResourceManager

HMaster and HRegionServer both appear in the list, so HBase started successfully.
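The jps check can also be made mechanical, which is handy in startup scripts. A sketch, demonstrated on the listing captured above; on a live node, replace the hard-coded string with the output of `jps` itself:

```shell
# Captured jps output from the listing above; on a live node use:
#   JPS_OUTPUT=$(jps)
JPS_OUTPUT='12804 QuorumPeerMain
9141 HRegionServer
8985 HMaster'

# The daemons expected after start-hbase.sh (plus the external ZooKeeper).
for proc in HMaster HRegionServer QuorumPeerMain; do
  echo "$JPS_OUTPUT" | grep -q "$proc" && echo "$proc is running"
done
```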

Closing note: big-data Hadoop notes, HBase installation and configuration.