Upgrading Hive 1.x to hive 2.1.1, and interoperating with HBase
1. Background
On our big data platform (Hadoop 2.8, Hive 1.2.2, HBase 1.2.6), attempts to exchange data between Hive and HBase through hive-hbase-handler.jar kept failing at the Hive prompt.
The first error was Cannot find class 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'. After confirming with tar -tf $HIVE_HOME/lib/hive-hbase-handler.jar that the class really does exist in the jar, the next attempt failed with:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Surprisingly, searching revealed that Hive 1.2 only supports HBase 0.94 and earlier; HBase 1.2.6 requires Hive 2.x. To make Hive 1.x work with HBase 1.2.6, Hive would have to be recompiled. See the Hive community discussion and issue tracking: https://issues.apache.org/jira/browse/HIVE-10990
Just as I was about to fetch the Hive source from Apache, I noticed that 2.x releases were already available, up to 2.3 at the time. To be safe on compatibility, I downloaded hive-2.1.1 from http://mirrors.hust.edu.cn/apache/hive/stable-2/
That made building from source unnecessary; the task became upgrading Hive in place while staying compatible with the existing metadata.
2. Upgrade steps
2.1 Environment variables and directory setup
Back up the old hive-1.2.2 installation, extract the new package into the same parent directory, and rename it to hive, giving the same absolute path as before, e.g. /home/hadoop/bigdata/hive.
Because the path is unchanged, the existing HIVE_HOME and PATH settings need no edits, and neither does the Hive warehouse directory in HDFS. The only remaining step is to create, under the new hive directory, the few directories the configuration refers to, such as iotmp and log.
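The swap can be sketched as follows. Paths are the ones used in this article; the backup directory name and the tarball location are assumptions, so adjust them to your setup:

```shell
cd /home/hadoop/bigdata
mv hive hive-1.2.2-bak                    # keep the old install around as a backup
tar -xzf ~/apache-hive-2.1.1-bin.tar.gz   # tarball location is an assumption
mv apache-hive-2.1.1-bin hive             # same path as before, so HIVE_HOME/PATH still apply
mkdir -p hive/iotmp hive/log hive/logs    # directories referenced later in the config
```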
2.2 MySQL-related changes (MySQL itself is untouched; the existing database user, grants, and so on are unaffected)
2.2.1 Metadata upgrade
Hive's metadata lives in MySQL, and its schema version is still 1.2.2. After the upgrade, Hive checks the metadata version at startup, so the schema must be upgraded as well. Fortunately Hive ships upgrade scripts for each version step. In hive/scripts/metastore/upgrade/mysql/ there is no direct 1.2 -> 2.1 script, but there are 1.2 -> 2.0 and 2.0 -> 2.1 scripts that can be applied in sequence; the directory also contains a README.
In short (the backup of the original metadata is omitted here, but do it first: it guards against data loss if the upgrade goes wrong, and it lets you verify afterwards that the schema really reached 2.1):
Go into the hive/scripts/metastore/upgrade/mysql directory, log in to MySQL on the command line, run show databases; then use hivemeta; (the metastore database named in javax.jdo.option.ConnectionURL), and source the two scripts:
> source /home/hadoop/bigdata/hive/scripts/metastore/upgrade/mysql/upgrade-1.2.0-to-2.0.0.mysql.sql // sourcing via $HIVE_HOME fails with "file not found"; the absolute path works
> source /home/hadoop/bigdata/hive/scripts/metastore/upgrade/mysql/upgrade-2.0.0-to-2.1.0.mysql.sql
If the console prints a stream of upgrade log lines with no errors, the metadata upgrade essentially succeeded.
Note: a fresh metastore involves no upgrade at all; initialize it with Hive's schematool instead: schematool -dbType mysql -initSchema
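The whole sequence can also be driven non-interactively from the shell. A sketch, assuming the metastore database is called hivemeta on host master with user hive (match these to your javax.jdo.option.ConnectionURL settings):

```shell
# 1. Back up first: the upgrade scripts alter metastore tables in place.
mysqldump -h master -u hive -p hivemeta > hivemeta-backup-1.2.2.sql

# 2. Apply both upgrade scripts in order (there is no direct 1.2 -> 2.1 script).
cd /home/hadoop/bigdata/hive/scripts/metastore/upgrade/mysql
mysql -h master -u hive -p hivemeta < upgrade-1.2.0-to-2.0.0.mysql.sql
mysql -h master -u hive -p hivemeta < upgrade-2.0.0-to-2.1.0.mysql.sql

# 3. Confirm the schema version recorded in the metastore is now 2.1.0.
mysql -h master -u hive -p hivemeta -e "SELECT SCHEMA_VERSION FROM VERSION;"
```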
2.2.2 Installing the MySQL JDBC jar
Copy the mysql-connector jars from the old hive/lib into the new hive/lib, ending up with:
-rw-r--r-- 1 hadoop hadoop 915836 Jan  1  2014 lib/mysql-connector-java-5.1.28.jar
lrwxrwxrwx 1 hadoop hadoop     31 Jun 13 11:17 lib/mysql-connector-java.jar -> mysql-connector-java-5.1.28.jar
lrwxrwxrwx 1 hadoop hadoop     24 Jun 13 11:17 lib/mysql.jar -> mysql-connector-java.jar
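The copy and the two symlinks in that listing can be recreated like this (OLD_HIVE_LIB is a placeholder for wherever the old installation's lib directory was backed up to):

```shell
OLD_HIVE_LIB=/path/to/old/hive/lib        # placeholder: point at the backed-up 1.2.2 lib dir
cp "$OLD_HIVE_LIB"/mysql-connector-java-5.1.28.jar /home/hadoop/bigdata/hive/lib/
cd /home/hadoop/bigdata/hive/lib
ln -s mysql-connector-java-5.1.28.jar mysql-connector-java.jar
ln -s mysql-connector-java.jar mysql.jar
```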
2.3 Hive configuration
2.3.1 Create the config files from the templates
cp hive-default.xml.template hive-site.xml
cp hive-env.sh.template hive-env.sh
cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties
2.3.2 hive-env.sh
# Set HADOOP_HOME to point to a specific hadoop install directory
export HADOOP_HOME=/home/hadoop/bigdata/hadoop
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/home/hadoop/bigdata/hive/conf
2.3.3 hive-exec-log4j2.properties
# list of properties
property.hive.log.level = INFO
property.hive.root.logger = INFO,DRFA
property.hive.log.dir = /home/hadoop/bigdata/hive/log
property.hive.log.file = hive.log
property.hive.perflogger.log.level = INFO
The property names changed considerably between Hive 1.x and 2.x, but their meanings and functions are mostly unchanged, so the counterparts are easy to find. The old hive.log.threshold=ALL setting no longer appears to exist; presumably it is no longer needed.
2.3.4 hive-site.xml: this is the important part, as all the key settings live here. The default template runs to over 3,000 lines (a measure of how feature-rich Hive is); just locate the relevant properties:
<property>
<name>hive.exec.scratchdir</name>
<value>hdfs://master:9000/hive/scratchdir</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/home/hadoop/bigdata/hive/iotmp</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/home/hadoop/bigdata/hive/iotmp/${hive.session.id}_resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/home/hadoop/bigdata/hive/logs</value>
<description>Location of Hive run time structured log file</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>hdfs://master:9000/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://master:3306/hivemeta?createDatabaseIfNotExist=true</value>
<description>
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>666666</value>
<description>password to use against metastore database</description>
</property>
// Two pitfalls with this property: file:/// must have exactly three slashes after the scheme, and there must be no spaces between the comma-separated jar entries, otherwise queries fail at run time
<property>
<name>hive.aux.jars.path</name>
<value>file:///home/hadoop/bigdata/hbase/lib/hbase-common-1.2.6.jar,file:///home/hadoop/bigdata/hbase/lib/protobuf-java-2.5.0.jar,file:///home/hadoop/bigdata/hive/lib/hive-hbase-handler-2.1.1.jar,file:///home/hadoop/bigdata/hive/lib/zookeeper-3.4.6.jar</value>
<description>The location of the plugin jars that contain implementations of user defined functions and serdes.</description>
</property>
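Because a typo in any of these paths only surfaces at query time, it can help to verify them up front. A small sketch (the helper name check_aux_jars is mine; the example paths are the ones from this article):

```shell
check_aux_jars() {
  # $1: comma-separated list of file:/// URIs, as used in hive.aux.jars.path
  echo "$1" | tr ',' '\n' | while read -r uri; do
    path="${uri#file://}"     # strip the scheme; the leading / remains
    if [ -e "$path" ]; then
      echo "OK $path"
    else
      echo "MISSING $path"
    fi
  done
}

# Example with paths from the property above (adjust to your layout):
check_aux_jars "file:///home/hadoop/bigdata/hbase/lib/hbase-common-1.2.6.jar,file:///home/hadoop/bigdata/hive/lib/hive-hbase-handler-2.1.1.jar"
```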
2.3.5 Start the metastore and hiveserver2 services
Because the Hive warehouse lives on HDFS, both services need to be running; in particular, without the metastore service, starting hive fails with:
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
Run on master: nohup hive --service metastore &
Run on slave01: nohup hive --service hiveserver2 &
ps -ef | grep Hive shows whether both services started normally.
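Two further quick checks that the services are actually listening (9083 and 10000 are the default metastore and HiveServer2 ports; the hostnames follow this article's layout):

```shell
nc -z master 9083   && echo "metastore reachable"
nc -z slave01 10000 && echo "hiveserver2 reachable"

# Or go through HiveServer2 with its JDBC client, beeline:
beeline -u jdbc:hive2://slave01:10000 -e "show databases;"
```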
3. Start hive and hbase, create a table, and add data to verify interoperation
Start Hive and run:
hive> create table hbase_table_city (level int, city string)
    > stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > with SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
    > tblproperties ("hbase.table.name" = "t_city");
(level maps to the HBase rowkey and city to cf1:val, the same 'cf1:val' that HBase put commands reference; for multiple column families, append further mappings after it in the same pattern)
hive> select * from hbase_table_city;    -- just an empty table for now
Start HBase and run scan 't_city':
hbase(main):021:0> scan 't_city'
ROW COLUMN+CELL
0 row(s) in 0.0190 seconds
hbase(main):022:0> put 't_city', 1, 'cf1:val', 'Beijing'
0 row(s) in 0.0100 seconds
hbase(main):023:0> scan 't_city'
ROW COLUMN+CELL
1 column=cf1:val, timestamp=1504782348589, value=Beijing
1 row(s) in 0.0260 seconds
hbase(main):024:0> put 't_city', 2, 'cf1:val', 'Shanghai'
0 row(s) in 0.0190 seconds
hbase(main):025:0> put 't_city', 3, 'cf1:val', 'Guangzhou'
0 row(s) in 0.0060 seconds
hbase(main):026:0> put 't_city', 4, 'cf1:val', 'Hangzhou'
0 row(s) in 0.0050 seconds
hbase(main):027:0> put 't_city', 5, 'cf1:val', 'Shenzhen'
0 row(s) in 0.0050 seconds
hbase(main):028:0> scan 't_city'
ROW COLUMN+CELL
1 column=cf1:val, timestamp=1504782348589, value=Beijing
2 column=cf1:val, timestamp=1504782376991, value=Shanghai
3 column=cf1:val, timestamp=1504782385734, value=Guangzhou
4 column=cf1:val, timestamp=1504782392031, value=Hangzhou
5 column=cf1:val, timestamp=1504782400458, value=Shenzhen
5 row(s) in 0.0320 seconds
Tables created and rows inserted in HBase are stored inside HBase itself (unlike Hive, which is just a data warehouse client tool that stores no data of its own; everything it operates on lives in HDFS). HBase's underlying files, HFiles, are also kept on HDFS: browsing http://localhost:50070/explorer.html#/hbase/data/default shows t_city alongside the other tables created in HBase.
Back in Hive, run select * from hbase_table_city; and the data is now there:
OK
1	Beijing
2	Shanghai
3	Guangzhou
4	Hangzhou
5	Shenzhen
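With the mapping in place, Hive can run full HiveQL over the rows stored in HBase, not just plain scans. A sketch using non-interactive hive -e invocations; the results depend on whatever rows HBase holds at the time:

```shell
hive -e "SELECT count(*) FROM hbase_table_city;"
hive -e "SELECT city FROM hbase_table_city WHERE level <= 2;"
```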
With that, the hive 1.2 -> 2.1 upgrade is complete, the existing data stayed compatible, and Hive and HBase now interoperate on the same data. These components and tools really are powerful!
Appendix:
Hive community discussion and issue tracking for the problem described in the background: https://issues.apache.org/jira/browse/HIVE-10990
Hive 2.1 download mirror: http://mirrors.hust.edu.cn/apache/hive/stable-2/