
Big Data Fundamentals: Hive

http://hive.apache.org/
The Apache Hive ™ data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive.

I. Deployment Architecture

Server side

  • HiveServer2
  • Metastore

Client side

  • hive
  • beeline

II. Installation

Dependencies

1 JDK

2 Hadoop

$HADOOP_HOME

Directories

$ $HADOOP_HOME/bin/hadoop fs -mkdir /tmp
$ $HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse

Configuration

hive.metastore.warehouse.dir
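The warehouse location can be overridden in hive-site.xml; a minimal sketch (the value shown is simply the default path):

```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
```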

3 MySQL

Server
JDBC driver jar

Installation

All releases can be downloaded from:

http://archive.apache.org/dist/hive/

Install

$ tar -xzvf hive-x.y.z.tar.gz
$ cd hive-x.y.z
$ export HIVE_HOME=$(pwd)
$ export PATH=$HIVE_HOME/bin:$PATH

Initialization

Configuration file: $HIVE_HOME/conf/hive-site.xml

javax.jdo.option.ConnectionURL
javax.jdo.option.ConnectionDriverName
javax.jdo.option.ConnectionUserName
javax.jdo.option.ConnectionPassword
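A minimal hive-site.xml sketch wiring the metastore to MySQL; the hostname, database name, and credentials below are placeholders, not values from this article:

```xml
<configuration>
  <!-- placeholder host/db; createDatabaseIfNotExist lets MySQL create the schema DB -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://mysql-host:3306/hive_metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <!-- placeholder credentials -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive_password</value>
  </property>
</configuration>
```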

All configurable properties are listed in the template:

$ wc -l $HIVE_HOME/conf/hive-default.xml.template
5959

Initialize the metastore database (using mysql as the dbType here, matching the setup above)

$HIVE_HOME/bin/schematool -dbType mysql -initSchema

All initialization scripts:

# ls $HIVE_HOME/scripts/metastore/upgrade/
derby mssql mysql oracle postgres
# ls $HIVE_HOME/scripts/metastore/upgrade/mysql
hive-schema-0.10.0.mysql.sql
hive-schema-2.1.0.mysql.sql
hive-schema-2.3.0.mysql.sql
hive-txn-schema-2.1.0.mysql.sql
hive-txn-schema-2.3.0.mysql.sql
upgrade-0.10.0-to-0.11.0.mysql.sql
upgrade-2.1.0-to-2.2.0.mysql.sql
upgrade-2.2.0-to-2.3.0.mysql.sql

Metastore database schema

Version

VERSION

Metadata

DBS: databases
TBLS: tables
PARTITIONS: partitions
COLUMNS_V2: columns
SERDES: serializers/deserializers
FUNCS: functions
IDXS: indexes

Privileges

DB_PRIVS
TBL_COL_PRIVS

Statistics

TAB_COL_STATS

Starting the Metastore

Start the metastore process:

$ $HIVE_HOME/bin/hive --service metastore

Main class

org.apache.hadoop.hive.metastore.HiveMetaStore

Port configuration

hive.metastore.port : 9083

Configuration

hive.metastore.uris
javax.jdo.option.*
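Clients reach a remote metastore through hive.metastore.uris; a sketch with a placeholder hostname:

```xml
<!-- metastore-host is a placeholder; 9083 is the default metastore port -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host:9083</value>
</property>
```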

Starting HiveServer2

Start the hiveserver2 process (either command works):

$ $HIVE_HOME/bin/hiveserver2
$ $HIVE_HOME/bin/hive --service hiveserver2

Main class

org.apache.hive.service.server.HiveServer2

Port configuration

hive.server2.thrift.port : 10000

HA

Metastore
Configuration

hive.metastore.uris

HiveServer2
Configuration

hive.server2.support.dynamic.service.discovery
hive.server2.zookeeper.namespace
hive.zookeeper.quorum
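A sketch of the three HA properties, reusing the zkNode hostnames from the connection URL in this section; adjust to your own ZooKeeper quorum:

```xml
<property>
  <name>hive.server2.support.dynamic.service.discovery</name>
  <value>true</value>
</property>
<!-- ZK path under which each HiveServer2 instance registers itself -->
<property>
  <name>hive.server2.zookeeper.namespace</name>
  <value>hiveserver2</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zkNode1:2181,zkNode2:2181,zkNode3:2181</value>
</property>
```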

URL

jdbc:hive2://zkNode1:2181,zkNode2:2181,zkNode3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2

III. Clients

hive

Client command

$ $HIVE_HOME/bin/hive

Main class

org.apache.hadoop.hive.cli.CliDriver
run->executeDriver

Run SQL directly:

hive -e "$sql"
hive -f $file_path

beeline

Client command

$ $HIVE_HOME/bin/beeline -u jdbc:hive2://$HS2_HOST:$HS2_PORT

IV. Other Topics

Execution engine

Configuration

hive.execution.engine

Options

  • mr
  • spark
  • tez

External tables

create external table <table_name> ( ... );
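A fuller sketch with a hypothetical table name, columns, and location (none of these appear in the original article):

```sql
-- Hypothetical example: Hive manages only the metadata here;
-- DROP TABLE leaves the files under LOCATION untouched.
CREATE EXTERNAL TABLE access_log (
  ip  STRING,
  ts  TIMESTAMP,
  url STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/access_log';
```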

Differences between managed (internal) and external tables

location: an external table's data path is specified explicitly and may live outside the warehouse directory
drop table: dropping an external table removes only the metadata, keeping the data files

Use cases

  • Data safety
  • Convenient access to external data sources: HBase, Elasticsearch

Storage formats and compression formats

SERDE

  • Serialize/Deserialize

Storage formats

  • Row-oriented
    • textfile (plain text, csv, json, xml)
    • Use case: raw data ingestion
  • Column-oriented
    • orc
    • parquet
    • Use case: analytical queries
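The two roles can be combined: land raw text first, then copy into a columnar table for querying. Table names below are hypothetical:

```sql
-- Raw landing table (row-oriented text)
CREATE TABLE events_raw (line STRING) STORED AS TEXTFILE;

-- Columnar copy for queries (CTAS into ORC)
CREATE TABLE events_orc STORED AS ORC AS SELECT * FROM events_raw;
```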

Compression formats

  • lzo
  • snappy

SERDE classes for each format

  • csv: org.apache.hadoop.hive.serde2.OpenCSVSerde
  • json: org.apache.hive.hcatalog.data.JsonSerDe
  • xml: com.ibm.spss.hive.serde2.xml.XmlSerDe
  • parquet: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
  • orc: org.apache.hadoop.hive.ql.io.orc.OrcSerde
  • hbase: org.apache.hadoop.hive.hbase.HBaseSerDe
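A SERDE is attached at table-creation time with ROW FORMAT SERDE; a sketch using the CSV class above, with a hypothetical table:

```sql
-- Hypothetical table parsed by the OpenCSV SERDE
CREATE TABLE csv_demo (a STRING, b STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
STORED AS TEXTFILE;
```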

Data import

File import

Non-partitioned tables
From hive:

LOAD DATA LOCAL INPATH '${local_path}' INTO TABLE ${db}.${table};
LOAD DATA INPATH '${hdfs_path}' INTO TABLE ${db}.${table};

From hdfs:

hdfs dfs -put ${filepath} /user/hive/warehouse/${db}.db/${table}

Partitioned tables
From hive:

LOAD DATA LOCAL INPATH '${local_path}' INTO TABLE ${db}.${table} PARTITION (dt='${value}');

From hdfs:

hdfs dfs -mkdir -p /user/hive/warehouse/${db}.db/${table}/dt=${value}
hdfs dfs -put ${filepath} /user/hive/warehouse/${db}.db/${table}/dt=${value}
-- then, from a hive session, register the new partition:
msck repair table ${db}.${table};

Database import (common tools)

sqoop
spark-sql
datax
kettle
flume
logstash

Querying data

Single-table queries

Query process

SQL -> AST (Abstract Syntax Tree) -> Task (MapRedTask, FetchTask) -> QueryPlan (a set of Tasks) -> Job (YARN)

Core code

org.apache.hadoop.hive.ql.Driver.compile
org.apache.hadoop.hive.ql.parse.ParseDriver
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer
org.apache.hadoop.hive.ql.QueryPlan
org.apache.hadoop.hive.ql.Driver.execute
org.apache.hadoop.hive.ql.Driver.getResult

Multi-table queries

Join

  • map join (broadcast)
    • Scenario: large table joined with a small table
    • Config: hive.auto.convert.join
  • bucket map join
    • Scenario: large table joined with a large table
    • Config: hive.optimize.bucketmapjoin
    • Requires: clustered by
  • sorted merge bucket join
    • Config: hive.optimize.bucketmapjoin.sortedmerge
  • skew join
    • Scenario: data skew
    • Config: hive.optimize.skewjoin
  • left semi join
    • Scenario: in, exists
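LEFT SEMI JOIN expresses an IN/EXISTS filter as a join; both queries below return the same rows. The tables are hypothetical:

```sql
-- IN-subquery form
SELECT o.*
FROM orders o
WHERE o.user_id IN (SELECT user_id FROM vip_users);

-- Equivalent LEFT SEMI JOIN: right-side columns may only
-- appear in the ON clause, never in the SELECT list.
SELECT o.*
FROM orders o
LEFT SEMI JOIN vip_users v ON o.user_id = v.user_id;
```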

CBO

CBO: Cost-Based Optimizer
The main goal of a CBO is to generate efficient execution plans by examining the tables and conditions specified in the query, ultimately cutting down on query execution time and reducing resource utilization.

Explain plan

explain
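Prefixing any query with EXPLAIN prints its operator tree instead of running it; a sketch against a hypothetical table:

```sql
EXPLAIN
SELECT dt, count(*) FROM events GROUP BY dt;
```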

Query process (operator tree)

  • Map
    • TableScan
    • Filter Operator
    • Select Operator
    • Group By Operator
    • Reduce Output Operator
  • Reduce
    • Group By Operator
    • File Output Operator
  • Fetch Operator

Code

org.apache.hadoop.hive.ql.exec