Hive: File Formats and Compression Codecs Supported by Hive (1.2.1)
阿新 · Published 2019-02-20
Overview
As long as the file format and compression codec are configured correctly (for example Textfile+Gzip, SequenceFile+Snappy, and so on), Hive can read and parse the data as expected and expose it through SQL.
The SequenceFile format is designed to compress its own contents. Compressing a SequenceFile therefore does not mean generating the file first and compressing it afterwards; instead, the record content is compressed while the SequenceFile is being written, and the result is still exposed as a regular SequenceFile.
RCFile, ORCFile, Parquet, and Avro handle compression the same way as SequenceFile.
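When writing SequenceFile output, Hadoop additionally distinguishes record-level from block-level compression of that content. A minimal sketch of enabling block compression, using the same legacy mapred.* property names as the examples in this article (the compression.type setting itself is an addition not shown in the original):

--Compress the query output
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
--For SequenceFile output, compress blocks of records rather than single records
--(valid values: NONE, RECORD, BLOCK)
SET mapred.output.compression.type=BLOCK;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;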
File formats
- Textfile
- SequenceFile
- RCFile
- ORCFile
- Parquet
- Avro
Compression codecs
No. | Compression format | Algorithm | Multiple files | Splittable | Tool | Extension |
---|---|---|---|---|---|---|
1 | DEFLATE | DEFLATE | No | No | None | .deflate |
2 | Gzip | DEFLATE | No | No | gzip | .gz |
3 | bzip2 | bzip2 | No | Yes | bzip2 | .bz2 |
4 | LZO | LZO | No | No | lzop | .lzo |
5 | LZ4 | ??? | ??? | ??? | ??? | ??? |
6 | Snappy | ??? | ??? | ??? | ??? | ??? |
7 | ZLIB | ??? | ??? | ??? | ??? | ??? |
8 | ZIP | DEFLATE | Yes | Yes, at file boundaries | zip | .zip |
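For reference, these are the codec classes that the examples below assign to mapred.output.compression.codec; the LZO codec is not part of stock Hadoop and ships in the separate hadoop-lzo package:

--DEFLATE (Hadoop's default codec)
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec;
--Gzip
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
--bzip2
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.BZip2Codec;
--LZO (from hadoop-lzo)
SET mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec;
--LZ4
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.Lz4Codec;
--Snappy
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;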
TEXTFILE
Text file, uncompressed
--Create a table stored as a plain text file:
CREATE EXTERNAL TABLE student_text (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
--Load data:
INSERT OVERWRITE TABLE student_text SELECT * FROM student;
The generated data file can be seen to be uncompressed plain text:

hdfs dfs -cat /user/hive/warehouse/student_text/000000_0
1001810081,cheyo
1001810082,pku
1001810083,rocky
1001810084,stephen
2002820081,sql
2002820082,hello
2002820083,hijj
3001810081,hhhhhhh
3001810082,abbbbbb
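All of the INSERT ... SELECT statements below read from a pre-existing source table named student whose definition is not shown in the article. A minimal sketch of what it is assumed to look like (the schema matches the rows above; the load path is purely illustrative):

--Hypothetical source table; the original article does not include its DDL
CREATE TABLE student (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
--Illustrative path to a local file of id,name rows
LOAD DATA LOCAL INPATH '/tmp/student.txt' INTO TABLE student;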
Text file, DEFLATE compression

--Create a table stored as a text file:
CREATE TABLE student_text_def (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
--Set the compression type to DEFLATE:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec;
--Load data:
INSERT OVERWRITE TABLE student_text_def SELECT * FROM student;
--Query the data:
SELECT * FROM student_text_def;
Looking at the data files, the output consists of several .deflate files:

hdfs dfs -ls /user/hive/warehouse/student_text_def/
-rw-r--r-- 2015-09-16 12:48 /user/hive/warehouse/student_text_def/000000_0.deflate
-rw-r--r-- 2015-09-16 12:48 /user/hive/warehouse/student_text_def/000001_0.deflate
-rw-r--r-- 2015-09-16 12:48 /user/hive/warehouse/student_text_def/000002_0.deflate
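To inspect such compressed output without downloading and decompressing the files by hand, the dfs command available inside the Hive CLI can be used; -text decompresses files whose extension matches a registered codec. A small sketch (the same approach works for the .gz and .bz2 output below):

--Run from the Hive CLI; prints the decompressed records
dfs -text /user/hive/warehouse/student_text_def/000000_0.deflate;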
Text file, Gzip compression

--Create a table stored as a text file:
CREATE TABLE student_text_gzip (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
--Set the compression type to Gzip:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
--Load data:
INSERT OVERWRITE TABLE student_text_gzip SELECT * FROM student;
--Query the data:
SELECT * FROM student_text_gzip;
Looking at the data files, the output consists of several .gz files. Decompressing a .gz file reveals the plain text:

hdfs dfs -ls /user/hive/warehouse/student_text_gzip/
-rw-r--r-- 2015-09-15 10:03 /user/hive/warehouse/student_text_gzip/000000_0.gz
-rw-r--r-- 2015-09-15 10:03 /user/hive/warehouse/student_text_gzip/000001_0.gz
-rw-r--r-- 2015-09-15 10:03 /user/hive/warehouse/student_text_gzip/000002_0.gz
Text file, Bzip2 compression

--Create a table stored as a text file:
CREATE TABLE student_text_bzip2 (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
--Set the compression type to Bzip2:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.BZip2Codec;
--Load data:
INSERT OVERWRITE TABLE student_text_bzip2 SELECT * FROM student;
--Query the data:
SELECT * FROM student_text_bzip2;
Looking at the data files, the output consists of several .bz2 files. Decompressing a .bz2 file reveals the plain text:

hdfs dfs -ls /user/hive/warehouse/student_text_bzip2
-rw-r--r-- 2015-09-15 10:09 /user/hive/warehouse/student_text_bzip2/000000_0.bz2
-rw-r--r-- 2015-09-15 10:09 /user/hive/warehouse/student_text_bzip2/000001_0.bz2
-rw-r--r-- 2015-09-15 10:09 /user/hive/warehouse/student_text_bzip2/000002_0.bz2
Text file, LZO compression

--Create the table
CREATE TABLE student_text_lzo (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
--Set LZO compression
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec;
--Load data
INSERT OVERWRITE TABLE student_text_lzo SELECT * FROM student;
--Query the data
SELECT * FROM student_text_lzo;
Looking at the data files, the output consists of several .lzo files. Decompressing an .lzo file reveals the plain text.
(Not tested here; the lzop library must be installed first.)
Text file, LZ4 compression

--Create the table
CREATE TABLE student_text_lz4 (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
--Set LZ4 compression
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.Lz4Codec;
--Load data
INSERT OVERWRITE TABLE student_text_lz4 SELECT * FROM student;
Looking at the data files, the output consists of several .lz4 files. Viewing an .lz4 file with cat shows compressed (binary) content.

hdfs dfs -ls /user/hive/warehouse/student_text_lz4
-rw-r--r-- 2015-09-16 12:06 /user/hive/warehouse/student_text_lz4/000000_0.lz4
-rw-r--r-- 2015-09-16 12:06 /user/hive/warehouse/student_text_lz4/000001_0.lz4
-rw-r--r-- 2015-09-16 12:06 /user/hive/warehouse/student_text_lz4/000002_0.lz4
Text file, Snappy compression

--Create the table
CREATE TABLE student_text_snappy (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
--Set Snappy compression
SET hive.exec.compress.output=true;
SET mapred.compress.map.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;
--Load data
INSERT OVERWRITE TABLE student_text_snappy SELECT * FROM student;
--Query the data
SELECT * FROM student_text_snappy;
Looking at the data files, the output consists of several .snappy files. Viewing a .snappy file with cat shows compressed (binary) content:

hdfs dfs -ls /user/hive/warehouse/student_text_snappy
Found 3 items
-rw-r--r-- 2015-09-15 16:42 /user/hive/warehouse/student_text_snappy/000000_0.snappy
-rw-r--r-- 2015-09-15 16:42 /user/hive/warehouse/student_text_snappy/000001_0.snappy
-rw-r--r-- 2015-09-15 16:42 /user/hive/warehouse/student_text_snappy/000002_0.snappy
SEQUENCEFILE
Sequence file, DEFLATE compression

--Create a table stored as a SequenceFile:
CREATE TABLE student_seq_def (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS SEQUENCEFILE;
--Set the compression type to DEFLATE:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec;
--Load data:
INSERT OVERWRITE TABLE student_seq_def SELECT * FROM student;
--Query the data:
SELECT * FROM student_seq_def;
Looking at the output, the data file is a single binary file that is not directly readable:

hdfs dfs -ls /user/hive/warehouse/student_seq_def/
-rw-r--r-- /user/hive/warehouse/student_seq_def/000000_0
Sequence file, Gzip compression

--Create a table stored as a SequenceFile:
CREATE TABLE student_seq_gzip (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS SEQUENCEFILE;
--Set the compression type to Gzip:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
--Load data:
INSERT OVERWRITE TABLE student_seq_gzip SELECT * FROM student;
--Query the data:
SELECT * FROM student_seq_gzip;
Looking at the output, the data file is a single binary file and cannot be decompressed with gzip:

hdfs dfs -ls /user/hive/warehouse/student_seq_gzip/
-rw-r--r-- /user/hive/warehouse/student_seq_gzip/000000_0
RCFILE
RCFile, Gzip compression

--Create a table stored as an RCFile:
CREATE TABLE student_rcfile_gzip (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS RCFILE;
--Set the compression type to Gzip:
SET hive.exec.compress.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
--Load data:
INSERT OVERWRITE TABLE student_rcfile_gzip SELECT id,name FROM student;
--Query the data:
SELECT * FROM student_rcfile_gzip;
ORCFile
ORCFile has its own table properties for controlling compression; the generic Hive/MapReduce compression settings shown above are normally not used for it.
ORCFile, ZLIB compression

--Create the table
CREATE TABLE student_orcfile_zlib (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS ORCFILE
TBLPROPERTIES ("orc.compress"="ZLIB");
--Load data
INSERT OVERWRITE TABLE student_orcfile_zlib SELECT id,name FROM student;
--Query the data
SELECT * FROM student_orcfile_zlib;
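To double-check which codec an ORC table is configured with, its table properties can be inspected, for example:

--orc.compress is listed among the table properties / table parameters
SHOW TBLPROPERTIES student_orcfile_zlib;
DESCRIBE FORMATTED student_orcfile_zlib;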
ORCFile, Snappy compression

--Create the table
CREATE TABLE student_orcfile_snappy2 (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS ORCFILE
TBLPROPERTIES ("orc.compress"="SNAPPY");
--Load data
INSERT OVERWRITE TABLE student_orcfile_snappy2 SELECT id,name FROM student;
--Query the data
SELECT * FROM student_orcfile_snappy2;
The following approach is normally not used. The files it produces differ from the SNAPPY result above, most likely because ORC ignores the generic MapReduce output-compression settings and compresses with its own codec instead; the original author left the exact cause open for further investigation.

--Create the table
CREATE TABLE student_orcfile_snappy (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS ORCFILE;
--Set compression
SET hive.exec.compress.output=true;
SET mapred.compress.map.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;
--Load data
INSERT OVERWRITE TABLE student_orcfile_snappy SELECT id,name FROM student;
--Query the data
SELECT * FROM student_orcfile_snappy;
Parquet
Parquet, Snappy compression

--Create the table
CREATE TABLE student_parquet_snappy (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS PARQUET;
--Set compression
SET hive.exec.compress.output=true;
SET mapred.compress.map.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;
--Load data
INSERT OVERWRITE TABLE student_parquet_snappy SELECT id,name FROM student;
--Query the data
SELECT * FROM student_parquet_snappy;
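The mapred.* settings above drive generic MapReduce output compression; Parquet, like ORC, can also be told to compress through its own property. A hedged alternative sketch using the parquet.compression table property (the table name student_parquet_snappy2 is illustrative, and whether the property is honored as a TBLPROPERTY depends on the Hive/Parquet version in use):

--Alternative: let the Parquet writer handle compression itself
CREATE TABLE student_parquet_snappy2 (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS PARQUET
TBLPROPERTIES ("parquet.compression"="SNAPPY");
--Load data
INSERT OVERWRITE TABLE student_parquet_snappy2 SELECT id,name FROM student;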
Avro
Avro, Snappy compression

--Create the table
CREATE TABLE student_avro_snappy (id STRING, name STRING)
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
STORED AS AVRO;
--Set compression
SET hive.exec.compress.output=true;
SET mapred.compress.map.output=true;
SET mapred.output.compress=true;
SET mapred.output.compression=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;
--Load data
INSERT OVERWRITE TABLE student_avro_snappy SELECT id,name FROM student;
--Query the data
SELECT * FROM student_avro_snappy;
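Avro container files likewise carry their own codec. A sketch assuming the avro.output.codec property understood by Hive's Avro output format (typical values are snappy and deflate):

--Enable compressed output and pick the Avro container codec
SET hive.exec.compress.output=true;
SET avro.output.codec=snappy;
--Rewrite the table contents with the Avro codec applied
INSERT OVERWRITE TABLE student_avro_snappy SELECT id,name FROM student;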