
Differences Between HiveQL and SQL


1. Hive does not support the implicit comma-join syntax
In SQL, an inner join of two tables can be written as:
select * from dual a, dual b where a.key = b.key;
In Hive this must be written with an explicit JOIN:
select * from dual a join dual b on a.key = b.key;
rather than the traditional form:
SELECT t1.a1 AS c1, t2.b1 AS c2
FROM t1, t2
WHERE t1.a2 = t2.b2;

2. Semicolon characters
A semicolon terminates a statement in HiveQL just as in SQL, but HiveQL's parser is not as smart about semicolons appearing inside string literals. For example:
select concat(key, concat(';', key)) from dual;
fails to parse, and HiveQL reports:
FAILED: Parse Error: line 0:-1 mismatched input '<EOF>' expecting ) in function specification
The workaround is to escape the semicolon with its octal ASCII code, rewriting the statement as:
select concat(key, concat('\073', key)) from dual;
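As a quick sanity check on the escape above, octal 073 is indeed the ASCII code point of the semicolon (shown here in Python):

```python
# Octal 073 = decimal 59 = ';', which is why '\073' can stand in
# for a literal semicolon inside a HiveQL string literal.
semicolon = chr(0o73)
print(semicolon)  # ;
```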

3. IS [NOT] NULL
In SQL, null represents a missing value. Beware that in HiveQL, if a STRING column contains an empty string (length 0), IS NULL evaluates to false for that value.
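The distinction is the same one Python makes between None and an empty string; a minimal sketch of the semantics just described (the function name is an illustration, not a Hive API):

```python
# Hive's IS NULL semantics for STRING values, sketched in Python:
# only a true NULL (None here) satisfies IS NULL; an empty string does not.
def is_null(value):
    return value is None

print(is_null(None))  # True
print(is_null(""))    # False: an empty string is not NULL in Hive
```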

4. Hive does not support inserting data into an existing table or partition;
it only supports overwriting the whole table or partition, for example:

INSERT OVERWRITE TABLE t1
SELECT * FROM t2;

5. Hive does not support INSERT INTO ... VALUES (...), UPDATE, or DELETE
This means no complex locking mechanism is needed to read and write data.
INSERT INTO syntax, which appends data to a table or partition, is only available starting in version 0.8.

6. Hive can embed MapReduce programs to handle complex logic, for example:
FROM (
  MAP doctext USING 'python wc_mapper.py' AS (word, cnt)
  FROM docs
  CLUSTER BY word
) a
REDUCE word, cnt USING 'python wc_reduce.py';
-- doctext: the input column
-- word, cnt: the output of the map program
-- CLUSTER BY: hashes the rows on word and feeds them to the reduce program
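The source never shows wc_mapper.py itself; a minimal sketch of what such a streaming mapper might look like follows (the file and function names are assumptions). Hive streams each doctext value to stdin, one row per line, and the mapper emits tab-separated (word, count) pairs on stdout:

```python
import sys

# Hypothetical word-count mapper for the MAP ... USING clause above.
# Each input line is one doctext value; emit (word, 1) per word.
def map_line(line):
    return [(word, 1) for word in line.strip().split()]

if __name__ == "__main__":
    for line in sys.stdin:
        for word, cnt in map_line(line):
            print("%s\t%d" % (word, cnt))
```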

The map and reduce programs can also be used on their own, for example:
FROM (
  FROM session_table
  SELECT sessionid, tstamp, data
  DISTRIBUTE BY sessionid SORT BY tstamp
) a
REDUCE sessionid, tstamp, data USING 'session_reducer.sh';
-- DISTRIBUTE BY: assigns rows to the reduce programs
7. Hive can write transformed data directly into multiple tables, and also into partitions, HDFS, and local directories.
This avoids scanning the input table multiple times, for example:
FROM t1
INSERT OVERWRITE TABLE t2
SELECT t3.c2, count(1)
FROM t3
WHERE t3.c1 <= 20
GROUP BY t3.c2

INSERT OVERWRITE DIRECTORY '/output_dir'
SELECT t3.c2, avg(t3.c1)
FROM t3
WHERE t3.c1 > 20 AND t3.c1 <= 30
GROUP BY t3.c2

INSERT OVERWRITE LOCAL DIRECTORY '/home/dir'
SELECT t3.c2, sum(t3.c1)
FROM t3
WHERE t3.c1 > 30
GROUP BY t3.c2;
A worked example

Create a table:
CREATE TABLE u_data (
userid INT,
movieid INT,
rating INT,
unixtime STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
Load data into the table:
LOAD DATA LOCAL INPATH 'ml-data/u.data'
OVERWRITE INTO TABLE u_data;

Count the total number of rows:
SELECT COUNT(1) FROM u_data;

Now for some more complex analysis:
Create a file weekday_mapper.py that maps each record's timestamp to a day of the week:
import sys
import datetime

for line in sys.stdin:
    line = line.strip()
    userid, movieid, rating, unixtime = line.split('\t')
    # derive the ISO day of the week from the Unix timestamp
    weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
    print('\t'.join([userid, movieid, rating, str(weekday)]))

Using the mapper script:
-- create the table, splitting each row's fields on the delimiter
CREATE TABLE u_data_new (
userid INT,
movieid INT,
rating INT,
weekday INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';
-- make the Python script available to the job
ADD FILE weekday_mapper.py;

Transform the data, replacing each timestamp with its weekday:
INSERT OVERWRITE TABLE u_data_new
SELECT
TRANSFORM (userid, movieid, rating, unixtime)
USING ‘python weekday_mapper.py‘
AS (userid, movieid, rating, weekday)
FROM u_data;

SELECT weekday, COUNT(1)
FROM u_data_new
GROUP BY weekday;

Processing Apache weblog data
Parse each log line with a regular expression, loading the captured fields into the table:
add jar ../build/contrib/hive_contrib.jar;

CREATE TABLE apachelog (
host STRING,
identity STRING,
user STRING,
time STRING,
request STRING,
status STRING,
size STRING,
referer STRING,
agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
"input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\[[^\]]*\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?",
"output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s"
)
STORED AS TEXTFILE;
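The pattern can be sanity-checked with Python's re module (after removing the Java string escaping of the double quotes). The sample log line below is a typical combined-format line chosen for illustration; it is not from the source:

```python
import re

# The RegexSerDe pattern above, written as a raw Python string.
LOG_PATTERN = re.compile(
    r'([^ ]*) ([^ ]*) ([^ ]*) (-|\[[^\]]*\]) ([^ "]*|"[^"]*") '
    r'(-|[0-9]*) (-|[0-9]*)(?: ([^ "]*|"[^"]*") ([^ "]*|"[^"]*"))?'
)

# A hypothetical combined-format log line for testing the pattern.
sample = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
          '"GET /apache_pb.gif HTTP/1.0" 200 2326 '
          '"http://www.example.com/start.html" "Mozilla/4.08"')

m = LOG_PATTERN.match(sample)
print(m.group(1))  # 127.0.0.1
print(m.group(5))  # "GET /apache_pb.gif HTTP/1.0"
print(m.group(6))  # 200
```

Each capture group corresponds in order to a column of the apachelog table (host, identity, user, time, request, status, size, referer, agent).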
