Common log-analysis architectures: typical business scenarios and commands for statistical analysis of nginx logs on Linux

Tags: Hive


Hive SQL

Table types

Hive managed (internal) tables

CREATE TABLE [IF NOT EXISTS] table_name

When the table is dropped, both the metadata and the data are deleted.

Hive external tables

CREATE EXTERNAL TABLE [IF NOT EXISTS] table_name LOCATION hdfs_path
The LOCATION must be an HDFS directory path, not a file.

Dropping an external table only deletes the metadata in the metastore; the table data in HDFS is left untouched.

create external table demo1 (
    id int,
    name string,
    likes array<string>,
    address map<string,string>
)
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':'
location '/usr/';

Quick table creation

Create Table Like

# copy only the table structure
CREATE TABLE empty_key_value_store LIKE key_value_store;

Create Table As Select (CTAS)

# copy both the structure and the data
CREATE TABLE new_key_value_store
      AS
SELECT columnA, columnB FROM key_value_store;

Altering tables

ALTER TABLE name RENAME TO new_name
ALTER TABLE name ADD COLUMNS (col_spec[, col_spec ...])
ALTER TABLE name CHANGE column_name new_name new_type
ALTER TABLE name REPLACE COLUMNS (col_spec[, col_spec ...])

Note: Hive has no direct DROP COLUMN statement; to remove a column, use REPLACE COLUMNS and list only the columns you want to keep.

Viewing table information

desc table_name;
# show more detailed information
desc formatted table_name;

Static partitions

Partition information is stored in the metastore tables.

Creating a partitioned table

create table demo2
(
id int,
name string,
likes array<string>,
address map<string,string>
)
partitioned by(age int,sex string)
row format delimited 
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':';

Adding partitions

# (the table already exists; add partitions on top of it):
ALTER TABLE table_name ADD [IF NOT EXISTS] PARTITION partition_spec  [LOCATION 'location1'] partition_spec [LOCATION 'location2'] ...;

alter table demo3 add if not exists partition(month_id='201805',day_id='20180509') location '/user/tuoming/part/201805/20180509';

Dropping partitions

For a managed table, the corresponding metadata and data are deleted together.

ALTER TABLE table_name DROP partition_spec, partition_spec,...
ALTER TABLE day_hour_table DROP PARTITION (dt='2008-08-08', hour='09');

Loading data into a specified partition

LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]
LOAD DATA INPATH '/user/pv.txt' INTO TABLE day_hour_table PARTITION(dt='2008-08-08', hour='08');
LOAD DATA local INPATH '/user/hua/*' INTO TABLE day_hour partition(dt='2010-07-07');

When data is loaded into a table, no transformation is applied: LOAD simply copies the files to the location corresponding to the Hive table. On load, a directory is created under the table automatically.

Viewing partitions

show partitions table_name

Filtering by partition in queries

SELECT day_table.* FROM day_table WHERE day_table.dt>= '2008-08-08'; 

Partitioned tables exist to optimize queries, so filter on the partition columns whenever possible. A query that does not use the partition columns scans the whole table.
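To see why partition filters matter, here is a minimal Python model of partition pruning (hypothetical paths; not Hive's actual code): each partition value maps to its own HDFS directory, and a predicate on the partition column selects whole directories before any file is read.

```python
# Toy model of partition pruning: one directory per partition value; a filter
# on the partition column chooses directories without reading file contents.
partitions = {
    "dt=2008-08-07": ["/warehouse/day_table/dt=2008-08-07/part-00000"],
    "dt=2008-08-08": ["/warehouse/day_table/dt=2008-08-08/part-00000"],
    "dt=2008-08-09": ["/warehouse/day_table/dt=2008-08-09/part-00000"],
}

def prune(partitions, predicate):
    """Return only the files in partitions whose value passes the predicate."""
    selected = []
    for spec, files in partitions.items():
        value = spec.split("=", 1)[1]
        if predicate(value):
            selected.extend(files)
    return selected

# WHERE dt >= '2008-08-08' touches only 2 of the 3 directories
files = prune(partitions, lambda dt: dt >= "2008-08-08")
print(files)
```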

If partition data was placed into an external table's HDFS directories in advance and Hive cannot see it:

MSCK REPAIR TABLE tablename;

or add the partitions explicitly with ADD PARTITION.

Dynamic partitions

The main difference between static and dynamic partitions is that static partition values are specified by hand, while dynamic partition values are derived from the data. More precisely, static partition columns are fixed at compile time by what the user passes in; dynamic partition values are only determined when the SQL actually runs.

Dynamic partitioning requires a MapReduce job.

Enabling dynamic partitioning

set hive.exec.dynamic.partition=true;

Default: true

set hive.exec.dynamic.partition.mode=nonstrict;

Default: strict (at least one partition column must be static)

1. First create a source table

# source table for the raw data
create table t_original (
    id int,
    age int,
    sex string,
    name string,
    likes array<string>,
    address map<string,string>
)
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':';

The local data file contains:

2,13,female,小明2,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
3,12,male,小明3,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
4,13,female,小明4,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
5,12,male,小明5,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
6,13,female,小明6,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
7,12,male,小明7,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
8,13,female,小明8,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
9,12,female,小明9,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
10,13,male,小明10,lol-book-moive,zhejiang:hangzhou-shanghai:pudong

Load the data into the source table:

load data local inpath '/var/demo.txt' into table t_original;

Create the partitioned table:

create table t_dynamic (
    id int,
    name string,
    likes array<string>,
    address map<string,string>
)
partitioned by (age int, sex string)
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':';

Load data into the partitioned table (the dynamic partition columns go last in the SELECT):

from t_original
insert into t_dynamic partition(age,sex)
select id,name,likes,address,age,sex;
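A Python sketch of what the statement above does (illustrative data, not Hive internals): the trailing SELECT columns (age, sex) supply the partition values row by row, and each row is routed to the matching partition directory.

```python
# Model of dynamic partitioning: partition values come from the data itself,
# and rows are grouped into one output partition per distinct (age, sex).
rows = [
    (2, "xiaoming2", 13, "female"),
    (3, "xiaoming3", 12, "male"),
    (4, "xiaoming4", 13, "female"),
    (5, "xiaoming5", 12, "male"),
]

buckets = {}
for id_, name, age, sex in rows:
    # the partition values are taken from each row, not from the statement
    buckets.setdefault((age, sex), []).append((id_, name))

for (age, sex), data in sorted(buckets.items()):
    print(f"age={age}/sex={sex} -> {data}")
```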

Inserting data

Loading a local or HDFS file into a table

LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]

Table to table

FROM from_statement 
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] 
select_statement1 
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] 
select_statement2] 
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2] ...;
FROM psn
INSERT OVERWRITE TABLE psn10
SELECT id, name
INSERT INTO TABLE psn11
SELECT id, likes;

Table to a local directory

insert overwrite local directory '/root/result' 
select * from psn;

Bucketing

Enabling bucketing

(My Hive 3.1 no longer has the following parameter; older versions do.)

set hive.enforce.bucketing=true;

Default: false. When set to true, MR automatically sets the number of reduce tasks to match the number of buckets. (Users can also set the reduce task count themselves via mapred.reduce.tasks, but that is not recommended when bucketing.)
Note: the number of buckets (files) produced by one job equals the number of reduce tasks.

Creating a bucketed table

create table t_bucket (
    id int,
    name string,
    likes array<string>,
    address map<string,string>
)
clustered by (id) into 4 buckets
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':';

Insert data into the table

insert into t_bucket select id,name,likes,address from t_original;

You can inspect the bucket files in HDFS, or run desc formatted t_bucket to see the table info.
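Conceptually, Hive assigns a row to a bucket by hashing the clustering column and taking it modulo the bucket count; for small integer keys the hash is the value itself, so with 4 buckets a row goes to id % 4. A Python sketch of that rule (not Hive's hash implementation):

```python
# Bucket assignment rule: hash(clustering column) % number_of_buckets.
# Python's hash() of a small int is the int itself, mirroring integer keys.
NUM_BUCKETS = 4

def bucket_of(id_value, num_buckets=NUM_BUCKETS):
    """Return the bucket index (0-based) a row with this key lands in."""
    return hash(id_value) % num_buckets

# the ids 2..10 from the sample data spread across the 4 bucket files
assignments = {i: bucket_of(i) for i in range(2, 11)}
print(assignments)
```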

Sampling from a bucketed table

TABLESAMPLE syntax:

TABLESAMPLE(BUCKET x OUT OF y)

x: the bucket to start sampling from

y: must be a multiple or a factor of the table's total bucket count

With 32 buckets:

select * from table_name TABLESAMPLE(BUCKET 3 OUT OF 16 ON id);

Which buckets are read? Buckets 3 and 19.

Formula: number of buckets read = total buckets / y; starting from bucket x, take every y-th bucket.
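The selection rule above can be sketched in Python (a model of the semantics, not Hive's code): with x=3 and y=16 on a 32-bucket table, 32/16 = 2 buckets are read, starting at bucket 3 and stepping by y.

```python
# TABLESAMPLE(BUCKET x OUT OF y): read total/y buckets, starting at x,
# stepping by y each time.
def sampled_buckets(total_buckets, x, y):
    assert total_buckets % y == 0 or y % total_buckets == 0, \
        "y must be a factor or multiple of the total bucket count"
    count = max(total_buckets // y, 1)
    return [x + k * y for k in range(count)]

print(sampled_buckets(32, 3, 16))   # [3, 19]
```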

Hive regular expressions (RegexSerDe)

Suppose we have a log file like this:

192.168.57.4 - - [29/Feb/2016:18:14:35 +0800] "GET /bg-upper.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:35 +0800] "GET /bg-nav.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:35 +0800] "GET /asf-logo.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:35 +0800] "GET /bg-button.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:35 +0800] "GET /bg-middle.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET / HTTP/1.1" 200 11217
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET / HTTP/1.1" 200 11217

Create a table that matches the log format (the last two capture groups match the status code and response size):

# create the table
CREATE TABLE logtb1 (
    host STRING,
    identity STRING,
    t_user STRING,
    time1 STRING,
    request STRING,
    status STRING,
    size STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
    "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) \\[(.*)\\] \"(.*)\" (-|[0-9]*) (-|[0-9]*)"
)
STORED AS TEXTFILE;

Load the data into logtb1:

load data local inpath '/usr/local/log.txt'  into table logtb1;
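The same regular expression can be checked outside Hive. A Python sketch applying it to one of the sample log lines (Java's escaped \\[ becomes \[ in a Python raw string); each capture group feeds one column of the table:

```python
import re

# the RegexSerDe pattern from the table definition, in Python raw-string form
LOG_RE = re.compile(
    r'([^ ]*) ([^ ]*) ([^ ]*) \[(.*)\] "(.*)" (-|[0-9]*) (-|[0-9]*)'
)

line = '192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET / HTTP/1.1" 200 11217'
m = LOG_RE.match(line)
# one capture group per table column
host, identity, t_user, time1, request, status, size = m.groups()
print(host, request, status, size)
```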

LATERAL VIEW

LATERAL VIEW is used together with UDTFs (e.g. explode).

The UDTF first splits one row into multiple rows, and the resulting rows are then combined into a virtual table that supports an alias.

Main problem solved: in a plain SELECT, a query can contain only a single UDTF, no other columns, and no second UDTF. LATERAL VIEW lifts these restrictions.

Syntax:

LATERAL VIEW udtf(expression) tableAlias AS columnAlias (',' columnAlias)*

Table person (id int, name string, likes array<string>, address map<string,string>) with the data:

1,小明1,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
2,小明2,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
3,小明3,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
4,小明4,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
5,小明5,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
7,小明1,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
8,小明2,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
9,小明3,lol-book-moive,zhejiang:hangzhou-shanghai:pudong
10,小明4,lol-book-moive,zhejiang:hangzhou-shanghai:pudong

Count how many distinct hobbies and how many distinct cities there are:

select count(distinct(myCol1)), count(distinct(myCol2)) from person
LATERAL VIEW explode(likes) myTable1 AS myCol1
LATERAL VIEW explode(address) myTable2 AS myCol2, myCol3;
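What that query computes, modeled in Python with data mirroring the person table above: explode each likes array and each address map into rows, then count the distinct values.

```python
# Every sample row shares the same likes array and address map, so the
# distinct counts are 3 hobbies and 2 map keys.
rows = [
    {"likes": ["lol", "book", "moive"],
     "address": {"zhejiang": "hangzhou", "shanghai": "pudong"}},
] * 9

hobbies = {h for r in rows for h in r["likes"]}    # explode(likes)   -> myCol1
keys    = {k for r in rows for k in r["address"]}  # explode(address) -> myCol2 (map keys)
print(len(hobbies), len(keys))   # 3 2
```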

LATERAL VIEW OUTER

What if the array produced by the UDTF is empty?

Hive 0.12 added support for the outer keyword. By default, if the UDTF produces no output, the source row is dropped from the result.

With outer, the behavior is like a LEFT OUTER JOIN: the selected columns are still output, and the UDTF output columns are NULL.

select * from person lateral view explode(array()) test as t1;

This returns no rows at all.

Now add the outer keyword:

select * from person lateral view outer explode(array()) test as t1;

Output:

2       小明2   ["lol","book","moive"]  {"zhejiang":"hangzhou","shanghai":"pudong"}     NULL
3       小明3   ["lol","book","moive"]  {"zhejiang":"hangzhou","shanghai":"pudong"}     NULL
4       小明4   ["lol","book","moive"]  {"zhejiang":"hangzhou","shanghai":"pudong"}     NULL
5       小明5   ["lol","book","moive"]  {"zhejiang":"hangzhou","shanghai":"pudong"}     NULL
6       小明6   ["lol","book","moive"]  {"zhejiang":"hangzhou","shanghai":"pudong"}     NULL
7       小明7   ["lol","book","moive"]  {"zhejiang":"hangzhou","shanghai":"pudong"}     NULL
8       小明8   ["lol","book","moive"]  {"zhejiang":"hangzhou","shanghai":"pudong"}     NULL
9       小明9   ["lol","book","moive"]  {"zhejiang":"hangzhou","shanghai":"pudong"}     NULL
10      小明10  ["lol","book","moive"]  {"zhejiang":"hangzhou","shanghai":"pudong"}     NULL
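The difference between the two queries can be modeled in Python (a sketch of the semantics, not Hive's implementation): without outer, a row whose UDTF output is empty disappears; with outer, the row survives paired with NULL (None here).

```python
# Model of LATERAL VIEW [OUTER]: cross each source row with the rows its
# UDTF produces; OUTER substitutes a single NULL when the UDTF is empty.
def lateral_view(rows, udtf, outer=False):
    out = []
    for row in rows:
        produced = list(udtf(row))
        if not produced and outer:
            produced = [None]   # keep the row; the UDTF column becomes NULL
        for value in produced:
            out.append((row, value))
    return out

rows = [{"id": 2, "name": "xiaoming2"}]
empty_udtf = lambda row: []     # models explode(array())

print(lateral_view(rows, empty_udtf))               # [] -- the row is dropped
print(lateral_view(rows, empty_udtf, outer=True))   # row kept, paired with None
```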