
006. Writing a custom Hive UDF in Eclipse



While doing log analysis with Hive (part of the Hadoop stack), some of the log processing was beyond what Hive's built-in functions could handle, so a UDF (user-defined function) was needed to extend them.

1 In Eclipse, create a new Java project hiveudf, then create a new class with package com.afan and name UDFLower.

2 Add two jars to the project's build path: hadoop-core-1.1.2.jar (from Hadoop 1.1.2) and hive-exec-0.9.0.jar (from Hive 0.9.0).

3 Write the UDF class (UDFLower.java):

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public class UDFLower extends UDF {

    public Text evaluate(final Text s) {
        if (null == s) {
            return null;
        }
        return new Text(s.toString().toLowerCase());
    }
}
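The evaluate method's null-check-then-lowercase logic can be sanity-checked without Hadoop on the classpath; the sketch below mirrors it with plain String values (UDFLowerCheck and its lower method are illustrative names, not part of the project above):

```java
public class UDFLowerCheck {

    // Mirrors UDFLower.evaluate: a null input stays null,
    // anything else is returned lowercased.
    static String lower(String s) {
        if (null == s) {
            return null;
        }
        return s.toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(lower("WHO"));  // who
        System.out.println(lower(null));   // null
    }
}
```

The real UDF wraps the result back into a Hadoop Text object; only that wrapping is omitted here.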

4 Compile the project and export it as the packaged file udf_hive.jar.

(The original post included six screenshots walking through the Eclipse JAR-export wizard, step 1 through step 6; they are omitted here.)

5 Copy udf_hive.jar to a folder on the configured Linux system, at the path /root/data/udf_hive.jar.

6 Open the Hive command line to test:

hive> add jar /root/data/udf_hive.jar;

Added udf_hive.jar to class path
Added resource: udf_hive.jar

Create the UDF function:
hive> create temporary function my_lower as 'UDFLower';

The string after `as` is the class's fully qualified name. For example, if the class has a package, as in cn.jiang.UDFLower.java, then write cn.jiang.UDFLower after `as`; if there is no package, the bare class name 'UDFLower' is enough.
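Hive loads the class named after `as` by reflection, which is why the fully qualified name is required whenever the class lives in a package. A minimal, Hive-free illustration of that lookup (QualifiedNameDemo is a hypothetical name; java.lang.String stands in for the UDF class):

```java
public class QualifiedNameDemo {
    public static void main(String[] args) throws Exception {
        // The fully qualified name resolves to the class.
        Class<?> ok = Class.forName("java.lang.String");
        System.out.println(ok.getName()); // java.lang.String

        // The bare class name alone does not resolve, unless the class
        // sits in the default (unnamed) package, as UDFLower does above.
        try {
            Class.forName("String");
            System.out.println("found");
        } catch (ClassNotFoundException e) {
            System.out.println("not found");
        }
    }
}
```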

Create the test data:
hive> create table dual (name string);

Import the data file test.txt, whose contents are:

WHO

AM

I

HELLO

hive> load data local inpath '/root/data/test.txt' into table dual;

hive> select name from dual;

Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201105150525_0003, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201105150525_0003
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201105150525_0003
2011-05-15 06:46:05,459 Stage-1 map = 0%,reduce = 0%
2011-05-15 06:46:10,905 Stage-1 map = 100%,reduce = 0%
2011-05-15 06:46:13,963 Stage-1 map = 100%,reduce = 100%
Ended Job = job_201105150525_0003
OK
WHO
AM
I
HELLO

Use the UDF:
hive> select my_lower(name) from dual;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201105150525_0002, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201105150525_0002
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201105150525_0002
2011-05-15 06:43:26,100 Stage-1 map = 0%,reduce = 0%
2011-05-15 06:43:34,364 Stage-1 map = 100%,reduce = 0%
2011-05-15 06:43:37,484 Stage-1 map = 100%,reduce = 100%
Ended Job = job_201105150525_0002
OK
who
am
i
hello

The test passed.

Reference article: http://landyer.iteye.com/blog/1070377


Reposted from: https://my.oschina.net/repine/blog/266187