
Compiling a Spark assembly with Hive support

The stock Spark assembly jar does not include Hive; to use Spark HQL, the Hive-related dependencies must be bundled into the spark assembly jar. Packaging steps:

Assuming Maven is already installed:

1. Add environment variables. If these JVM settings are too small, the build may run out of memory (OOM), so increase them:

export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

2. cd into the Spark source directory and run:

mvn -Pyarn -Dhadoop.version=2.5.0-cdh5.3.0 -Dscala.version=2.10.4 -Phive -Phive-thriftserver -DskipTests clean package

(In fact, for the CDH version it seems that mvn -Pyarn -Phive -Phive-thriftserver -DskipTests clean package is sufficient.)

Note: set hadoop.version and the Scala version to match the versions in your environment.

After a long build (mine took two and a half hours), it finally succeeded. Under assembly/target/scala-2.10 you will find spark-assembly-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar. Open it with an archive tool (WinRAR, etc.) and check whether the Hive JDBC packages are inside; if they are, the build succeeded.
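Alternatively, the check can be done from the command line with the JDK's jar tool; a minimal sketch (the grep pattern is an assumption about where the Hive JDBC classes live in the jar):

jar tf assembly/target/scala-2.10/spark-assembly-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar | grep hive/jdbc

If this prints class entries, the Hive JDBC classes made it into the assembly.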

The source tree also contains make-distribution.sh, which can be used to build a binary (tgz) distribution:

./make-distribution.sh --name custom-spark --skip-java-test --tgz -Pyarn -Dhadoop.version=2.5.0-cdh5.3.0 -Dscala.version=2.10.4 -Phive -Phive-thriftserver
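Once the Hive-enabled assembly is deployed, a quick way to confirm that HQL works is a short session in spark-shell. A minimal sketch, assuming the Spark 1.2 API (the table name is a placeholder for illustration):

// in spark-shell, which already provides the SparkContext as sc
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
// lists the tables Hive knows about; fails with ClassNotFoundException
// if the assembly was built without the hive profile
hiveContext.sql("SHOW TABLES").collect().foreach(println)
// hypothetical table name, just to show an HQL query
hiveContext.sql("SELECT COUNT(*) FROM my_table").collect().foreach(println)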
If you want IDEA to compile your Spark project (version 1.0.0 and above), follow these steps:

1. Clone the Spark project.
2. Compile it once with mvn, because you need the generated Avro source files in the flume-sink module (see the example command after this list).
3. Open spark/pom.xml with IDEA.
4. Check the profiles you need in the "Maven Projects" window.
5. Modify the source path of the flume-sink module: mark "target/scala-2.10/src_managed/main/compiled_avro" as a source path.
6. If you checked the yarn profile, remove the module "spark-yarn_2.10" and add "spark/yarn/common/src/main/scala" and "spark/yarn/stable/src/main/scala" as source paths of the module "yarn-parent_2.10".
7. Run "Build -> Rebuild Project" in IDEA.

PS: you should run "Rebuild Project" again after every mvn or sbt command you run on the Spark project.
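For step 2, the same build command shown earlier works; assuming you plan to enable the same profiles in IDEA, for example:

mvn -Pyarn -Phive -Phive-thriftserver -DskipTests clean package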