Connecting SparkSQL to Hive
阿新 • Published: 2020-08-20
1. Copy $HIVE_HOME/conf/hive-site.xml into $SPARK_HOME/conf/ so Spark can find the Hive metastore configuration:
cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf
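For context, the part of hive-site.xml that matters here is the metastore connection. A minimal sketch, assuming a MySQL-backed metastore (the host, database name, and credentials below are placeholders you must adapt):

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.104.94:3306/hive_metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>your_password</value>
  </property>
</configuration>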
2. Simply start spark-shell and it will connect to Hive automatically:
./spark-shell --master local[2] --jars /usr/local/jar/mysql-connector-java-5.1.47.jar   # --jars adds the MySQL JDBC driver needed to reach the metastore
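Once the shell is up, the built-in spark session is already Hive-enabled. A quick sanity check (the output depends on what exists in your Hive warehouse):

scala> spark.sql("show databases").show()
scala> spark.sql("show tables").show()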
3. Likewise, starting spark-sql connects automatically. Note that here the MySQL driver is passed both with --jars and with --driver-class-path, since the metastore client runs inside the driver JVM and needs the driver on its classpath:
./spark-sql --master local[2] --jars /usr/local/jar/mysql-connector-java-5.1.47.jar --driver-class-path /usr/local/jar/mysql-connector-java-5.1.47.jar
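At the spark-sql prompt you type HiveQL directly; for example (the table name emp is hypothetical):

spark-sql> show tables;
spark-sql> select * from emp limit 10;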
4. We can also start a thriftserver that keeps running 24/7:
cd $SPARK_HOME/sbin
./start-thriftserver.sh --master local --jars /usr/local/jar/mysql-connector-java-5.1.47.jar   # listens on port 10000 by default
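If port 10000 is already taken, Spark supports overriding it through a Hive conf when starting the server (10001 below is just an example value):

./start-thriftserver.sh --master local --jars /usr/local/jar/mysql-connector-java-5.1.47.jar --hiveconf hive.server2.thrift.port=10001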
5. Connect with the bundled beeline client:
cd $SPARK_HOME/bin
./beeline -u jdbc:hive2://192.168.104.94:10000
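Once connected, beeline sends plain SQL to the thriftserver, e.g.:

0: jdbc:hive2://192.168.104.94:10000> show tables;
0: jdbc:hive2://192.168.104.94:10000> select count(*) from emp;   -- emp is a hypothetical table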
6. You can also connect from code through the Hive JDBC driver:
package com.imooc.bigdata.chapter06

import java.sql.{Connection, DriverManager, PreparedStatement, ResultSet}

object JDBCClientApp {

  def main(args: Array[String]): Unit = {
    // Load the Hive JDBC driver
    Class.forName("org.apache.hive.jdbc.HiveDriver")

    val conn: Connection = DriverManager.getConnection("jdbc:hive2://192.168.104.94:10000")
    val pstmt: PreparedStatement = conn.prepareStatement("show tables")
    val rs: ResultSet = pstmt.executeQuery()

    // Spark's thriftserver typically returns (database, tableName, isTemporary) for "show tables"
    while (rs.next()) {
      println(rs.getObject(1) + " : " + rs.getObject(2))
    }

    // Release JDBC resources
    rs.close()
    pstmt.close()
    conn.close()
  }
}
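For this to compile and run, the Hive JDBC client must be on the project classpath. A minimal sketch of the sbt dependency, assuming a Hive 1.2.x cluster (the version is only an example; match it to your installation):

// build.sbt — 1.2.1 is a placeholder version, not a requirement
libraryDependencies += "org.apache.hive" % "hive-jdbc" % "1.2.1"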