Operating Hive with Spark SQL
This post covers the simplest approach: operating Hive directly through Spark SQL. The prerequisite is that hive-site.xml and the other Hive configuration files are already deployed on the Spark cluster.
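If hive-site.xml cannot be deployed ahead of time, the metastore address can also be passed programmatically when building the session. A minimal sketch, assuming a metastore at thrift://metastore-host:9083 (a placeholder address, not part of the original setup):

import org.apache.spark.sql.SparkSession

// Sketch: enable Hive support without a hive-site.xml on the classpath.
// "thrift://metastore-host:9083" is a hypothetical metastore URI; replace it with your own.
val spark = SparkSession
  .builder()
  .appName("hive without hive-site.xml")
  .config("hive.metastore.uris", "thrift://metastore-host:9083")
  .enableHiveSupport()
  .getOrCreate()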
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.slf4j.LoggerFactory

object SevsSpark4 {

  val logger = LoggerFactory.getLogger(SevsSpark4.getClass)

  def main(args: Array[String]): Unit = {
    // getProp is a project-specific helper that reads values from a properties file
    val sparkconf = new SparkConf().setAppName("sevs_spark4")
      .set("HADOOP_USER_NAME", getProp("hbase.hadoop.username"))
      .set("HADOOP_GROUP_NAME", getProp("hbase.hadoop.groupname"))
    // .setMaster("local")

    val spark = SparkSession
      .builder()
      .appName("sevs spark sql")
      .config(sparkconf)
      .enableHiveSupport()
      .getOrCreate()

    // For implicit conversions like converting RDDs to DataFrames
    import spark.implicits._

    // Example: read from Hive
    val df = spark.sql("select field1, field2 from table1")
    df.show(1)
    val count = df.count
    logger.error("row count = {} ################", count)

    // Example: write to Hive
    spark.sql("insert into table1 values ('1','201812')")
    logger.error("ending sql hive#################")
  }
}
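Besides raw SQL statements, a DataFrame can be written to Hive through the writer API. A minimal sketch, reusing the spark session above; table1_copy is a hypothetical target table:

// Sketch: build a small DataFrame and append it to a Hive table.
import spark.implicits._

val rows = Seq(("1", "201812"), ("2", "201901")).toDF("field1", "field2")
rows.write.mode("append").saveAsTable("table1_copy")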
The Maven dependencies are as follows:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.1.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-hive_2.11</artifactId>
    <version>2.1.0</version>
    <scope>provided</scope>
</dependency>
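For an sbt build, the equivalent would look like the sketch below (build.sbt); the provided scope keeps the Spark jars out of the application assembly, since the cluster already supplies them:

// build.sbt equivalent of the Maven dependencies above
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"  % "2.1.0" % "provided",
  "org.apache.spark" %% "spark-hive" % "2.1.0" % "provided"
)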