Spark Operators: the saveAsNewAPIHadoopFile and saveAsNewAPIHadoopDataset Actions
1. saveAsNewAPIHadoopFile
1) def saveAsNewAPIHadoopFile[F <: OutputFormat[K, V]](path: String)(implicit fm: ClassTag[F]): Unit
2) def saveAsNewAPIHadoopFile(path: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: OutputFormat[_, _]], conf: Configuration = self.context.hadoopConfiguration): Unit
Usage is essentially the same as saveAsHadoopFile: it saves the RDD to HDFS, but through the new Hadoop API (org.apache.hadoop.mapreduce) OutputFormat classes.
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import SparkContext._
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
import org.apache.hadoop.io.Text
import org.apache.hadoop.io.IntWritable

// Build a small pair RDD and save it with the new-API TextOutputFormat (signature 2).
val rdd1 = sc.makeRDD(Array(("A",2),("A",1),("B",6),("B",3),("B",7)))
rdd1.saveAsNewAPIHadoopFile("/tmp/lxw1234/", classOf[Text], classOf[IntWritable],
  classOf[TextOutputFormat[Text,IntWritable]])
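Signature 1) can produce the same result with the OutputFormat given as a type parameter, the key and value classes being inferred from the RDD itself. A minimal sketch, assuming a spark-shell session (sc already defined); the output path /tmp/lxw1234_v1/ is made up for illustration, and the call relies on TextOutputFormat simply calling toString on keys and values:

import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat

// Signature 1): the OutputFormat is a type parameter; key/value classes come from the RDD.
val rdd2 = sc.makeRDD(Array(("A",2),("A",1),("B",6),("B",3),("B",7)))
rdd2.saveAsNewAPIHadoopFile[TextOutputFormat[String, Int]]("/tmp/lxw1234_v1/")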
2. saveAsNewAPIHadoopDataset: def saveAsNewAPIHadoopDataset(conf: Configuration): Unit
This function has the same effect as saveAsHadoopDataset, but through the new Hadoop API: it writes the RDD to whatever storage the configured OutputFormat targets (HBase in the example below).
HBase table creation statement:
create 'lxw1234',{NAME => 'f1',VERSIONS => 1},{NAME => 'f2',VERSIONS => 1},{NAME => 'f3',VERSIONS => 1}
package com.lxw1234.test

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import SparkContext._
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.client.Put

object Test {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setMaster("spark://lxw1234.com:7077").setAppName("lxw1234.com")
    val sc = new SparkContext(sparkConf)
    val rdd1 = sc.makeRDD(Array(("A",2),("B",6),("C",7)))

    // HBase connection settings (the key name must not contain stray whitespace)
    sc.hadoopConfiguration.set("hbase.zookeeper.quorum","zkNode1,zkNode2,zkNode3")
    sc.hadoopConfiguration.set("zookeeper.znode.parent","/hbase")
    sc.hadoopConfiguration.set(TableOutputFormat.OUTPUT_TABLE,"lxw1234")

    // The Job is used only as a carrier for the output configuration
    val job = Job.getInstance(sc.hadoopConfiguration)
    job.setOutputKeyClass(classOf[ImmutableBytesWritable])
    job.setOutputValueClass(classOf[Put])
    job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

    rdd1.map(x => {
      val put = new Put(Bytes.toBytes(x._1))  // row key
      // column family f1, qualifier c1; newer HBase versions use put.addColumn instead
      put.add(Bytes.toBytes("f1"), Bytes.toBytes("c1"), Bytes.toBytes(x._2))
      (new ImmutableBytesWritable, put)
    }).saveAsNewAPIHadoopDataset(job.getConfiguration)

    sc.stop()
  }
}
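To verify the write, the table can be scanned back through the read-side counterpart of this API. A minimal sketch under the same cluster assumptions as the program above (zookeeper quorum already set on sc.hadoopConfiguration); TableInputFormat.INPUT_TABLE is the read analogue of TableOutputFormat.OUTPUT_TABLE:

import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.util.Bytes

// Point the read side at the same table and pull it back as an RDD of (key, Result).
sc.hadoopConfiguration.set(TableInputFormat.INPUT_TABLE, "lxw1234")
val hbaseRDD = sc.newAPIHadoopRDD(sc.hadoopConfiguration,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])
// Decode the row key and the f1:c1 cell (a 4-byte int written by Bytes.toBytes above).
hbaseRDD.map { case (_, result) =>
  (Bytes.toString(result.getRow),
   Bytes.toInt(result.getValue(Bytes.toBytes("f1"), Bytes.toBytes("c1"))))
}.collect().foreach(println)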
Note: when saving to HBase, the HBase-related jars need to be added to SPARK_CLASSPATH at runtime.
See also: http://lxw1234.com/archives/2015/07/332.htm