Spark operators: the key-value transformations join and cogroup
1. join
1) def join[W](other: RDD[(K, W)]): RDD[(K, (V, W))]
2) def join[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (V, W))]
3) def join[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (V, W))]
join is the equivalent of SQL's inner join: it returns only the records whose key K appears in both RDDs. join relates exactly two RDDs; to relate more than two, simply chain several joins, as sketched after the example below. The numPartitions parameter sets the number of partitions, and partitioner supplies a custom partitioning function.
var rdd1 = sc.makeRDD(Array(("A","1"),("B","2"),("C","3")),2)
var rdd2 = sc.makeRDD(Array(("A","a"),("C","c"),("D","d")),2)
scala> rdd1.join(rdd2).collect
res10: Array[(String, (String, String))] = Array((A,(1,a)), (C,(3,c)))
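Since join only relates two RDDs, a three-way association is just two chained joins. A minimal sketch reusing rdd1 and rdd2 above plus the rdd3 defined in Example 2 below (note how the second join nests the value type):
var rdd3 = sc.makeRDD(Array(("A","A"),("E","E")),2)
// Only key "A" exists in all three RDDs, so only it survives;
// the second join nests the value as (K, ((V, W1), W2)).
val joined = rdd1.join(rdd2).join(rdd3)
// joined.collect should yield Array((A,((1,a),A)))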
2. cogroup
(1) Taking one RDD as argument
1) def cogroup[W](other: RDD[(K, W)]): RDD[(K, (Iterable[V], Iterable[W]))]
2) def cogroup[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (Iterable[V], Iterable[W]))]
3) def cogroup[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (Iterable[V], Iterable[W]))]
(2) Taking two RDDs as arguments
1) def cogroup[W1, W2](other1: RDD[(K, W1)], other2: RDD[(K, W2)]): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]
2) def cogroup[W1, W2](other1: RDD[(K, W1)], other2: RDD[(K, W2)], numPartitions: Int): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]
3) def cogroup[W1, W2](other1: RDD[(K, W1)], other2: RDD[(K, W2)], partitioner: Partitioner): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]
(3) Taking three RDDs as arguments
1) def cogroup[W1, W2, W3](other1: RDD[(K, W1)], other2: RDD[(K, W2)], other3: RDD[(K, W3)]): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3]))]
2) def cogroup[W1, W2, W3](other1: RDD[(K, W1)], other2: RDD[(K, W2)], other3: RDD[(K, W3)], numPartitions: Int): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3]))]
3) def cogroup[W1, W2, W3](other1: RDD[(K, W1)], other2: RDD[(K, W2)], other3: RDD[(K, W3)], partitioner: Partitioner): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3]))]
cogroup is the equivalent of SQL's full outer join: it returns the records of both RDDs, pairing matches by key and leaving an empty collection on whichever side has no match (a full-outer-join-style flatten over cogroup is sketched after Example 1). The numPartitions parameter sets the number of partitions, and partitioner supplies a custom partitioning function.
Example 1: one RDD as argument
var rdd1 = sc.makeRDD(Array(("A","1"),("B","2"),("C","3")),2)
var rdd2 = sc.makeRDD(Array(("A","a"),("C","c"),("D","d")),2)
scala> var rdd3 = rdd1.cogroup(rdd2)
rdd3: org.apache.spark.rdd.RDD[(String, (Iterable[String], Iterable[String]))] = MapPartitionsRDD[12] at cogroup at <console>:25
scala> rdd3.partitions.size
res3: Int = 2
scala> rdd3.collect
res1: Array[(String, (Iterable[String], Iterable[String]))] = Array(
(B,(CompactBuffer(2),CompactBuffer())),
(D,(CompactBuffer(),CompactBuffer(d))),
(A,(CompactBuffer(1),CompactBuffer(a))),
(C,(CompactBuffer(3),CompactBuffer(c)))
)
scala> var rdd4 = rdd1.cogroup(rdd2,3)
rdd4: org.apache.spark.rdd.RDD[(String, (Iterable[String], Iterable[String]))] = MapPartitionsRDD[14] at cogroup at <console>:25
scala> rdd4.partitions.size
res5: Int = 3
scala> rdd4.collect
res6: Array[(String, (Iterable[String], Iterable[String]))] = Array(
(B,(CompactBuffer(2),CompactBuffer())),
(C,(CompactBuffer(3),CompactBuffer(c))),
(A,(CompactBuffer(1),CompactBuffer(a))),
(D,(CompactBuffer(),CompactBuffer(d))))
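Because cogroup keeps every key from both sides, a full-outer-join-shaped result can be flattened out of it. A minimal sketch building on rdd1 and rdd2 above (the name fullOuter and the Option encoding of misses are choices made here for illustration, not Spark API; recent Spark versions also provide a fullOuterJoin operator on pair RDDs):
val fullOuter = rdd1.cogroup(rdd2).flatMap { case (k, (vs, ws)) =>
  // Encode a missing side as None so unmatched keys still emit a row.
  val left:  Iterable[Option[String]] = if (vs.isEmpty) Iterable(None) else vs.map(Some(_))
  val right: Iterable[Option[String]] = if (ws.isEmpty) Iterable(None) else ws.map(Some(_))
  for (v <- left; w <- right) yield (k, (v, w))
}
// fullOuter.collect should contain (B,(Some(2),None)), (D,(None,Some(d))),
// (A,(Some(1),Some(a))) and (C,(Some(3),Some(c))).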
Example 2: two RDDs as arguments
var rdd1 = sc.makeRDD(Array(("A","1"),("B","2"),("C","3")),2)
var rdd2 = sc.makeRDD(Array(("A","a"),("C","c"),("D","d")),2)
var rdd3 = sc.makeRDD(Array(("A","A"),("E","E")),2)
scala> var rdd4 = rdd1.cogroup(rdd2,rdd3)
rdd4: org.apache.spark.rdd.RDD[(String, (Iterable[String], Iterable[String], Iterable[String]))] = MapPartitionsRDD[17] at cogroup at <console>:27
scala> rdd4.partitions.size
res7: Int = 2
scala> rdd4.collect
res9: Array[(String, (Iterable[String], Iterable[String], Iterable[String]))] = Array(
(B,(CompactBuffer(2),CompactBuffer(),CompactBuffer())),
(D,(CompactBuffer(),CompactBuffer(d),CompactBuffer())),
(A,(CompactBuffer(1),CompactBuffer(a),CompactBuffer(A))),
(C,(CompactBuffer(3),CompactBuffer(c),CompactBuffer())),
(E,(CompactBuffer(),CompactBuffer(),CompactBuffer(E))))
Example 3: three RDDs as arguments; the code follows the same pattern as Example 2 and is not repeated here.
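None of the examples above exercise the partitioner variants listed earlier; they behave like the numPartitions variants but give full control over key placement. A minimal sketch using Spark's built-in HashPartitioner (rdd5 is just an illustrative name, and the partition count 3 is arbitrary):
import org.apache.spark.HashPartitioner
// Same grouping result as before, but partition placement
// is decided by the supplied partitioner.
val rdd5 = rdd1.cogroup(rdd2, new HashPartitioner(3))
// rdd5.partitions.size should be 3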