[Spark][Python] A Small Spark Join Example
$ hdfs dfs -cat people.json
{"name":"Alice","pcode":"94304"}
{"name":"Brayden","age":30,"pcode":"94304"}
{"name":"Carla","age":19,"pcoe":"10036"}
{"name":"Diana","age":46}
{"name":"Etienne","pcode":"94104"}
$ hdfs dfs -cat pcodes.json
{"pcode":"10036","city":"New York","state":"NY"}
{"pcode":"94304","city":"Palo Alto","state":"CA"}
{"pcode":"94104","city":"San Francisco","state":"CA"}
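Note the typo in Carla's record: the key is "pcoe", not "pcode". A quick way to see its effect, sketched here with the stdlib json module and an inline copy of the file contents (no Spark required), is to parse each line and check which records actually carry a "pcode" key:

```python
import json

# Inline copy of the people.json lines shown above, for illustration only.
lines = [
    '{"name":"Alice","pcode":"94304"}',
    '{"name":"Brayden","age":30,"pcode":"94304"}',
    '{"name":"Carla","age":19,"pcoe":"10036"}',   # typo: "pcoe"
    '{"name":"Diana","age":46}',                  # no postal code at all
    '{"name":"Etienne","pcode":"94104"}',
]

people = [json.loads(line) for line in lines]

# Only three records actually have a "pcode" key.
with_pcode = [p["name"] for p in people if "pcode" in p]
print(with_pcode)  # ['Alice', 'Brayden', 'Etienne']
```

Because Carla's key is misspelled and Diana has no postal code, neither record can match on "pcode".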
from pyspark.sql import HiveContext  # sc is the SparkContext provided by the pyspark shell

sqlContext = HiveContext(sc)
peopleDF = sqlContext.read.json("people.json")
pcodesDF = sqlContext.read.json("pcodes.json")
mydf001 = peopleDF.join(pcodesDF, "pcode")
mydf001.limit(5).show()
+-----+----+-------+----+---------------+-------------+-----+
|pcode| age| name|pcoe|_corrupt_record| city|state|
+-----+----+-------+----+---------------+-------------+-----+
|94304|null| Alice|null| null| Palo Alto| CA|
|94304| 30|Brayden|null| null| Palo Alto| CA|
+-----+----+-------+----+---------------+-------------+-----+
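join() with a column name performs an inner join by default, so only people whose "pcode" value matches a row in pcodes survive: Carla (the "pcoe" typo) and Diana (no postal code) drop out. The semantics can be sketched in pure Python, using inline copies of the two datasets (this is not Spark code, just an illustration of which rows an inner join on "pcode" keeps):

```python
# Inline copies of the two files shown above, for illustration only.
people = [
    {"name": "Alice", "pcode": "94304"},
    {"name": "Brayden", "age": 30, "pcode": "94304"},
    {"name": "Carla", "age": 19, "pcoe": "10036"},  # typo: no "pcode" key
    {"name": "Diana", "age": 46},                   # no postal code at all
    {"name": "Etienne", "pcode": "94104"},
]
pcodes = {
    "10036": {"city": "New York", "state": "NY"},
    "94304": {"city": "Palo Alto", "state": "CA"},
    "94104": {"city": "San Francisco", "state": "CA"},
}

# Inner join: keep only people whose pcode has a matching row in pcodes,
# merging the city/state columns into the result.
joined = [
    {**p, **pcodes[p["pcode"]]}
    for p in people
    if p.get("pcode") in pcodes
]

for row in joined:
    print(row["name"], row["city"])
```

To keep the unmatched rows instead (Carla and Diana, with null city/state), Spark's equivalent would be an outer join, e.g. peopleDF.join(pcodesDF, "pcode", "left_outer").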