Examples of adding a new column to a DataFrame in PySpark
Anyone familiar with pandas knows that adding a column to a DataFrame is easy: you simply assign to a new key, dict-style. PySpark works differently; after a bit of exploration, the approaches below all work.
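For comparison, here is a minimal pandas sketch of the dict-style assignment mentioned above (the small DataFrame and the derived column are purely illustrative):

import pandas as pd

pdf = pd.DataFrame({"name": ["Alice", "Jane", "Mary"], "age": [19, 20, 21]})
# In pandas, a new column is created simply by assigning to a new key
pdf["age_plus_one"] = pdf["age"] + 1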
In PySpark, first build a sample DataFrame to work with:

from pyspark import SparkConf
from pyspark.sql import SparkSession
from pyspark.sql import functions

spark = SparkSession.builder.config(conf=SparkConf()).getOrCreate()

data = [['Alice', 19, 'blue', '["Alice","blue"]'],
        ['Jane', 20, 'green', '["Jane","green"]'],
        ['Mary', 21, 'blue', '["Mary","blue"]']]
frame = spark.createDataFrame(data, schema=["name", "age", "eye_color", "detail"])
frame.cache()
frame.show()
+-----+---+---------+--------------------+
| name|age|eye_color|              detail|
+-----+---+---------+--------------------+
|Alice| 19|     blue|     ["Alice","bl...|
| Jane| 20|    green|     ["Jane","gre...|
| Mary| 21|     blue|     ["Mary","blue"]|
+-----+---+---------+--------------------+
1. Adding a constant column
frame2 = frame.withColumn("constant", functions.lit(10))
frame2.show()
+-----+---+---------+--------------------+--------+
| name|age|eye_color|              detail|constant|
+-----+---+---------+--------------------+--------+
|Alice| 19|     blue|     ["Alice","bl...|      10|
| Jane| 20|    green|     ["Jane","gre...|      10|
| Mary| 21|     blue|     ["Mary","blue"]|      10|
+-----+---+---------+--------------------+--------+
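functions.lit is not limited to numbers; as a quick sketch (the column name "source" and its value are made up for illustration), a string constant is added the same way:

# Assuming the same frame as above; "source" is an illustrative column name
frame2b = frame.withColumn("source", functions.lit("manual_entry"))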
2. Simple calculations based on an existing column
2.1 Using withColumn
frame3_1 = frame.withColumn("name_length", functions.length(frame.name))
frame3_1.show()
+-----+---+---------+--------------------+-----------+
| name|age|eye_color|              detail|name_length|
+-----+---+---------+--------------------+-----------+
|Alice| 19|     blue|     ["Alice","bl...|          5|
| Jane| 20|    green|     ["Jane","gre...|          4|
| Mary| 21|     blue|     ["Mary","blue"]|          4|
+-----+---+---------+--------------------+-----------+
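withColumn calls can also be chained when several derived columns are needed at once; a minimal sketch, again assuming the frame above (name_upper is an illustrative column name):

frame3_1b = (frame
             .withColumn("name_length", functions.length(frame.name))
             .withColumn("name_upper", functions.upper(frame.name)))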
2.2 Using select
frame3_2 = frame.select(["name", functions.length(frame.name).alias("name_length")])
frame3_2.show()
+-----+-----------+
| name|name_length|
+-----+-----------+
|Alice|          5|
| Jane|          4|
| Mary|          4|
+-----+-----------+
2.3 Using selectExpr
frame3_3 = frame.selectExpr(["name", "length(name) as name_length"])
frame3_3.show()
+-----+-----------+
| name|name_length|
+-----+-----------+
|Alice|          5|
| Jane|          4|
| Mary|          4|
+-----+-----------+
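The three variants above are interchangeable. For instance, the SQL expression string used with selectExpr can also be handed to withColumn through functions.expr; a minimal sketch assuming the same frame:

# Equivalent to 2.1 and 2.3: a SQL expression string inside withColumn
frame3_4 = frame.withColumn("name_length", functions.expr("length(name)"))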
3. Custom calculations based on an existing column
Sometimes you want to apply a specific operation to a column but no built-in function covers it. In that case, build one yourself as a user-defined function (UDF):
import json

# Option 1: wrap an inline lambda as a UDF
frame4 = frame.withColumn("detail_length",
                          functions.UserDefinedFunction(lambda obj: len(json.loads(obj)))(frame.detail))

# Option 2: the same thing with a named function
def length_detail(obj):
    return len(json.loads(obj))

frame4 = frame.withColumn("detail_length",
                          functions.UserDefinedFunction(length_detail)(frame.detail))
frame4.show()
+-----+---+---------+--------------------+-------------+
| name|age|eye_color|              detail|detail_length|
+-----+---+---------+--------------------+-------------+
|Alice| 19|     blue|     ["Alice","bl...|            3|
| Jane| 20|    green|     ["Jane","gre...|            3|
| Mary| 21|     blue|     ["Mary","blue"]|            3|
+-----+---+---------+--------------------+-------------+
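Note that UserDefinedFunction defaults to a StringType result; in current PySpark the more common way to register the same logic is functions.udf with an explicit return type. A minimal sketch under that assumption:

import json
from pyspark.sql.types import IntegerType

# Same logic as above, registered via functions.udf with an explicit integer return type
length_detail_udf = functions.udf(lambda obj: len(json.loads(obj)), IntegerType())
frame4b = frame.withColumn("detail_length", length_detail_udf(frame.detail))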
This concludes the examples of adding a new column to a PySpark DataFrame. For more on working with PySpark DataFrame columns, see our earlier articles or the related articles below, and we hope you will keep supporting us!