combine text from multiple rows in pyspark
Question

I created a PySpark dataframe using the following code:
testlist = [
{"category":"A","name":"A1"},
{"category":"A","name":"A2"},
{"category":"B","name":"B1"},
{"category":"B","name":"B2"}
]
spark_df = spark.createDataFrame(testlist)
Result:
category name
A A1
A A2
B B1
B B2
I would like it to appear as:
category name
A A1, A2
B B1, B2
I tried the following code, which does not work:
spark_df.groupby('category').agg('name', lambda x:x + ', ')
Can anyone help identify what I am doing wrong and the best way to make this happen?
Answer
One option is to use pyspark.sql.functions.collect_list() as the aggregate function:
from pyspark.sql.functions import collect_list
grouped_df = spark_df.groupby('category').agg(collect_list('name').alias("name"))
This will collect the values for name into a list, and the resulting output will look like:
grouped_df.show()
#+---------+---------+
#|category |name |
#+---------+---------+
#|A |[A1, A2] |
#|B |[B1, B2] |
#+---------+---------+
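Note (not part of the original answer): collect_list does not guarantee the order of the collected elements. If a deterministic order matters, one option is to sort each collected list with pyspark.sql.functions.sort_array, as in this sketch:

from pyspark.sql.functions import sort_array

# sort each collected list so the element order is deterministic
grouped_df.withColumn("name", sort_array("name")).show()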
Update 2019-06-10: If you want your output as a concatenated string, you can use pyspark.sql.functions.concat_ws to concatenate the values of the collected list, which is better than using a udf:
from pyspark.sql.functions import concat_ws
grouped_df.withColumn("name", concat_ws(", ", "name")).show()
#+---------+-------+
#|category |name |
#+---------+-------+
#|A |A1, A2 |
#|B |B1, B2 |
#+---------+-------+
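As a side note (an assumption on my part, not spelled out in the original answer), the grouping and the string concatenation can also be done in a single aggregation by wrapping collect_list in concat_ws:

from pyspark.sql.functions import collect_list, concat_ws

# collect the names per category and join them in one aggregation step
spark_df.groupby('category').agg(
    concat_ws(", ", collect_list('name')).alias("name")
).show()

This produces the same concatenated output as the two-step version above.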
Original answer: If you want your output as a concatenated string, you can use a udf. For example, you can first do the groupBy() as above and then apply a udf to join the collected list:
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

# join the collected list into a comma-separated string
concat_list = udf(lambda lst: ", ".join(lst), StringType())
grouped_df.withColumn("name", concat_list("name")).show()
#+---------+-------+
#|category |name |
#+---------+-------+
#|A |A1, A2 |
#|B |B1, B2 |
#+---------+-------+