Apply a transformation to multiple columns pyspark dataframe
Question
Suppose I have the following spark-dataframe:
+-----+-------+
| word| label|
+-----+-------+
| red| color|
| red| color|
| blue| color|
| blue|feeling|
|happy|feeling|
+-----+-------+
which can be created with the following code:
# assumes an active SparkSession named `spark` (e.g. in the pyspark shell)
sample_df = spark.createDataFrame(
    [
        ('red', 'color'),
        ('red', 'color'),
        ('blue', 'color'),
        ('blue', 'feeling'),
        ('happy', 'feeling')
    ],
    ('word', 'label')
)
I can perform a groupBy() to get the counts of each word-label pair:
sample_df = sample_df.groupBy('word', 'label').count()
#+-----+-------+-----+
#| word| label|count|
#+-----+-------+-----+
#| blue| color| 1|
#| blue|feeling| 1|
#| red| color| 2|
#|happy|feeling| 1|
#+-----+-------+-----+
And then pivot() and sum() to get the label counts as columns:
import pyspark.sql.functions as f
sample_df = sample_df.groupBy('word').pivot('label').agg(f.sum('count')).na.fill(0)
#+-----+-----+-------+
#| word|color|feeling|
#+-----+-----+-------+
#| red| 2| 0|
#|happy| 0| 1|
#| blue| 1| 1|
#+-----+-----+-------+
What is the best way to transform this dataframe such that each row is divided by the total for that row?
# Desired output
+-----+-----+-------+
| word|color|feeling|
+-----+-----+-------+
| red| 1.0| 0.0|
|happy| 0.0| 1.0|
| blue| 0.5| 0.5|
+-----+-----+-------+
One way to achieve this result is to use __builtin__.sum (NOT pyspark.sql.functions.sum) to get the row-wise sum and then call withColumn() for each label:
labels = ['color', 'feeling']

# Python's built-in sum() (not f.sum) folds the label columns together with +,
# producing a row-wise total column
sample_df.withColumn('total', sum([f.col(x) for x in labels]))\
    .withColumn('color', f.col('color') / f.col('total'))\
    .withColumn('feeling', f.col('feeling') / f.col('total'))\
    .select('word', 'color', 'feeling')\
    .show()
But there has to be a better way than enumerating each of the possible columns.
More generally, my question is:
How can I apply an arbitrary transformation, which is a function of the current row, to multiple columns simultaneously?
Answer
Found an answer on this Medium post.
First make a column for the total (as above), then use the * operator to unpack a list comprehension over the labels inside select():
labels = ['color', 'feeling']
sample_df = sample_df.withColumn('total', sum([f.col(x) for x in labels]))
sample_df.select(
    'word',
    *[(f.col(col_name) / f.col('total')).alias(col_name) for col_name in labels]
).show()
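This should produce the desired output (row order may vary):
#+-----+-----+-------+
#| word|color|feeling|
#+-----+-----+-------+
#|  red|  1.0|    0.0|
#|happy|  0.0|    1.0|
#| blue|  0.5|    0.5|
#+-----+-----+-------+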
The linked post shows how to generalize this approach for arbitrary transformations.
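For instance, here is a minimal sketch of that generalization, assuming the total column and labels list from above; the scale_by_total helper is hypothetical and not part of the original answer:

import pyspark.sql.functions as f

# Hypothetical helper: any function that maps a column name to a Column
# can be applied to every label column inside a single select().
def scale_by_total(col_name):
    # divide the label column by the row-wise total computed earlier
    return (f.col(col_name) / f.col('total')).alias(col_name)

sample_df.select('word', *[scale_by_total(c) for c in labels]).show()

# The same pattern works for any other column-wise transformation,
# e.g. log-scaling the counts instead of normalizing them:
# sample_df.select('word', *[f.log1p(f.col(c)).alias(c) for c in labels]).show()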