multiple criteria for aggregation on pySpark Dataframe
Question
I have a pySpark dataframe that looks like this:
+-------------+----------+
| sku| date|
+-------------+----------+
|MLA-603526656|02/09/2016|
|MLA-603526656|01/09/2016|
|MLA-604172009|02/10/2016|
|MLA-605470584|02/09/2016|
|MLA-605502281|02/10/2016|
|MLA-605502281|02/09/2016|
+-------------+----------+
I want to group by sku, and then calculate the min and max dates. If I do this:
df_testing.groupBy('sku') \
.agg({'date': 'min', 'date':'max'}) \
.limit(10) \
.show()
the behavior is the same as in Pandas: I only get the sku and max(date) columns. In Pandas I would normally do the following to get the results I want:
df_testing.groupBy('sku') \
.agg({'date': ['min','max']}) \
.limit(10) \
.show()
However, on pySpark this does not work, and I get a java.util.ArrayList cannot be cast to java.lang.String error. Could anyone please point me to the correct syntax?
Thanks.
Answer

You cannot express two aggregations on the same column with a dict: duplicate keys collapse in the Python dict literal (which is why only max(date) survived), and passing a list of function names as the value is what raises the java.util.ArrayList cast error. Use column expressions from pyspark.sql.functions instead:
>>> from pyspark.sql import functions as F
>>>
>>> df_testing.groupBy('sku').agg(F.min('date'), F.max('date'))
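For completeness, here is a minimal runnable sketch of this approach (assuming a SparkSession is already available as spark; the sample rows come from the question, and the min_date/max_date aliases are just illustrative names):

from pyspark.sql import functions as F

# Sample data from the question, with dates kept as strings as shown there
df_testing = spark.createDataFrame(
    [('MLA-603526656', '02/09/2016'),
     ('MLA-603526656', '01/09/2016'),
     ('MLA-604172009', '02/10/2016'),
     ('MLA-605470584', '02/09/2016'),
     ('MLA-605502281', '02/10/2016'),
     ('MLA-605502281', '02/09/2016')],
    ['sku', 'date'])

# One agg() call with two column expressions; alias() renames the
# auto-generated min(date)/max(date) output columns
df_testing.groupBy('sku') \
    .agg(F.min('date').alias('min_date'),
         F.max('date').alias('max_date')) \
    .show()

Without alias(), Spark names the result columns min(date) and max(date). Note also that min/max on these dd/MM/yyyy strings compare lexicographically, so for true chronological ordering the column would first need to be cast to a date type.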