multiple criteria for aggregation on pySpark Dataframe

Question

I have a pySpark dataframe that looks like this:

+-------------+----------+
|          sku|      date|
+-------------+----------+
|MLA-603526656|02/09/2016|
|MLA-603526656|01/09/2016|
|MLA-604172009|02/10/2016|
|MLA-605470584|02/09/2016|
|MLA-605502281|02/10/2016|
|MLA-605502281|02/09/2016|
+-------------+----------+

I want to group by sku, and then calculate the min and max dates. If I do this:

df_testing.groupBy('sku') \
    .agg({'date': 'min', 'date':'max'}) \
    .limit(10) \
    .show()

the behavior is the same as with Pandas: I only get the sku and max(date) columns. In Pandas I would normally do the following to get the results I want:

df_testing.groupBy('sku') \
    .agg({'date': ['min','max']}) \
    .limit(10) \
    .show()

However, on pySpark this does not work, and I get a java.util.ArrayList cannot be cast to java.lang.String error. Could anyone please point me to the correct syntax?
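
For reference, the first snippet only ever computes max(date) because a Python dict literal keeps just the last value for a repeated key, so the 'min' entry is dropped before Spark sees it; the second fails because the dict form of agg maps each column name to a single aggregate-function string, not a list. A quick plain-Python check of the dict behaviour (no Spark needed):

spec = {'date': 'min', 'date': 'max'}
print(spec)  # prints {'date': 'max'} -- the 'min' aggregation is silently lost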

Thanks.

Answer

You cannot use a dict for this. Use the functions API instead:

>>> from pyspark.sql import functions as F
>>>
>>> df_testing.groupBy('sku').agg(F.min('date'), F.max('date'))
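
A minimal end-to-end sketch for comparison, assuming a local SparkSession and the sample data from the question (the min_date/max_date aliases are illustrative, not part of the original answer):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.master('local[*]').getOrCreate()

# Rebuild the sample data from the question.
df_testing = spark.createDataFrame(
    [('MLA-603526656', '02/09/2016'),
     ('MLA-603526656', '01/09/2016'),
     ('MLA-604172009', '02/10/2016'),
     ('MLA-605470584', '02/09/2016'),
     ('MLA-605502281', '02/10/2016'),
     ('MLA-605502281', '02/09/2016')],
    ['sku', 'date'])

# One explicit expression per aggregate; alias() keeps the output column names readable.
df_testing.groupBy('sku') \
    .agg(F.min('date').alias('min_date'),
         F.max('date').alias('max_date')) \
    .show()

Note that date here is a plain string in dd/MM/yyyy format, so min and max compare lexicographically; for true chronological results the column would first need to be converted with something like F.to_date.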
