Mode of grouped data in (py)Spark

Question

I have a spark DataFrame with multiple columns. I would like to group the rows based on one column, and then find the mode of the second column for each group. Working with a pandas DataFrame, I would do something like this:

import numpy as np
import pandas as pd
import scipy.stats

num_values, max_value = 10, 5  # example values; not given in the original

rand_values = np.random.randint(max_value,
                                size=num_values).reshape((num_values // 2, 2))
rand_values = pd.DataFrame(rand_values, columns=['x', 'y'])
rand_values['x'] = rand_values['x'] > max_value / 2
rand_values['x'] = rand_values['x'].astype('int32')

print(rand_values)
##    x  y
## 0  0  0
## 1  0  4
## 2  0  1
## 3  1  1
## 4  1  2

def mode(series):
    return scipy.stats.mode(series['y'])[0][0]

rand_values.groupby('x').apply(mode)
## x
## 0    4
## 1    1
## dtype: int64
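
For reference, the same per-group mode can be computed in pandas without scipy by using Series.mode(); this is just a minimal alternative sketch (mode() can return several values when counts are tied, so only the first is kept here):

rand_values.groupby('x')['y'].agg(lambda s: s.mode().iloc[0])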

Within pyspark, I am able to find the mode of a single column by doing:

from pyspark.sql import functions as F

# sql_context is an existing SQLContext (or a SparkSession in newer Spark)
df = sql_context.createDataFrame(rand_values)

def mode_spark(df, column):
    # Group by column and count the number of occurrences
    # of each x value
    counts = df.groupBy(column).count()

    # - Find the maximum value in the 'counts' column
    # - Join with the counts dataframe to select the row
    #   with the maximum count
    # - Select the first element of this dataframe and
    #   take the value in column
    mode = counts.join(
        counts.agg(F.max('count').alias('count')),
        on='count'
    ).limit(1).select(column)

    return mode.first()[column]

mode_spark(df, 'x')
## 1
mode_spark(df, 'y')
## 1

I'm at a loss for how to apply that function to grouped data. If it's not possible to directly apply this logic to a DataFrame, is it possible to achieve the same effect by some other means?

Thanks in advance!

Answer

Solution suggested by zero323.

Original solution: https://stackoverflow.com/a/35226857/1560062

First, count the occurrences of each (x, y) combination.

counts = df.groupBy(['x', 'y']).count().alias('counts')
counts.show()
## +---+---+-----+
## |  x|  y|count|
## +---+---+-----+
## |  0|  1|    2|
## |  0|  3|    2|
## |  0|  4|    2|
## |  1|  1|    3|
## |  1|  3|    1|
## +---+---+-----+

Solution 1: Group by 'x' and aggregate by taking the maximum of a (count, y) struct in each group. Structs are compared field by field, so the row with the highest count wins (ties are broken by the larger y). Finally, select 'x' and the 'y' field of the winning struct.

result = (counts
          .groupBy('x')
          .agg(F.max(F.struct(F.col('count'),
                              F.col('y'))).alias('max'))
          .select(F.col('x'), F.col('max.y'))
         )
result.show()
## +---+---+
## |  x|  y|
## +---+---+
## |  0|  4|
## |  1|  1|
## +---+---+
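
Because structs are compared field by field, the same aggregation can also keep the winning count alongside the mode; this is a small variation on Solution 1, not part of the original answer:

result = (counts
          .groupBy('x')
          .agg(F.max(F.struct(F.col('count'), F.col('y'))).alias('max'))
          .select(F.col('x'),
                  F.col('max.y').alias('y'),
                  F.col('max.count').alias('count'))
         )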

Solution 2: Using a window, partition by 'x', and order by the 'count' column. Now, pick the first row in each of the partitions.

from pyspark.sql import Window

win = Window.partitionBy('x').orderBy(F.col('count').desc())
result = (counts
          .withColumn('row_num', F.row_number().over(win))  # older Spark versions used F.rowNumber()
          .where(F.col('row_num') == 1)
          .select('x', 'y')
         )
result.show()
## +---+---+
## |  x|  y|
## +---+---+
## |  0|  1|
## |  1|  1|
## +---+---+

The two solutions can give different results because of the way rows with tied counts are ordered. If there are no ties, both methods return the same mode.
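
If deterministic tie-breaking matters, the window in Solution 2 can order by a secondary key as well; here is a sketch that breaks ties by choosing the smallest 'y':

win = Window.partitionBy('x').orderBy(F.col('count').desc(), F.col('y').asc())
result = (counts
          .withColumn('row_num', F.row_number().over(win))
          .where(F.col('row_num') == 1)
          .select('x', 'y')
         )

On Spark 3.4 and later there is also a built-in mode aggregate, so df.groupBy('x').agg(F.mode('y')) achieves the same in one step, although its tie-breaking is not guaranteed to match the solutions above.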
