How to select a same-size stratified sample from a dataframe in Apache Spark?
Question
I have a dataframe in Spark 2, shown below, where users have between 50 and several thousand posts. I would like to create a new dataframe that keeps all the users from the original dataframe but with only 5 randomly sampled posts per user.
+--------+--------------+--------------------+
| user_id| post_id| text|
+--------+--------------+--------------------+
|67778705|44783131591473|some text...........|
|67778705|44783134580755|some text...........|
|67778705|44783136367108|some text...........|
|67778705|44783136970669|some text...........|
|67778705|44783138143396|some text...........|
|67778705|44783155162624|some text...........|
|67778705|44783688650554|some text...........|
|68950272|88655645825660|some text...........|
|68950272|88651393135293|some text...........|
|68950272|88652615409812|some text...........|
|68950272|88655744880460|some text...........|
|68950272|88658059871568|some text...........|
|68950272|88656994832475|some text...........|
+--------+--------------+--------------------+
Something like posts.groupby('user_id').agg(sample('post_id')), but there is no such function in PySpark.
Any suggestions?
Update:
This question is different from the closely related question stratified-sampling-in-spark in two ways:
- It asks about disproportionate stratified sampling, rather than the generic proportionate method covered in the other question.
- It asks how to do this with Spark's DataFrame API rather than with RDDs.
I have also updated the question's title to clarify this.
Recommended answer
Using sampleBy will give only an approximate solution: it samples each stratum with a given probability, so the number of rows kept per user varies from run to run.
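For comparison, here is a minimal PySpark sketch of the sampleBy route (the 0.1 fraction is an illustrative assumption; sampleBy takes a sampling fraction per stratum, not a fixed count):

# Build a fraction for every user_id; sampleBy keeps each row with that
# probability, so per-user sample sizes are only approximately equal.
fractions = {row["user_id"]: 0.1
             for row in posts.select("user_id").distinct().collect()}
approx = posts.sampleBy("user_id", fractions, seed=42)

Here is an alternative approach that is a little more hacky, but it always results in exactly the same sample size for every user.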
import org.apache.spark.sql.functions.row_number
import org.apache.spark.sql.expressions.Window

// Number each user's posts in a random order, then keep the first 5.
df.withColumn("row_num", row_number().over(Window.partitionBy($"user_id").orderBy($"something_random")))
  .filter($"row_num" <= 5)
If you don't already have a random ID, you can use org.apache.spark.sql.functions.rand to create a column with random values and guarantee random sampling.
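Since the question asks for PySpark, here is a minimal sketch of the same window trick in the DataFrame API (assuming the posts dataframe and the 5 posts per user from the question; ordering by rand() provides the random column):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Number each user's posts in a random order, then keep the first 5.
w = Window.partitionBy("user_id").orderBy(F.rand(seed=42))
sampled = (posts
           .withColumn("row_num", F.row_number().over(w))
           .filter(F.col("row_num") <= 5)
           .drop("row_num"))

Note that users with fewer than 5 posts simply keep all of their posts.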