How to take a random row from a PySpark DataFrame?
Question
How can I get a random row from a PySpark DataFrame? I only see the method sample(), which takes a fraction as a parameter. Setting this fraction to 1/numberOfRows leads to random results, where sometimes I won't get any row.
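The empty-result behavior described here can be modeled without Spark: sample() keeps each row independently with probability equal to the fraction, so with fraction = 1/numberOfRows the sample comes back empty roughly 1/e (about 37%) of the time. A minimal plain-Python sketch (the helper name bernoulli_sample is made up for illustration):

```python
import random

def bernoulli_sample(rows, fraction, rng):
    # Model of DataFrame.sample(): keep each row independently with
    # probability `fraction`, so the returned count varies -- it can be zero.
    return [r for r in rows if rng.random() < fraction]

rows = list(range(100))          # stand-in for a 100-row DataFrame
rng = random.Random(0)
trials = 10_000
empty = sum(1 for _ in range(trials)
            if not bernoulli_sample(rows, 1 / len(rows), rng))
# With fraction = 1/n, the chance of an empty sample is (1 - 1/n)**n,
# which approaches 1/e (~0.37) as n grows.
print(empty / trials)
```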
On RDD there is a method takeSample() that takes as a parameter the number of elements you want the sample to contain. I understand that this might be slow, as you have to count each partition, but is there a way to get something like this on a DataFrame?
Answer
You can simply call takeSample on the underlying RDD:
df = sqlContext.createDataFrame(
[(1, "a"), (2, "b"), (3, "c"), (4, "d")], ("k", "v"))
df.rdd.takeSample(False, 1, seed=0)
## [Row(k=3, v='c')]
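For intuition: unlike sample(), takeSample(withReplacement, num, seed) returns exactly num elements (or all of them if there are fewer), sampled here without replacement. A plain-Python analogue of that contract (take_sample is a hypothetical helper, not a Spark API):

```python
import random

def take_sample(rows, num, seed=None):
    # Like RDD.takeSample(False, num, seed): return exactly `num`
    # elements chosen without replacement, capped at the data size.
    rng = random.Random(seed)
    return rng.sample(rows, min(num, len(rows)))

rows = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]
picked = take_sample(rows, 1, seed=0)
print(len(picked))  # always exactly 1, unlike sample() with a tiny fraction
```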
If you don't want to collect, you can simply take a higher fraction and limit:
df.sample(False, 0.1, seed=0).limit(1)
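To see why this is reliable, here is a plain-Python model of the sample-then-limit pattern (sample_then_limit is an illustrative helper, not part of PySpark): with a fraction of 0.1 over 1000 rows, the probability of an empty sample is 0.9**1000, so limit(1) almost surely yields one row.

```python
import random

def sample_then_limit(rows, fraction, n, rng):
    # Mirror of df.sample(False, fraction).limit(n): Bernoulli-thin
    # the rows, then truncate to at most n results.
    kept = [r for r in rows if rng.random() < fraction]
    return kept[:n]

rng = random.Random(0)
rows = list(range(1000))
result = sample_then_limit(rows, 0.1, 1, rng)
print(len(result))  # one row survives the limit
```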