Spark - Sort Double values in an RDD and ignore NaNs


Question


I want to sort the Double values in an RDD, and I want my sort function to ignore the Double.NaN values.


The Double.NaN values should appear either at the bottom or at the top of the sorted RDD.


I was not able to achieve this using sortBy.

scala> res13.sortBy(r => r, ascending = true)
res21: org.apache.spark.rdd.RDD[Double] = MapPartitionsRDD[10] at sortBy at <console>:26

scala> res21.collect.foreach(println)
0.656
0.99
0.998
1.0
NaN
5.6
7.0

scala> res13.sortBy(r => r, ascending = false)
res23: org.apache.spark.rdd.RDD[Double] = MapPartitionsRDD[15] at sortBy at <console>:26

scala> res23.collect.foreach(println)
7.0
5.6
NaN
1.0
0.998
0.99
0.656

My expected result is:

scala> res23.collect.foreach(println)
    7.0
    5.6
    1.0
    0.998
    0.99
    0.656
    NaN

or
    scala> res21.collect.foreach(println)
    NaN
    0.656
    0.99
    0.998
    1.0
    5.6
    7.0

Answer


Taking what I said in the comment, you can try this:

scala> val a = sc.parallelize(Array(0.656, 0.99, 0.998, 1.0, Double.NaN, 5.6, 7.0))
a: org.apache.spark.rdd.RDD[Double] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> a.sortBy(r => r, ascending = false).collect
res2: Array[Double] = Array(7.0, 5.6, NaN, 1.0, 0.998, 0.99, 0.656)

scala> a.sortBy(r => if (r.isNaN) Double.MinValue else r, ascending = false).collect
res3: Array[Double] = Array(7.0, 5.6, 1.0, 0.998, 0.99, 0.656, NaN)

scala> a.sortBy(r => if (r.isNaN) Double.MaxValue else r, ascending = false).collect
res4: Array[Double] = Array(NaN, 7.0, 5.6, 1.0, 0.998, 0.99, 0.656)
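The same substitution works for an ascending sort with the bounds swapped. And if the data could legitimately contain Double.MinValue or Double.MaxValue, a composite (Boolean, Double) key avoids picking a sentinel value at all: false sorts before true, so non-NaN values always come first. A sketch continuing the same REPL session (the resN names are illustrative):

```scala
// Ascending: mapping NaN to Double.MaxValue sinks it to the bottom.
scala> a.sortBy(r => if (r.isNaN) Double.MaxValue else r, ascending = true).collect
res5: Array[Double] = Array(0.656, 0.99, 0.998, 1.0, 5.6, 7.0, NaN)

// Sentinel-free alternative: sort by (isNaN, value) so NaNs form
// their own group after every real value.
scala> a.sortBy(r => (r.isNaN, r), ascending = true).collect
res6: Array[Double] = Array(0.656, 0.99, 0.998, 1.0, 5.6, 7.0, NaN)
```

Note that both variants only change the sort key passed to sortBy; the NaN values themselves are preserved in the output, as the question requires.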

