Spark throws a stack overflow error when unioning a lot of RDDs


Problem description


When I use "++" to combine a lot of RDDs, I got error stack over flow error.

Spark version: 1.3.1. Environment: yarn-client, with --driver-memory 8G.

There are more than 4000 RDDs; each one is read from a text file about 1 GB in size.

The combined RDD is generated in this way:

val collection = (for (
  path <- files
) yield sc.textFile(path)).reduce(_ union _)

It works fine when files is small. Here is the error:

The error repeats itself. I guess it is a recursive function that is called too many times?

 Exception at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:66)
    at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:66)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    at org.apache.spark.rdd.UnionRDD.getPartitions(UnionRDD.scala:66)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:66)
    at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:66)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    at org.apache.spark.rdd.UnionRDD.getPartitions(UnionRDD.scala:66)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
  .....

Solution

Use SparkContext.union(...) instead, to union many RDDs in a single step.

You don't want to do it one at a time like that, since RDD.union() creates a new step in the lineage (an extra set of stack frames on any computation) for each RDD, whereas SparkContext.union() does it all at once. This ensures you don't get a stack overflow error.
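
For example, the snippet from the question can be rewritten as follows (a minimal sketch, assuming files and sc are defined as in the question):

// Build the per-file RDDs first, then union them in a single step.
val rdds = for (path <- files) yield sc.textFile(path)
// sc.union creates one UnionRDD with all inputs as direct parents,
// so the lineage stays flat instead of growing one level per file.
val collection = sc.union(rdds)

Because the resulting UnionRDD is flat, computing its partitions only recurses one level into its parents, which avoids the deep recursion through RDD.partitions shown in the stack trace above.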
