Spark MLlib - trainImplicit warning


Problem Description


I keep seeing these warnings when using trainImplicit:

WARN TaskSetManager: Stage 246 contains a task of very large size (208 KB).
The maximum recommended task size is 100 KB.

And the task size then keeps increasing. I tried calling repartition on the input RDD, but the warnings are the same.
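For reference, a minimal sketch of the kind of call being described, assuming an existing SparkContext sc; the input path, partition count, rank, iterations, lambda and alpha are all illustrative placeholders:

import org.apache.spark.mllib.recommendation.{ALS, Rating}

// Hypothetical setup: parse (user, item, count) triples into Ratings,
// repartition the input RDD, then train the implicit-feedback ALS model.
val ratings = sc.textFile("hdfs:///path/to/ratings.csv")
  .map(_.split(',') match {
    case Array(user, item, count) => Rating(user.toInt, item.toInt, count.toDouble)
  })
  .repartition(200) // the repartition call mentioned above

val model = ALS.trainImplicit(ratings, 10 /* rank */, 10 /* iterations */,
  0.01 /* lambda */, 1.0 /* alpha */)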

All these warnings come from the ALS iterations, from flatMap and also from aggregate. For instance, this is the origin of a stage in which the flatMap shows these warnings (with Spark 1.3.0, but they also appear in Spark 1.3.1):

org.apache.spark.rdd.RDD.flatMap(RDD.scala:296)
org.apache.spark.ml.recommendation.ALS$.org$apache$spark$ml$recommendation$ALS$$computeFactors(ALS.scala:1065)
org.apache.spark.ml.recommendation.ALS$$anonfun$train$3.apply(ALS.scala:530)
org.apache.spark.ml.recommendation.ALS$$anonfun$train$3.apply(ALS.scala:527)
scala.collection.immutable.Range.foreach(Range.scala:141)
org.apache.spark.ml.recommendation.ALS$.train(ALS.scala:527)
org.apache.spark.mllib.recommendation.ALS.run(ALS.scala:203)

and from aggregate:

org.apache.spark.rdd.RDD.aggregate(RDD.scala:968)
org.apache.spark.ml.recommendation.ALS$.computeYtY(ALS.scala:1112)
org.apache.spark.ml.recommendation.ALS$.org$apache$spark$ml$recommendation$ALS$$computeFactors(ALS.scala:1064)
org.apache.spark.ml.recommendation.ALS$$anonfun$train$3.apply(ALS.scala:538)
org.apache.spark.ml.recommendation.ALS$$anonfun$train$3.apply(ALS.scala:527)
scala.collection.immutable.Range.foreach(Range.scala:141)
org.apache.spark.ml.recommendation.ALS$.train(ALS.scala:527)
org.apache.spark.mllib.recommendation.ALS.run(ALS.scala:203)

Solution

A similar problem was described on the Apache Spark user mailing list: http://apache-spark-user-list.1001560.n3.nabble.com/Large-Task-Size-td9539.html

I think you can try to play with the number of partitions (using the repartition() method), depending on how many hosts, how much RAM, and how many CPUs you have.
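As a sketch of that tuning, note that besides repartition() the MLlib trainImplicit API also has an overload with an explicit blocks argument, which is another way to control ALS parallelism; all numbers below are placeholders to experiment with, not recommendations:

import org.apache.spark.mllib.recommendation.{ALS, Rating}
import org.apache.spark.rdd.RDD

// Tune parallelism on the input RDD and/or via the ALS `blocks` argument;
// pick values based on your hosts, RAM, and CPUs.
def trainTuned(ratings: RDD[Rating]) = {
  val repartitioned = ratings.repartition(200) // e.g. a few times the total core count
  ALS.trainImplicit(
    repartitioned,
    10,   // rank
    10,   // iterations
    0.01, // lambda
    50,   // blocks (-1 lets ALS auto-configure)
    1.0)  // alpha, the implicit-feedback confidence parameter
}

Whether any particular partition or block count actually silences the warning depends on the cluster and the data size.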

Also try to investigate all the steps via the Web UI, where you can see the number of stages, the memory usage of each stage, and the data locality.

Or simply ignore this warning, as long as everything works correctly and fast.

This notification is hard-coded in Spark (scheduler/TaskSetManager.scala):

      if (serializedTask.limit > TaskSetManager.TASK_SIZE_TO_WARN_KB * 1024 &&
          !emittedTaskSizeWarning) {
        emittedTaskSizeWarning = true
        logWarning(s"Stage ${task.stageId} contains a task of very large size " +
          s"(${serializedTask.limit / 1024} KB). The maximum recommended task size is " +
          s"${TaskSetManager.TASK_SIZE_TO_WARN_KB} KB.")
      }

where the warning threshold is defined as:

private[spark] object TaskSetManager {
  // The user will be warned if any stages contain a task that has a serialized size greater than
  // this.
  val TASK_SIZE_TO_WARN_KB = 100
} 
