Why PySpark jobs are dying out in the middle of processing without any particular error

Problem Description

Experts, I am noticing one peculiar thing with one of the PySpark jobs in production (running in YARN cluster mode). After executing for around an hour or more (roughly 65-75 minutes), it simply dies without throwing any particular error message. We have analyzed the YARN logs for around two weeks now and there is no particular error in them; the job just dies in the middle of its ETL operations (reading/writing Hive tables, simple maps, trims, lambda operations, etc.), with no particular piece of code to point to. Sometimes rerunning fixes it, sometimes it takes more than one rerun. The code is optimized, and the spark-submit --conf carries all the correctly tuned options. As mentioned, it runs absolutely fine for around 30 other applications with very good performance stats. These are all the options we have:

spark-submit --conf spark.yarn.maxAppAttempts=1 --conf spark.sql.broadcastTimeout=36000 --conf spark.dynamicAllocation.executorIdleTimeout=1800 --conf spark.dynamicAllocation.minExecutors=8 --conf spark.dynamicAllocation.initialExecutors=8 --conf spark.dynamicAllocation.maxExecutors=32 --conf spark.yarn.executor.memoryOverhead=4096 --conf spark.kryoserializer.buffer.max=512m --driver-memory 2G --executor-memory 8G --executor-cores 2 --deploy-mode cluster --master yarn

We want to check whether there is some driver configuration we need to change to address this issue, or whether there is some automatic timeout in Spark cluster mode that can be increased. We are using Spark 1.6 with Python 2.7.

The error looks like this (there are several messages that say):

ERROR executor.CoarseGrainedExecutorBackend: RECEIVED SIGNAL 15: SIGTERM

But it fails when it encounters the driver error (which happens at the end):

ERROR executor.CoarseGrainedExecutorBackend: Driver XX.XXX.XXX.XXX:XXXXX disassociated! Shutting down

Here is the log:

19/10/24 16:17:03 INFO compress.CodecPool: Got brand-new compressor [.gz]
19/10/24 16:17:03 INFO output.FileOutputCommitter: Saved output of task 'attempt_201910241617_0152_m_000323_0' to hdfs://myserver/production/out/TBL/_temporary/0/task_201910241617_0152_m_000323
19/10/24 16:17:03 INFO mapred.SparkHadoopMapRedUtil: attempt_201910241617_0152_m_000323_0: Committed
19/10/24 16:17:03 INFO executor.Executor: Finished task 323.0 in stage 152.0 (TID 27419). 2163 bytes result sent to driver
19/10/24 16:17:03 INFO output.FileOutputCommitter: Saved output of task 'attempt_201910241617_0152_m_000135_0' to hdfs://myserver/production/out/TBL/_temporary/0/task_201910241617_0152_m_000135
19/10/24 16:17:03 INFO mapred.SparkHadoopMapRedUtil: attempt_201910241617_0152_m_000135_0: Committed
19/10/24 16:17:03 INFO executor.Executor: Finished task 135.0 in stage 152.0 (TID 27387). 2163 bytes result sent to driver
19/10/24 16:18:04 ERROR executor.CoarseGrainedExecutorBackend: RECEIVED SIGNAL 15: SIGTERM
19/10/24 16:18:04 INFO storage.DiskBlockManager: Shutdown hook called
19/10/24 16:18:04 INFO util.ShutdownHookManager: Shutdown hook called

19/10/24 16:21:12 INFO executor.Executor: Finished task 41.0 in stage 163.0 (TID 29954). 2210 bytes result sent to driver
19/10/24 16:21:12 INFO executor.Executor: Finished task 170.0 in stage 163.0 (TID 29986). 2210 bytes result sent to driver
19/10/24 16:21:13 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 30047
19/10/24 16:21:13 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 30079
19/10/24 16:21:13 INFO executor.Executor: Running task 10.0 in stage 165.0 (TID 30047)
19/10/24 16:21:13 INFO executor.Executor: Running task 42.0 in stage 165.0 (TID 30079)
19/10/24 16:21:13 INFO spark.MapOutputTrackerWorker: Updating epoch to 56 and clearing cache
19/10/24 16:21:13 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 210
19/10/24 16:21:13 INFO storage.MemoryStore: Block broadcast_210_piece0 stored as bytes in memory (estimated size 29.4 KB, free 3.8 GB)
19/10/24 16:21:13 INFO broadcast.TorrentBroadcast: Reading broadcast variable 210 took 3 ms
19/10/24 16:21:13 INFO storage.MemoryStore: Block broadcast_210 stored as values in memory (estimated size 83.4 KB, free 3.8 GB)
19/10/24 16:21:13 INFO executor.Executor: Finished task 10.0 in stage 165.0 (TID 30047). 931 bytes result sent to driver
19/10/24 16:21:13 INFO executor.Executor: Finished task 42.0 in stage 165.0 (TID 30079). 931 bytes result sent to driver
19/10/24 16:21:15 WARN executor.CoarseGrainedExecutorBackend: An unknown (rxxxxxx1.hadoop.com:XXXXX) driver disconnected.
19/10/24 16:21:15 ERROR executor.CoarseGrainedExecutorBackend: Driver XX.XXX.XXX.XXX:XXXXX disassociated! Shutting down.
19/10/24 16:21:15 INFO storage.DiskBlockManager: Shutdown hook called
19/10/24 16:21:15 INFO util.ShutdownHookManager: Shutdown hook called

Thanks, Sid

Recommended Answer

Without any apparent stack trace, it's a good idea to think of the problem from two angles: it's either a code issue or a data issue.

In either case, you should start by giving the driver abundant memory so as to rule that out as a probable cause. Increase driver.memory and driver.memoryOverhead (in the spark-submit command above, --driver-memory is only 2G) until you've diagnosed the problem.

Common code issues:

  1. Too many transformations cause the lineage to get too big. If there is any kind of iterative operation happening on the DataFrame, it's a good idea to truncate the DAG by doing a checkpoint in between. In Spark 2.x you can call dataFrame.checkpoint() directly and don't have to access the RDD; @Sagar's answer also describes how to do this for Spark 1.6 (a Spark 1.6-style workaround is sketched in the first example after this list).

  2. Trying to broadcast DataFrames that are too big. This will usually result in an OOM exception, but it can sometimes just cause the job to appear stuck. The resolution is to not call broadcast if you are doing so explicitly. Otherwise, check whether you have set spark.sql.autoBroadcastJoinThreshold to some custom value, and try lowering that value or disabling broadcast joins altogether by setting it to -1 (see the second example after this list).

  3. Not enough partitions can cause every task to run hot. The easiest way to diagnose this is to check the stages view in the Spark UI and look at the size of data being read and written per task; ideally this should be in the 100 MB-500 MB range. Otherwise, increase spark.sql.shuffle.partitions and spark.default.parallelism to values higher than the default 200 (see the third example after this list).
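
For item 1, a minimal sketch of the checkpoint workaround in Spark 1.6; the checkpoint directory is a hypothetical path, and sc, sqlContext and df stand in for the objects already present in your job:

# Spark 1.6 has no dataFrame.checkpoint(), so checkpoint the underlying RDD
# and rebuild the DataFrame from it.
sc.setCheckpointDir("hdfs://myserver/tmp/checkpoints")  # hypothetical path

rdd = df.rdd
rdd.checkpoint()   # mark the RDD for checkpointing
rdd.count()        # run an action so the checkpoint is actually written

# The rebuilt DataFrame's lineage now starts at the checkpointed data.
df = sqlContext.createDataFrame(rdd, df.schema)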
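
For item 2, a small sketch of turning automatic broadcast joins off (or down) from PySpark; sqlContext is assumed to be your existing SQLContext/HiveContext:

# Disable automatic broadcast joins entirely; Spark falls back to shuffle joins.
sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold", "-1")

# Or keep them but lower the size threshold (value is in bytes), e.g. 5 MB:
# sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold", str(5 * 1024 * 1024))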
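
For item 3, a sketch of raising the shuffle parallelism; the value 400 is only illustrative:

# More partitions for DataFrame/SQL shuffles (joins, aggregations).
sqlContext.setConf("spark.sql.shuffle.partitions", "400")

# spark.default.parallelism governs RDD operations and is read when the
# SparkContext starts, so pass it at submit time instead, for example:
#   --conf spark.default.parallelism=400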

Common data issues:

  1. Data skew. Since your job fails only for this specific workload, it could be that this particular job has data skew. Diagnose it by checking, in the stage view of the Spark UI, whether the median task completion time is comparable to the 75th percentile, which in turn should be comparable to the 90th percentile. There are many ways to redress data skew, but the one I find best is to write a custom join function that salts the join keys prior to the join. This splits the skewed partition into several smaller partitions at the expense of a constant-size data explosion (see the salted-join example after this list).

  2. Input file format or number of files. If your input isn't partitioned and you're only doing narrow transforms (those that do not cause a data shuffle), then all of your data will run through a single executor and not really benefit from the distributed cluster setup. Diagnose this from the Spark UI by checking how many tasks are created in each stage of the pipeline; it should be of the order of your spark.default.parallelism value. If not, do a .repartition(<some value>) immediately after the data-read step, before any transforms (see the repartition example after this list). If the file format is CSV (not ideal), verify that you have multiLine disabled unless it is required in your specific case; otherwise this forces a single executor to read the entire CSV file.
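
For item 1 under data issues, a rough sketch of a salted join; big_df, small_df, join_key and NUM_SALTS are illustrative names, not anything from the original job:

from pyspark.sql import functions as F

NUM_SALTS = 16  # tune to the observed degree of skew (assumption)

# Skewed (large) side: scatter each key across NUM_SALTS buckets at random.
big_salted = big_df.withColumn("salt", (F.rand() * NUM_SALTS).cast("int"))

# Other side: replicate every row once per bucket so all matches survive.
salts = F.array([F.lit(i) for i in range(NUM_SALTS)])
small_salted = small_df.select("*", F.explode(salts).alias("salt"))

# Join on the original key plus the salt, then drop the helper column.
joined = big_salted.join(small_salted, ["join_key", "salt"]).drop("salt")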
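
For item 2 under data issues, a minimal sketch of repartitioning immediately after the read; the path and the use of Parquet are assumptions, not details from the original pipeline:

# Read the source data (hypothetical location/format), then spread it across
# the cluster before any transformations run.
df = sqlContext.read.parquet("hdfs://myserver/production/in/TBL")
df = df.repartition(sc.defaultParallelism)

# Sanity check: should be of the order of spark.default.parallelism.
print(df.rdd.getNumPartitions())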

Happy debugging!
