How to prevent EMR Spark step from retrying?

Problem description

I have an AWS EMR cluster (emr-4.2.0, Spark 1.5.2) where I am submitting steps from the aws cli. My problem is that if the Spark application fails, YARN tries to run the application again (under the same EMR step). How can I prevent this?

I tried setting --conf spark.yarn.maxAppAttempts=1, which shows up correctly under Environment/Spark Properties, but it does not prevent YARN from restarting the application.
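
For context, a minimal sketch of how such a step might be submitted from the aws cli with that setting; the cluster ID, step name, application class, and s3:// jar path below are hypothetical placeholders, not values from the question:

    # Submit a Spark step to a running EMR cluster (EMR 4.x step shorthand).
    # j-XXXXXXXXXXXXX, MySparkStep, com.example.MyApp and the s3:// path are made up.
    aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
      --steps 'Type=Spark,Name=MySparkStep,ActionOnFailure=CONTINUE,Args=[--conf,spark.yarn.maxAppAttempts=1,--class,com.example.MyApp,s3://my-bucket/my-app.jar]'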

Recommended answer

You should try to set spark.task.maxFailures to 1 (4 by default).

Meaning:

Number of failures of any particular task before giving up on the job. The total number of failures spread across different tasks will not cause the job to fail; a particular task has to fail this number of attempts. Should be greater than or equal to 1. Number of allowed retries = this value - 1.
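
Putting the two settings together, a hedged spark-submit sketch that disables both application-level and task-level retries; the class name and jar path are placeholders:

    # Fail fast: no YARN application re-attempts, no task retries.
    # com.example.MyApp and the s3:// path are hypothetical.
    spark-submit \
      --conf spark.yarn.maxAppAttempts=1 \
      --conf spark.task.maxFailures=1 \
      --class com.example.MyApp \
      s3://my-bucket/my-app.jar

With spark.task.maxFailures=1, the first failure of any task fails the job instead of being retried up to three more times; combined with spark.yarn.maxAppAttempts=1, the failed application should not be resubmitted, so the EMR step fails once and stops.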
