Understanding backoffLimit in Kubernetes Job
Problem Description
I've created a CronJob in Kubernetes with schedule 8 * * * *, with the job's backoffLimit defaulting to 6 and the pod's restartPolicy set to Never; the pods are deliberately configured to FAIL. As I understand it (for a podSpec with restartPolicy: Never), the Job controller will try to create backoffLimit number of pods and then mark the Job as Failed, so I expected that there would be 6 pods in Error state.
Here is the actual status of the Job:
status:
  conditions:
  - lastProbeTime: 2019-02-20T05:11:58Z
    lastTransitionTime: 2019-02-20T05:11:58Z
    message: Job has reached the specified backoff limit
    reason: BackoffLimitExceeded
    status: "True"
    type: Failed
  failed: 5
Why were there only 5 failed pods instead of 6? Or is my understanding of backoffLimit incorrect?
Recommended Answer
In short: you might not be seeing all of the created pods because the schedule period in the CronJob is too short.
As per the documentation:
Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s …) capped at six minutes. The back-off count is reset if no new failed Pods appear before the Job’s next status check.
If a new Job is scheduled before the Job controller has had a chance to recreate a pod (keeping in mind the delay after the previous failure), the Job controller starts counting from one again.
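The interaction between the back-off delays and a short schedule can be sketched numerically (a rough illustration in Python, assuming the documented 10s base delay that doubles on each retry, capped at six minutes; this is not Kubernetes code):

```python
# Exponential back-off delays used by the Job controller between
# failed-pod recreations: 10s, 20s, 40s, ... capped at six minutes.
CAP_SECONDS = 6 * 60

def backoff_delays(retries, base=10):
    """Seconds the controller waits before each retry after a failure."""
    delays, delay = [], base
    for _ in range(retries):
        delays.append(min(delay, CAP_SECONDS))
        delay *= 2
    return delays

delays = backoff_delays(6)
print(delays)           # [10, 20, 40, 80, 160, 320]
print(sum(delays[:5]))  # 310 -> the 6th pod is only due ~310s after
                        # the 1st failure, well past a 3-minute (180s)
                        # schedule tick
```

With a short schedule, a new Job can therefore be created while the previous Job is still sitting inside its back-off window, which is why the failed-pod count you observe at any moment can be lower than backoffLimit.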
I reproduced your issue in GKE using the following .yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hellocron
spec:
  schedule: "*/3 * * * *" #Runs every 3 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hellocron
            image: busybox
            args:
            - /bin/cat
            - /etc/os
          restartPolicy: Never
      backoffLimit: 6
  suspend: false
This Job will fail because the file /etc/os doesn't exist.
And here is the output of kubectl describe for one of the Jobs:
Name:           hellocron-1551194280
Namespace:      default
Selector:       controller-uid=b81cdfb8-39d9-11e9-9eb7-42010a9c00d0
Labels:         controller-uid=b81cdfb8-39d9-11e9-9eb7-42010a9c00d0
                job-name=hellocron-1551194280
Annotations:    <none>
Controlled By:  CronJob/hellocron
Parallelism:    1
Completions:    1
Start Time:     Tue, 26 Feb 2019 16:18:07 +0100
Pods Statuses:  0 Running / 0 Succeeded / 6 Failed
Pod Template:
  Labels:  controller-uid=b81cdfb8-39d9-11e9-9eb7-42010a9c00d0
           job-name=hellocron-1551194280
  Containers:
   hellocron:
    Image:        busybox
    Port:         <none>
    Host Port:    <none>
    Args:
      /bin/cat
      /etc/os
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type     Reason                Age  From            Message
  ----     ------                ---- ----            -------
  Normal   SuccessfulCreate      26m  job-controller  Created pod: hellocron-1551194280-4lf6h
  Normal   SuccessfulCreate      26m  job-controller  Created pod: hellocron-1551194280-85khk
  Normal   SuccessfulCreate      26m  job-controller  Created pod: hellocron-1551194280-wrktb
  Normal   SuccessfulCreate      26m  job-controller  Created pod: hellocron-1551194280-6942s
  Normal   SuccessfulCreate      25m  job-controller  Created pod: hellocron-1551194280-662zv
  Normal   SuccessfulCreate      22m  job-controller  Created pod: hellocron-1551194280-6c6rh
  Warning  BackoffLimitExceeded  17m  job-controller  Job has reached the specified backoff limit
Note the delay between the creation of pods hellocron-1551194280-662zv and hellocron-1551194280-6c6rh.
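Working backwards from the event ages (a rough reconstruction, not exact controller timing), the creation offsets implied by the documented back-off delays line up with what kubectl describe shows:

```python
# Offset in seconds (relative to the first pod) at which each subsequent
# pod is created, accumulating the back-off delays 10s, 20s, 40s, ...
delays = [10, 20, 40, 80, 160]  # waits before pods 2..6
offsets = [0]
for d in delays:
    offsets.append(offsets[-1] + d)
print(offsets)  # [0, 10, 30, 70, 150, 310]
```

The final 160-second wait is the roughly 3-minute gap visible between hellocron-1551194280-662zv (age 25m) and hellocron-1551194280-6c6rh (age 22m), while the first few pods land within the same rounded minute.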