Kubernetes Job Cleanup


Problem Description

From what I understand, the Job object is supposed to reap its pods after a certain amount of time. But on my GKE cluster (Kubernetes 1.1.8), it seems that "kubectl get pods -a" can still list pods from days ago.

All of these pods were created using the Jobs API.

I did notice that after deleting a job with "kubectl delete jobs", its pods were deleted too.

My main concern here is that I am going to run thousands to tens of thousands of pods on the cluster in batch jobs, and I don't want to overload the internal backlog system.

Recommended Answer

It looks like starting with Kubernetes 1.6 (and the v2alpha1 API version), if you're using CronJobs to create the Jobs (which, in turn, create your pods), you'll be able to limit how many old Jobs are kept. Just add the following to your CronJob spec:

successfulJobsHistoryLimit: X
failedJobsHistoryLimit: Y

Where X and Y are the limits on how many previously run Jobs the system should keep around (by default, it keeps Jobs around indefinitely [at least on version 1.5]).
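To show these fields in context, here is a minimal sketch of a CronJob manifest using them. The name, schedule, image, and command are hypothetical placeholders; on current clusters the apiVersion is batch/v1, while on 1.6-era clusters it would be batch/v2alpha1 as the answer notes:

```yaml
apiVersion: batch/v1          # batch/v2alpha1 on Kubernetes 1.6-era clusters
kind: CronJob
metadata:
  name: batch-demo            # hypothetical name
spec:
  schedule: "*/5 * * * *"     # run every five minutes
  successfulJobsHistoryLimit: 3   # keep only the 3 most recent successful Jobs
  failedJobsHistoryLimit: 1       # keep only the most recent failed Job
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: worker
            image: busybox    # hypothetical image
            command: ["echo", "done"]
```

When a Job falls outside these history limits, the CronJob controller deletes it, and the Job's pods are garbage-collected along with it.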

Edit 2018-09-29:

For newer Kubernetes versions, updated links to the documentation for this are here:

CronJob API spec

