Kubernetes Job Cleanup


Question

From what I understand, the Job object is supposed to reap pods after a certain amount of time. But on my GKE cluster (Kubernetes 1.1.8), "kubectl get pods -a" still lists pods from days ago.

All of these pods were created using the Jobs API.

I did notice that after deleting the job with "kubectl delete jobs", the pods were deleted too.

My main concern here is that I am going to run tens of thousands of pods on the cluster in batch jobs, and I don't want to overload the internal backlog system.

Answer

It looks like, starting with Kubernetes 1.6 (and the v2alpha1 API version), if you're using CronJobs to create the jobs (which, in turn, create your pods), you'll be able to limit how many old jobs are kept. Just add the following to your CronJob spec:

successfulJobsHistoryLimit: X
failedJobsHistoryLimit: Y

where X and Y are limits on how many previously run jobs the system should keep around (by default it keeps jobs indefinitely [at least on version 1.5]).
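For context, here is a minimal sketch of a complete CronJob manifest with these two fields in place. The name, schedule, and container below are illustrative, not from the original answer; on a modern cluster the stable apiVersion is batch/v1 (CronJob became stable in 1.21), while on a 1.6 cluster it would be batch/v2alpha1 behind a feature gate:

```yaml
# Illustrative CronJob manifest; the history-limit fields sit at the
# spec level, alongside schedule and jobTemplate.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: batch-cleanup-demo        # hypothetical name
spec:
  schedule: "*/15 * * * *"        # run every 15 minutes (example schedule)
  successfulJobsHistoryLimit: 3   # keep only the 3 most recent successful Jobs
  failedJobsHistoryLimit: 1       # keep only the most recent failed Job
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: worker
            image: busybox
            command: ["sh", "-c", "echo processing batch"]
```

When a finished Job falls outside these limits, the CronJob controller deletes it, and deleting the Job cascades to its pods, which addresses the backlog concern from the question.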

Edit 2018-09-29:

For newer K8S versions, updated documentation links are here:

CronJob API specification

