Restart pods when configmap updates in Kubernetes?


Question

How do I automatically restart Kubernetes pods and pods associated with deployments when their configmap is changed/updated?

I know there's been talk about the ability to automatically restart pods when a config map changes, but to my knowledge this is not yet available in Kubernetes 1.2.

So what (I think) I'd like to do is a "rolling restart" of the deployment resource associated with the pods consuming the config map. Is it possible, and if so how, to force a rolling restart of a deployment in Kubernetes without changing anything in the actual template? Is this currently the best way to do it or is there a better option?
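(For reference, one commonly suggested workaround — not part of this question or the answer below, and strictly speaking it does touch the pod template's metadata, though not the container spec — is to patch a throwaway annotation onto the deployment's pod template; the changed template triggers a rolling update. A minimal sketch, assuming a deployment named `my-deployment`:)

```sh
# Patch a dummy annotation with the current timestamp onto the pod template.
# Changing the template causes the Deployment to perform a rolling update,
# so the new pods mount the current configmap contents.
kubectl patch deployment my-deployment \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restarted-at\":\"$(date +%s)\"}}}}}"
```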

Answer

Signalling a pod on config map update is a feature in the works (https://github.com/kubernetes/kubernetes/issues/22368).

You can always write a custom pid1 that notices the configmap has changed and restarts your app.
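A minimal sketch of that idea in shell (the mount path `/etc/config` and the `my-app` command are placeholders, not from the answer; a production pid1 would also need to reap zombies and forward signals):

```sh
#!/bin/sh
# Simplified pid1 sketch: run the app, poll a hash of the mounted configmap
# directory, and restart the app whenever the hash changes.

hash_config() {
  # Hash the contents of every file in the configmap volume mount.
  cat /etc/config/* 2>/dev/null | md5sum | cut -d' ' -f1
}

LAST=$(hash_config)
my-app &              # start the real application (placeholder command)
APP_PID=$!

while true; do
  sleep 10
  CURRENT=$(hash_config)
  if [ "$CURRENT" != "$LAST" ]; then
    echo "configmap changed, restarting app"
    kill "$APP_PID"
    wait "$APP_PID" 2>/dev/null
    my-app &
    APP_PID=$!
    LAST=$CURRENT
  fi
done
```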

You can also, e.g., mount the same config map in 2 containers, expose an HTTP health check in the second container that fails if the hash of the config map contents changes, and shove that in as the liveness probe of the first container (because containers in a pod share the same network namespace). The kubelet will restart your first container for you when the probe fails.
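A rough sketch of that pod layout (the configmap name, images, port, and the `nc`-based health server are all assumptions standing in for a real sidecar):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-watch
spec:
  volumes:
    - name: config
      configMap:
        name: my-config            # hypothetical configmap name
  containers:
    - name: app                    # the real application
      image: my-app:latest         # placeholder image
      volumeMounts:
        - name: config
          mountPath: /etc/config
      livenessProbe:               # hits the sidecar via the shared network namespace
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
    - name: config-watcher         # sidecar: serves 200 until the configmap hash changes
      image: busybox
      volumeMounts:
        - name: config
          mountPath: /etc/config
      command:
        - sh
        - -c
        - |
          INITIAL=$(cat /etc/config/* | md5sum)
          while true; do
            if [ "$(cat /etc/config/* | md5sum)" = "$INITIAL" ]; then
              printf 'HTTP/1.1 200 OK\r\n\r\nok' | nc -l -p 8080
            else
              printf 'HTTP/1.1 500 Internal Server Error\r\n\r\nchanged' | nc -l -p 8080
            fi
          done
```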

Of course if you don't care about which nodes the pods are on, you can simply delete them and the replication controller will "restart" them for you.
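For completeness, that last option is a one-liner (the label selector is an assumption):

```sh
# Delete all pods matching the label; the replication controller recreates
# them, and the new pods mount the updated configmap contents.
kubectl delete pods -l app=my-app
```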
