kubectl drain not evicting helm memcached pods

Question

I'm following this guide in an attempt to upgrade a kubernetes cluster on GKE with no downtime. I've cordoned all of the old nodes and most of the pods have been evicted, but for a couple of the nodes, kubectl drain just keeps running without evicting any more pods.

kubectl get pods --all-namespaces -o=wide shows a handful of pods still running on the old pool, and when I run kubectl drain --ignore-daemonsets --force it prints a warning explaining why it's ignoring most of them; the only ones it doesn't mention are the pods I have running memcached, which were created via helm using this chart.
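
For reference, a quick way to see exactly which pods are still scheduled on a particular cordoned node (the node name below is a placeholder) is to filter the pod list by node:

    # List all pods still bound to a specific node, across all namespaces
    kubectl get pods --all-namespaces -o wide \
      --field-selector spec.nodeName=<old-node-name>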

We don't rely too heavily on memcached, so I could just go ahead and delete the old node pool at this point and accept the brief downtime for that one service. But I'd prefer to have a script to do this whole thing the right way, and I wouldn't know what to do at this point if these pods were doing something more important.

So, is this expected behavior somehow? Is there something about that helm chart that's making these pods refuse to be evicted? Is there another force/ignore sort of flag I need to pass to kubectl drain?

Answer

The helm chart you linked contains a PodDisruptionBudget (PDB). kubectl drain will not remove pods if it would violate a PDB (reference: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/, "How Disruption Budgets Work" section mentions this).
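
A quick way to confirm whether a PDB is what's stalling the drain (the namespace and PDB names below are placeholders) is to list the budgets and look for an ALLOWED DISRUPTIONS value of 0:

    # Show all PodDisruptionBudgets; ALLOWED DISRUPTIONS of 0 means eviction is blocked
    kubectl get pdb --all-namespaces

    # Inspect the memcached PDB in detail
    kubectl describe pdb <memcached-pdb-name> -n <namespace>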

If minAvailable on your PDB is equal to the number of replicas of your pods, you will not be able to drain the node. Given that https://github.com/kubernetes/charts/blob/master/stable/memcached/values.yaml has both set to 3, I would guess that's most likely the source of your problem. Just set your PDB's minAvailable to one less than the desired number of replicas, and drain will be able to move your pods one by one.
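
One way to apply that fix, sketched here with placeholder names and assuming the chart's default of 3 replicas (so minAvailable drops to 2), is to patch the budget in place; the corresponding value in the chart's values.yaml can also be overridden and rolled out with helm upgrade:

    # Allow one pod at a time to be evicted (PDB name and namespace are placeholders)
    kubectl patch pdb <memcached-pdb-name> -n <namespace> \
      --type merge -p '{"spec":{"minAvailable":2}}'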
