Kubernetes - Rolling update killing off old pod without bringing up new one


Question

I am currently using Deployments to manage the pods in my K8S cluster.

Some of my deployments require 2 pods/replicas, some require 3 pods/replicas, and some require just 1 pod/replica. The issue I'm having is with the one that has a single pod/replica.

My YAML file is:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: user-management-backend-deployment
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 2
  selector:
    matchLabels:
      name: user-management-backend
  template:
    metadata:
      labels:
        name: user-management-backend
    spec:
      containers:
      - name: user-management-backend
        image: proj_csdp/user-management_backend:3.1.8
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 8080
        livenessProbe:
          httpGet:
            port: 8080
            path: /user_management/health
          initialDelaySeconds: 300
          timeoutSeconds: 30
        readinessProbe:
          httpGet:
            port: 8080
            path: /user_management/health
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
          - name: nfs
            mountPath: "/vault"
      volumes:
        - name: nfs
          nfs:
            server: kube-nfs
            path: "/kubenfs/vault"
            readOnly: true

I have the old version running fine.

# kubectl get po | grep  user-management-backend-deployment
user-management-backend-deployment-3264073543-mrrvl               1/1       Running        0          4d

Now I want to update the image:

# kubectl set image deployment  user-management-backend-deployment  user-management-backend=proj_csdp/user-management_backend:3.2.0

Now, as per the RollingUpdate design, K8S should bring up the new pod while keeping the old pod working, and only once the new pod is ready to take traffic should the old pod get deleted. But what I see is that the old pod is deleted immediately, the new pod is created, and it then takes time before it starts taking traffic, meaning that I have to drop traffic.

# kubectl get po | grep  user-management-backend-deployment
user-management-backend-deployment-3264073543-l93m9               0/1       ContainerCreating   0          1s

# kubectl get po | grep  user-management-backend-deployment
user-management-backend-deployment-3264073543-l93m9               1/1       Running            0          33s

I have used maxSurge: 2 & maxUnavailable: 1, but this does not seem to be working.

Any ideas why this is not working?

Answer

It appears to be the maxUnavailable: 1; I was able to trivially reproduce your experience by setting that value, and to trivially achieve the correct behavior by setting it to maxUnavailable: 0.

Here's my "pseudo-proof" of how the scheduler arrived at the behavior you are experiencing:

Because replicas: 1, the desired state for k8s is exactly one Pod in Ready. During a RollingUpdate operation (the strategy you requested), it will create a new Pod, bringing the total to 2. But you granted k8s permission to leave one Pod in an unavailable state, and you instructed it to keep the desired number of Pods at 1. Thus, it satisfied all of those constraints: 1 Pod (the desired count) in an unavailable state, as permitted by the RollingUpdate strategy.

By setting maxUnavailable to zero, you correctly direct k8s to never let any Pod be unavailable, even if that means surging Pods above the replica count for a short time.
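Applied to the Deployment from the question, the corrected strategy block would look like the sketch below (only the strategy changes; the rest of the spec stays as posted):

```yaml
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take the only replica down before its replacement is Ready
      maxSurge: 1         # allow one extra pod above the replica count during the rollout
```

With this in place, the readinessProbe from the question does the gating: the new pod must pass its readiness check before the old pod is terminated, so traffic is never dropped during `kubectl set image`.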
