Kubernetes - Old pod not being deleted after update
Question
I am using Deployments to control my pods in my K8S cluster.
My original deployment file looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: websocket-backend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      name: websocket-backend
  template:
    metadata:
      labels:
        name: websocket-backend
    spec:
      containers:
      - name: websocket-backend
        image: armdock.se/proj/websocket_backend:3.1.4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            port: 8080
            path: /websocket/health
          initialDelaySeconds: 300
          timeoutSeconds: 30
        readinessProbe:
          httpGet:
            port: 8080
            path: /websocket/health
          initialDelaySeconds: 25
          timeoutSeconds: 5
This config works as planned.
# kubectl get po | grep websocket
websocket-backend-deployment-4243571618-mreef 1/1 Running 0 31s
websocket-backend-deployment-4243571618-qjo6q 1/1 Running 0 31s
Now I plan to do a live/rolling update of the image. The command that I am using is:
kubectl set image deployment websocket-backend-deployment websocket-backend=armdock.se/proj/websocket_backend:3.1.5
I am only updating the docker image tag. I'm expecting my pods to remain at 2 after the update. I do get 2 new pods with the new version, but one pod carrying the old version still exists.
# kubectl get po | grep websocket
websocket-backend-deployment-4243571618-qjo6q 1/1 Running 0 2m
websocket-backend-deployment-93242275-kgcmw 1/1 Running 0 51s
websocket-backend-deployment-93242275-kwmen 1/1 Running 0 51s
As you can see, 1 pod uses the old tag 3.1.4:
# kubectl describe po websocket-backend-deployment-4243571618-qjo6q | grep Image:
Image: armdock.se/proj/websocket_backend:3.1.4
The other 2 pods are on the new tag 3.1.5:
# kubectl describe po websocket-backend-deployment-93242275-kgcmw | grep Image:
Image: armdock.se/proj/websocket_backend:3.1.5
# kubectl describe po websocket-backend-deployment-93242275-kwmen | grep Image:
Image: armdock.se/proj/websocket_backend:3.1.5
Why does 1 old pod still stay there and not get deleted? Am I missing some config?
When I check the rollout command, it's just stuck on:
# kubectl rollout status deployment/websocket-backend-deployment
Waiting for rollout to finish: 1 old replicas are pending termination...
My K8S version is:
# kubectl --version
Kubernetes v1.5.2
Answer
I would suggest you set maxSurge to 0 in the RollingUpdate strategy, so that the number of pods stays at the desired count throughout the rollout. The maxSurge parameter is the maximum number of pods that can be scheduled above the desired number of pods.
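The arithmetic behind that suggestion can be sketched in a few lines of Python (an illustrative helper, not part of kubectl; the "default" values of 1 are my recollection of the extensions/v1beta1 defaults, not something stated in the question):

```python
def rollout_bounds(replicas: int, max_surge: int, max_unavailable: int) -> tuple:
    """Pod-count bounds during a RollingUpdate:
    (most pods that may exist at once, fewest that must stay available)."""
    return replicas + max_surge, replicas - max_unavailable

# extensions/v1beta1 defaulted maxSurge and maxUnavailable to 1 (to my
# recollection), which matches the 3 pods seen mid-rollout in the question:
print(rollout_bounds(2, 1, 1))  # -> (3, 1)

# With the suggested maxSurge: 0, the pod count never exceeds the 2 desired
# replicas; the old pod must terminate before its replacement is scheduled:
print(rollout_bounds(2, 0, 1))  # -> (2, 1)
```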
Example:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 2
  selector:
    matchLabels:
      name: webserver
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
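As a side note, both fields also accept percentages of the desired replica count (rounded for you by Kubernetes), so an equivalent strategy stanza could be written as the following sketch (the 50% value is illustrative, not from the original answer):

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 0          # never run more than `replicas` pods during the rollout
    maxUnavailable: 50%  # with replicas: 2, at most 1 pod may be down at a time
```

Note that maxSurge and maxUnavailable cannot both be 0, since the rollout would then have no room to make progress.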