Kubernetes pod distribution amongst nodes
Question
Is there any way to make Kubernetes distribute pods as evenly as possible? I have resource requests on all deployments, global requests, and HPA. All nodes are identical.
I just had a situation where my ASG scaled down a node and one service became completely unavailable, because all 4 of its pods were on the node that was removed.
I would like to maintain a situation where each deployment must spread its containers across at least 2 nodes.
Answer
Here I build on Anirudh's answer by adding example code.
My initial Kubernetes YAML looked like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: say-deployment
spec:
  replicas: 6
  selector:
    matchLabels:
      app: say
  template:
    metadata:
      labels:
        app: say
    spec:
      containers:
      - name: say
        image: gcr.io/hazel-champion-200108/say
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: say-service
spec:
  selector:
    app: say
  ports:
  - protocol: TCP
    port: 8080
  type: LoadBalancer
  externalIPs:
  - 192.168.0.112
At this point, the Kubernetes scheduler somehow decided that all 6 replicas should be deployed on the same node.
Then I added a requiredDuringSchedulingIgnoredDuringExecution pod anti-affinity rule to force the pods to be deployed on different nodes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: say-deployment
spec:
  replicas: 6
  selector:
    matchLabels:
      app: say
  template:
    metadata:
      labels:
        app: say
    spec:
      containers:
      - name: say
        image: gcr.io/hazel-champion-200108/say
        ports:
        - containerPort: 8080
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - say
            topologyKey: "kubernetes.io/hostname"
---
kind: Service
apiVersion: v1
metadata:
  name: say-service
spec:
  selector:
    app: say
  ports:
  - protocol: TCP
    port: 8080
  type: LoadBalancer
  externalIPs:
  - 192.168.0.112
Now every running pod is on a different node. Since I have 3 nodes and 6 pods, the other 3 pods (6 minus 3) cannot be scheduled and stay Pending. This is exactly what I asked for with requiredDuringSchedulingIgnoredDuringExecution: the scheduler would rather leave a pod Pending than violate the rule (the reason shows up in kubectl describe pod for any of the Pending pods).
kubectl get pods -o wide

NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE
say-deployment-8b46845d8-4zdw2   1/1     Running   0          24s   10.244.2.80   night
say-deployment-8b46845d8-699wg   0/1     Pending   0          24s   <none>        <none>
say-deployment-8b46845d8-7nvqp   1/1     Running   0          24s   10.244.1.72   gray
say-deployment-8b46845d8-bzw48   1/1     Running   0          24s   10.244.0.25   np3
say-deployment-8b46845d8-vwn8g   0/1     Pending   0          24s   <none>        <none>
say-deployment-8b46845d8-ws8lr   0/1     Pending   0          24s   <none>        <none>
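Side note: if the goal is an even spread without ever leaving replicas Pending, clusters on Kubernetes 1.19+ can also use topologySpreadConstraints instead of pod anti-affinity. A minimal sketch, not part of the original answer, reusing the app: say label from above:

```yaml
# Sketch only (Kubernetes 1.19+): topologySpreadConstraints in the pod
# template spec spread pods across nodes without hard scheduling failures.
spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                         # at most 1 pod difference between nodes
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway  # treat as a preference, not a requirement
        labelSelector:
          matchLabels:
            app: say
```

With whenUnsatisfiable: ScheduleAnyway the constraint behaves like a preference, so extra replicas still schedule when there are fewer nodes than pods; DoNotSchedule would reproduce the Pending behavior shown above.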
Now if I loosen this requirement with preferredDuringSchedulingIgnoredDuringExecution:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: say-deployment
spec:
  replicas: 6
  selector:
    matchLabels:
      app: say
  template:
    metadata:
      labels:
        app: say
    spec:
      containers:
      - name: say
        image: gcr.io/hazel-champion-200108/say
        ports:
        - containerPort: 8080
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - say
              topologyKey: "kubernetes.io/hostname"
---
kind: Service
apiVersion: v1
metadata:
  name: say-service
spec:
  selector:
    app: say
  ports:
  - protocol: TCP
    port: 8080
  type: LoadBalancer
  externalIPs:
  - 192.168.0.112
The first 3 pods are deployed on 3 different nodes, just like in the previous case. The remaining 3 (6 pods minus 3 nodes) are placed on various nodes according to the scheduler's internal scoring.
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE
say-deployment-57cf5fb49b-26nvl   1/1     Running   0          59s   10.244.2.81   night
say-deployment-57cf5fb49b-2wnsc   1/1     Running   0          59s   10.244.0.27   np3
say-deployment-57cf5fb49b-6v24l   1/1     Running   0          59s   10.244.1.73   gray
say-deployment-57cf5fb49b-cxkbz   1/1     Running   0          59s   10.244.0.26   np3
say-deployment-57cf5fb49b-dxpcf   1/1     Running   0          59s   10.244.1.75   gray
say-deployment-57cf5fb49b-vv98p   1/1     Running   0          59s   10.244.1.74   gray
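To guard against the original ASG scale-down outage, spreading alone is not enough: a PodDisruptionBudget can additionally keep a minimum number of pods up during voluntary disruptions such as node drains. A minimal sketch, assuming the same app: say label as above (note that a PDB only protects against drains, e.g. via cluster-autoscaler; an ASG terminating an instance without draining it bypasses the budget):

```yaml
# Sketch: keep at least 2 say pods available during voluntary disruptions.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: say-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: say
```

Combined with the preferred anti-affinity above, a drain that would drop the service below 2 available pods is blocked until replacements are scheduled elsewhere.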