How can I distribute a deployment across nodes?


Question

I have a Kubernetes deployment that looks something like this (replaced names and other things with '....'):

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "3"
    kubernetes.io/change-cause: kubectl replace deployment ....
      -f - --record
  creationTimestamp: 2016-08-20T03:46:28Z
  generation: 8
  labels:
    app: ....
  name: ....
  namespace: default
  resourceVersion: "369219"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/....
  uid: aceb2a9e-6688-11e6-b5fc-42010af000c1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ....
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ....
    spec:
      containers:
      - image: gcr.io/..../....:0.2.1
        imagePullPolicy: IfNotPresent
        name: ....
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          requests:
            cpu: "0"
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  observedGeneration: 8
  replicas: 2
  updatedReplicas: 2

The problem I'm observing is that Kubernetes places both replicas (in the deployment I've asked for two) on the same node. If that node goes down, I lose both containers and the service goes offline.

What I want Kubernetes to do is to ensure that it doesn't double up containers on the same node where the containers are the same type - this only consumes resources and doesn't provide any redundancy. I've looked through the documentation on deployments, replica sets, nodes etc. but I couldn't find any options that would let me tell Kubernetes to do this.

Is there a way to tell Kubernetes how much redundancy across nodes I want for a container?

I'm not sure labels will work; labels constrain where a pod will run so that it has access to local resources (SSDs) etc. All I want to do is ensure no downtime if a node goes offline.

Answer

I think you're looking for the Affinity/Anti-Affinity Selectors.

Affinity is for co-locating pods: "I want my website to try to schedule on the same host as my cache", for example. Anti-affinity is the opposite: "don't schedule onto the same host", according to a set of rules.
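For the situation described above, a pod anti-affinity rule in the Deployment's pod template would keep replicas of the same app off a shared node. This is a minimal sketch assuming Kubernetes 1.6+, where `affinity` is a first-class pod field; `my-app` is a placeholder for the redacted `app` label value:

```yaml
# Sketch: Deployment pod-template fragment with a hard anti-affinity rule.
# "my-app" stands in for the redacted app label value.
spec:
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never schedule two pods labeled app=my-app onto
          # the same node (nodes are distinguished by their hostname key).
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: kubernetes.io/hostname
```

Note that `requiredDuringSchedulingIgnoredDuringExecution` is a hard constraint: with two replicas and only one schedulable node, the second pod stays Pending. If you would rather spread pods when possible but still schedule them under pressure, use the soft variant, `preferredDuringSchedulingIgnoredDuringExecution`, instead.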

So for what you're doing, I would take a closer look at these two links:

https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#never-co-located-in-the-same-node

https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure

