cluster created with kops - deploying one pod per node with a DaemonSet while avoiding the master node


Problem description

I am trying to deploy one pod per node. This works fine with the DaemonSet kind when the cluster is created with kube-up. But we migrated cluster creation to kops, and with kops the master node is part of the cluster.

I noticed the master node is defined with a specific label: kubernetes.io/role=master

and with a taint: scheduler.alpha.kubernetes.io/taints: [{"key":"dedicated","value":"master","effect":"NoSchedule"}]

But that does not stop a DaemonSet from deploying a pod on it.

So I tried adding scheduler.alpha.kubernetes.io/affinity:

- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: elasticsearch-data
    namespace: ess
    annotations:
      scheduler.alpha.kubernetes.io/affinity: >
        {
          "nodeAffinity": {
            "requiredDuringSchedulingRequiredDuringExecution": {
              "nodeSelectorTerms": [
                {
                  "matchExpressions": [
                    {
                      "key": "kubernetes.io/role",
                      "operator": "NotIn",
                      "values": ["master"]
                    }
                  ]
                }
              ]
            }
          }
        }
  spec:
    selector:
      matchLabels:
        component: elasticsearch
        type: data
        provider: fabric8
    template:
      metadata:
        labels:
          component: elasticsearch
          type: data
          provider: fabric8
      spec:
        serviceAccount: elasticsearch
        serviceAccountName: elasticsearch
        containers:
          - env:
              - name: "SERVICE_DNS"
                value: "elasticsearch-cluster"
              - name: "NODE_MASTER"
                value: "false"
            image: "essearch/ess-elasticsearch:1.7.6"
            name: elasticsearch
            imagePullPolicy: Always
            ports:
              - containerPort: 9300
                name: transport
            volumeMounts:
              - mountPath: "/usr/share/elasticsearch/data"
                name: task-pv-storage
        volumes:
          - name: task-pv-storage
            persistentVolumeClaim:
              claimName: task-pv-claim
        nodeSelector:
          minion: "true"

But it does not work. Does anyone know why? My workaround for now is to use a nodeSelector and add a label to the minion-only nodes, but I would rather avoid adding a label during cluster creation: it is an extra step, and if I can avoid it, all the better :)

I changed the manifest to the following (given the answer), and I think it is right, but it does not help: a pod is still deployed on the master:

- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: elasticsearch-data
    namespace: ess
  spec:
    selector:
      matchLabels:
        component: elasticsearch
        type: data
        provider: fabric8
    template:
      metadata:
        labels:
          component: elasticsearch
          type: data
          provider: fabric8
        annotations:
          scheduler.alpha.kubernetes.io/affinity: >
            {
              "nodeAffinity": {
                "requiredDuringSchedulingRequiredDuringExecution": {
                  "nodeSelectorTerms": [
                    {
                      "matchExpressions": [
                        {
                          "key": "kubernetes.io/role",
                          "operator": "NotIn",
                          "values": ["master"]
                        }
                      ]
                    }
                  ]
                }
              }
            }
      spec:
        serviceAccount: elasticsearch
        serviceAccountName: elasticsearch
        containers:
          - env:
              - name: "SERVICE_DNS"
                value: "elasticsearch-cluster"
              - name: "NODE_MASTER"
                value: "false"
            image: "essearch/ess-elasticsearch:1.7.6"
            name: elasticsearch
            imagePullPolicy: Always
            ports:
              - containerPort: 9300
                name: transport
            volumeMounts:
              - mountPath: "/usr/share/elasticsearch/data"
                name: task-pv-storage
        volumes:
          - name: task-pv-storage
            persistentVolumeClaim:
              claimName: task-pv-claim

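One more detail worth checking (this is an assumption about the alpha affinity annotation of that era, not something confirmed in the thread): the annotation parser only implemented the requiredDuringSchedulingIgnoredDuringExecution variant; requiredDuringSchedulingRequiredDuringExecution existed only in the design docs and was never implemented, so a JSON key with that name may be silently ignored. A sketch of the same pod-template annotation with the recognized key:

```yaml
# Sketch: same annotation as above, but using the key the alpha
# parser actually recognized ("...IgnoredDuringExecution").
annotations:
  scheduler.alpha.kubernetes.io/affinity: >
    {
      "nodeAffinity": {
        "requiredDuringSchedulingIgnoredDuringExecution": {
          "nodeSelectorTerms": [
            {
              "matchExpressions": [
                {
                  "key": "kubernetes.io/role",
                  "operator": "NotIn",
                  "values": ["master"]
                }
              ]
            }
          ]
        }
      }
    }
```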
Answer

Just move the annotation into the pod template: section (under its metadata:).

Alternatively, taint the master node (then you can remove the annotation):

kubectl taint nodes nameofmaster dedicated=master:NoSchedule

I suggest reading up on taints and tolerations.
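To illustrate how the taint above interacts with pods: once the master carries dedicated=master:NoSchedule, pods without a matching toleration are kept off it, and only a pod that explicitly tolerates the taint can land there. A sketch of such a toleration in a pod spec:

```yaml
# Sketch: a pod that WOULD still be allowed onto the tainted master,
# because it tolerates dedicated=master:NoSchedule. Pods without this
# toleration (like the elasticsearch-data pods) are kept off.
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "master"
    effect: "NoSchedule"
```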
