Why isn't Kubernetes service DNS working?


Question

I have set up DNS in my Kubernetes (v1.1.2+1abf20d) system, on CoreOS/AWS, but I cannot look up services via DNS. I have tried debugging, but cannot for the life of me find out why. This is what happens when I try to look up the kubernetes service, which should always be available:

$ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf exec busybox-sleep -- nslookup kubernetes.default
Server:    10.3.0.10
Address 1: 10.3.0.10 ip-10-3-0-10.eu-central-1.compute.internal

nslookup: can't resolve 'kubernetes.default'
error: error executing remote command: Error executing command in container: Error executing in Docker Container: 1
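
Two quick checks help narrow this kind of failure down (a sketch, using the same kubeconfig as above): confirm that the kube-dns pod is running, and that the kube-dns service actually has endpoints behind 10.3.0.10.

$ kubectl --kubeconfig=/etc/kubernetes/kube.conf get pods --namespace=kube-system -l k8s-app=kube-dns
$ kubectl --kubeconfig=/etc/kubernetes/kube.conf get endpoints kube-dns --namespace=kube-system

If the pod is not Ready, or the endpoints list is empty, the resolver address handed to pods points at nothing, which would produce exactly this kind of failed lookup.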

I have installed the DNS addon according to this spec:

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v10
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v10
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v10
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v10
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd-amd64:2.2.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.12
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/kube2sky"
        - --domain=cluster.local
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local.
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 1
          timeoutSeconds: 5
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.

---

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.3.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

Why isn't DNS lookup for services working in my Kubernetes setup? Please let me know what other info I need to provide.

Solution

There were two things I needed to do:

  1. Configure kube2sky via kubeconfig, so that it's properly configured for TLS.
  2. Configure kube-proxy via kubeconfig, so that it's properly configured for TLS and finds the master node.
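
A quick way to confirm that kube2sky is the culprit (a sketch; substitute the actual pod name reported by the first command) is to read its container log and look for TLS or connection errors against the apiserver:

$ kubectl --kubeconfig=/etc/kubernetes/kube.conf --namespace=kube-system get pods -l k8s-app=kube-dns
$ kubectl --kubeconfig=/etc/kubernetes/kube.conf --namespace=kube-system logs <kube-dns-pod> -c kube2sky

Without valid client credentials, kube2sky cannot watch the apiserver for services, so SkyDNS never receives any records to serve. That is why lookups fail even though the pod's resolver address (10.3.0.10) is correct.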

/etc/kubernetes/kube.conf on master node

apiVersion: v1
kind: Config
clusters:
- name: kube
  cluster:
    server: https://127.0.0.1:443
    certificate-authority: /etc/ssl/etcd/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/ssl/etcd/master-client.pem
    client-key: /etc/ssl/etcd/master-client-key.pem
contexts:
- context:
    cluster: kube
    user: kubelet
  name: kubelet-context
current-context: kubelet-context

/etc/kubernetes/kube.conf on worker node

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/ssl/etcd/ca.pem
    server: https://<master IP>:443
users:
- name: kubelet
  user:
    client-certificate: /etc/ssl/etcd/worker.pem
    client-key: /etc/ssl/etcd/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
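
The worker-side kubeconfig can be sanity-checked from that node before restarting anything (a minimal check, assuming kubectl is available there):

$ kubectl --kubeconfig=/etc/kubernetes/kube.conf get nodes

A certificate or connection error here reproduces the same failure that kube2sky and kube-proxy run into when they are misconfigured.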

dns-addon.yaml (install this on master)

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd-amd64:2.2.1
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting
          # it.
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.14
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting
          # it.
          limits:
            cpu: 100m
            # Kube2sky watches all pods.
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
        volumeMounts:
        - name: kubernetes-etc
          mountPath: /etc/kubernetes
          readOnly: true
        - name: etcd-ssl
          mountPath: /etc/ssl/etcd
          readOnly: true
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain=cluster.local.
        - --kubecfg-file=/etc/kubernetes/kube.conf
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting
          # it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      - name: kubernetes-etc
        hostPath:
          path: /etc/kubernetes
      - name: etcd-ssl
        hostPath:
          path: /etc/ssl/etcd
      dnsPolicy: Default  # Don't use cluster DNS.
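
Install it on the master and wait for all four containers (etcd, kube2sky, skydns, healthz) to come up; a sketch:

$ kubectl --kubeconfig=/etc/kubernetes/kube.conf create -f dns-addon.yaml
$ kubectl --kubeconfig=/etc/kubernetes/kube.conf get pods --namespace=kube-system -l k8s-app=kube-dns

The pod should eventually report 4/4 Ready. kube2sky's readiness probe only passes once it can reach the master service, so a pod stuck at 3/4 usually points back at the kubeconfig.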

/etc/kubernetes/manifests/kube-proxy.yaml on master node

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: gcr.io/google_containers/hyperkube:v1.1.2
    command:
    - /hyperkube
    - proxy
    - --master=https://127.0.0.1:443
    - --proxy-mode=iptables
    - --kubeconfig=/etc/kubernetes/kube.conf
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
    - mountPath: /etc/kubernetes
      name: kubernetes
      readOnly: true
    - mountPath: /etc/ssl/etcd
      name: kubernetes-certs
      readOnly: true
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
  - hostPath:
      path: /etc/kubernetes
    name: kubernetes
  - hostPath:
      path: /etc/ssl/etcd
    name: kubernetes-certs
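
Once kube-proxy is running in iptables mode, the kube-dns service rules should be visible on the node; one way to spot-check (a sketch; the KUBE-SERVICES chain is the one created by the iptables proxier):

$ iptables -t nat -L KUBE-SERVICES -n | grep 10.3.0.10

No matching rules means kube-proxy is not syncing services from the master, which again points at its kubeconfig.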

/etc/kubernetes/manifests/kube-proxy.yaml on worker node

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: gcr.io/google_containers/hyperkube:v1.1.2
    command:
    - /hyperkube
    - proxy
    - --kubeconfig=/etc/kubernetes/kube.conf
    - --proxy-mode=iptables
    - --v=2
    securityContext:
      privileged: true
    volumeMounts:
      - mountPath: /etc/ssl/certs
        name: "ssl-certs"
      - mountPath: /etc/kubernetes/kube.conf
        name: "kubeconfig"
        readOnly: true
      - mountPath: /etc/ssl/etcd
        name: "etc-kube-ssl"
        readOnly: true
  volumes:
    - name: "ssl-certs"
      hostPath:
        path: "/usr/share/ca-certificates"
    - name: "kubeconfig"
      hostPath:
        path: "/etc/kubernetes/kube.conf"
    - name: "etc-kube-ssl"
      hostPath:
        path: "/etc/ssl/etcd"
