kubedns container failed to be created with the skydns-rc.yaml.base file


Problem description

I use the skydns-rc.yaml.base file (/kubernetes-release-1.3/cluster/addons/dns/sky..) to create the k8s DNS service, but the kubedns container always fails to be created.

I made the following replacements (a sketch of the affected lines after editing follows this list):

  1. namespace: kube-system replaced by namespace: default
  2. __PILLAR__DNS__REPLICAS__ replaced by 1
  3. __PILLAR__DNS__DOMAIN__ replaced by cluster.local
  4. __PILLAR__FEDERATIONS__DOMAIN__MAP__ deleted
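
The edited lines should end up as in this minimal sketch (the values are taken from the list above; everything else stays as in the base file):

    metadata:
      name: kube-dns-v18
      namespace: default             # was: namespace: kube-system
    spec:
      replicas: 1                    # was: __PILLAR__DNS__REPLICAS__
      ...
        args:
        # command = "/kube-dns"
        - --domain=cluster.local.    # was: --domain=__PILLAR__DNS__DOMAIN__.
        - --dns-port=10053
        # (the __PILLAR__FEDERATIONS__DOMAIN__MAP__ line is deleted entirely)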


The edited element info and the whole file are shown below:

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v18
  namespace: default
  labels:
    k8s-app: kube-dns
    version: v18
    kubernetes.io/cluster-service: "true"
spec:
  replicas: __PILLAR__DNS__REPLICAS__
  selector:
    k8s-app: kube-dns
    version: v18
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v18
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.6
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube-dns"
        - --domain=__PILLAR__DNS__DOMAIN__.
        - --dns-port=10053
        __PILLAR__FEDERATIONS__DOMAIN__MAP__
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1:10053 >/dev/null
        - -port=8080
        - -quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.

Is there anything wrong with the information above?

Additional information:

$ kubectl describe pod kube-dns-v18-u7jgt
Name: kube-dns-v18-u7jgt
Namespace: default
Node: centos-cjw-minion1/10.139.4.195
Start Time: Mon, 18 Jul 2016 19:31:48 +0800
Labels: k8s-app=kube-dns,kubernetes.io/cluster-service=true,version=v18
Status: Running
IP: 172.17.0.4
Controllers: ReplicationController/kube-dns-v18
Containers:
  kubedns:
    Container ID:   docker://5f97e1d7185e327ac3cd5415c79b1b51da1987d8946fb243ee1758cdc4d53d29
    Image:          iaasfree/kubedns-amd64:1.5
    Image ID:       docker://sha256:a1490b272781a9921ba216778e741943e9b866114dae7e7e8980daebbc5ba7ed
    Ports:          10053/UDP, 10053/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
    QoS Tier:
      memory:       Burstable
      cpu:          Guaranteed
    Limits:
      cpu:          100m
      memory:       200Mi
    Requests:
      cpu:          100m
      memory:       100Mi
    State:          Running
      Started:      Mon, 18 Jul 2016 19:36:02 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Mon, 18 Jul 2016 19:34:52 +0800
      Finished:     Mon, 18 Jul 2016 19:35:59 +0800
    Ready:          False
    Restart Count:  3
    Liveness:       http-get http://:8080/healthz delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:      http-get http://:8081/readiness delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment Variables:
  dnsmasq:
    Container ID:   docker://75ef5bc18dfe196438956c42f64a2e2d6fd408329408704f32534ce7b9252663
    Image:          iaasfree/kube-dnsmasq-amd64:1.3
    Image ID:       docker://sha256:8cb0646c9e984cf510ca70704154bee2f2c51cfb2e776f4357c52c1d17c2b741
    Ports:          53/UDP, 53/TCP
    Args:
      --cache-size=1000
      --no-resolv
      --server=127.0.0.1#10053
    QoS Tier:
      cpu:          BestEffort
      memory:       BestEffort
    State:          Running
      Started:      Mon, 18 Jul 2016 19:31:55 +0800
    Ready:          True
    Restart Count:  0
    Environment Variables:
  healthz:
    Container ID:   docker://e11626508ecd5b2cfae3e1eaa3284d75dae4160c113d7f28ce97cbd0185f032d
    Image:          iaasfree/exechealthz-amd64:1.0
    Image ID:       docker://sha256:f3b98b5b347af3254c82e3a0090cd324daf703970f3bb62ba8005020ddf5a156
    Port:           8080/TCP
    Args:
      -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
      -port=8080
      -quiet
    QoS Tier:
      cpu:          Guaranteed
      memory:       Guaranteed
    Limits:
      memory:       20Mi
      cpu:          10m
    Requests:
      cpu:          10m
      memory:       20Mi
    State:          Running
      Started:      Mon, 18 Jul 2016 19:32:12 +0800
    Ready:          True
    Restart Count:  0
    Environment Variables:
Conditions:
  Type      Status
  Ready     False
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message

5m 5m 1 {default-scheduler } Normal Scheduled Successfully assigned kube-dns-v18-u7jgt to centos-cjw-minion1
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{kubedns} Normal Created Created container with docker id 5814904f6e09
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{dnsmasq} Normal Pulled Container image "iaasfree/kube-dnsmasq-amd64:1.3" already present on machine
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{kubedns} Normal Started Started container with docker id 5814904f6e09
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{dnsmasq} Normal Created Created container with docker id 75ef5bc18dfe
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{dnsmasq} Normal Started Started container with docker id 75ef5bc18dfe
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{healthz} Normal Pulled Container image "iaasfree/exechealthz-amd64:1.0" already present on machine
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{healthz} Normal Created Created container with docker id e11626508ecd
4m 4m 1 {kubelet centos-cjw-minion1} spec.containers{healthz} Normal Started Started container with docker id e11626508ecd
3m 3m 1 {kubelet centos-cjw-minion1} spec.containers{kubedns} Normal Killing Killing container with docker id 5814904f6e09: pod "kube-dns-v18-u7jgt_default(370b6791-4cdb-11e6-80f0-fa163ebb45ec)" container "kubedns" is unhealthy, it will be killed and re-created.
3m 3m 1 {kubelet centos-cjw-minion1} spec.containers{kubedns} Normal Created Created container with docker id 32945bc72e9b
3m 3m 1 {kubelet centos-cjw-minion1} spec.containers{kubedns} Normal Started Started container with docker id 32945bc72e9b
2m 2m 1 {kubelet centos-cjw-minion1} spec.containers{kubedns} Normal Killing Killing container with docker id 32945bc72e9b: pod "kube-dns-v18-u7jgt_default(370b6791-4cdb-11e6-80f0-fa163ebb45ec)" container "kubedns" is unhealthy, it will be killed and re-created.

Recommended answer

This is because your DNS containers cannot contact the Kubernetes API server on the master. If you edit the YAML file to include the following extra argument, replacing __KUBE_MASTER_URL__ with the correct value for your cluster, something like http://10.1.2.3:8080, then it should work:

    args:
    # command = "/kube-dns"
    - --domain=__PILLAR__DNS__DOMAIN__.
    - --dns-port=10053
    - --kube-master-url=__KUBE_MASTER_URL__
    __PILLAR__FEDERATIONS__DOMAIN__MAP__
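
For illustration, here is a minimal sketch of the same args block with every placeholder filled in, reusing the example URL mentioned above (it is only an example; point --kube-master-url at your own API server):

    args:
    # command = "/kube-dns"
    - --domain=cluster.local.
    - --dns-port=10053
    # http://10.1.2.3:8080 is the example value from above, not a real address
    - --kube-master-url=http://10.1.2.3:8080

Once kubedns can reach the API server, its health checks should start passing, so the kubelet should stop killing and re-creating the container as seen in the events above.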

