kubelet does not have ClusterDNS IP configured in MicroK8s

Problem description

I'm running this on Ubuntu.

I'm trying to run a simple hello world program, but I get this error when the pod is created:

kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy

Here is the deployment.yaml file I'm trying to apply:

apiVersion: v1
kind: Service
metadata:
  name: grpc-hello
spec:
  ports:
  - port: 80
    targetPort: 9000
    protocol: TCP
    name: http
  selector:
    app: grpc-hello
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-hello
  template:
    metadata:
      labels:
        app: grpc-hello
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--backend=grpc://127.0.0.1:50051",
          "--service=hellogrpc.endpoints.octa-test-123.cloud.goog",
          "--rollout_strategy=managed",
        ]
        ports:
          - containerPort: 9000
      - name: python-grpc-hello
        image: gcr.io/octa-test-123/python-grpc-hello:1.0
        ports:
          - containerPort: 50051

Here is what I get when I describe the pod:

Events:
  Type     Reason             Age                From                   Message
  ----     ------             ----               ----                   -------
  Normal   Scheduled          31s                default-scheduler      Successfully assigned default/grpc-hello-66869cf9fb-kpr69 to azeem-ubuntu
  Normal   Started            30s                kubelet, azeem-ubuntu  Started container python-grpc-hello
  Normal   Pulled             30s                kubelet, azeem-ubuntu  Container image "gcr.io/octa-test-123/python-grpc-hello:1.0" already present on machine
  Normal   Created            30s                kubelet, azeem-ubuntu  Created container python-grpc-hello
  Normal   Pulled             12s (x3 over 31s)  kubelet, azeem-ubuntu  Container image "gcr.io/endpoints-release/endpoints-runtime:1" already present on machine
  Normal   Created            12s (x3 over 31s)  kubelet, azeem-ubuntu  Created container esp
  Normal   Started            12s (x3 over 30s)  kubelet, azeem-ubuntu  Started container esp
  Warning  MissingClusterDNS  8s (x10 over 31s)  kubelet, azeem-ubuntu  pod: "grpc-hello-66869cf9fb-kpr69_default(19c5a870-fcf5-415c-bcb6-dedfc11f936c)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
  Warning  BackOff            8s (x2 over 23s)   kubelet, azeem-ubuntu  Back-off restarting failed container

I searched a lot about this and found some answers, but none of them worked for me. I also created kube-dns for this, but I don't know why it still isn't working. The kube-dns pod is running in the kube-system namespace:

NAME                       READY   STATUS    RESTARTS   AGE
kube-dns-6dbd676f7-dfbjq   3/3     Running   0          22m
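
A point worth checking here: the MissingClusterDNS warning is produced by the kubelet from its own startup configuration, not from cluster state, so a kube-dns Service alone cannot clear it. A quick way to inspect what the kubelet was actually given (the path below assumes MicroK8s's default snap layout and may differ on other setups):

```shell
# The kubelet only uses cluster DNS if it was started with the matching
# flags; on MicroK8s these live in a snap-managed arguments file:
grep -E 'cluster-(dns|domain)' /var/snap/microk8s/current/args/kubelet \
  || echo "kubelet has no cluster DNS configured"
```

If the grep prints nothing, the kubelet will keep falling back to the "Default" policy no matter what DNS pods are running.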

Here is what I applied to create kube-dns:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.152.183.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  upstreamNameservers: |-
    ["8.8.8.8", "8.8.4.4"]
# Why set upstream ns: https://github.com/kubernetes/minikube/issues/2027
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google-containers/k8s-dns-kube-dns:1.15.8
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google-containers/k8s-dns-dnsmasq-nanny:1.15.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google-containers/k8s-dns-sidecar:1.15.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

Please let me know what I'm missing.

Answer

You have not specified how you deployed kube-dns, but with MicroK8s the recommended DNS is CoreDNS. You should not deploy kube-dns or CoreDNS yourself; instead, enable DNS with the command microk8s.enable dns, which deploys CoreDNS and configures the kubelet to use it.
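
A sketch of the fix and its verification (the addon command is MicroK8s's documented workflow; the kubelet args path assumes a default snap install):

```shell
# Enable the DNS addon; this deploys CoreDNS and rewrites the kubelet
# arguments so --cluster-dns / --cluster-domain point at it:
microk8s.enable dns

# Confirm the kubelet picked up the cluster DNS settings:
grep -E 'cluster-(dns|domain)' /var/snap/microk8s/current/args/kubelet

# Verify DNS actually resolves from inside a pod:
microk8s.kubectl run dns-test --rm -it --restart=Never \
  --image=busybox:1.28 -- nslookup kubernetes.default
```

After enabling the addon, delete the hand-rolled kube-dns resources and restart the grpc-hello pod so it is created with a working ClusterFirst policy.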
