coredns pods have CrashLoopBackOff or Error state

Problem description

I'm trying to set up the Kubernetes master by issuing:

kubeadm init --pod-network-cidr=192.168.0.0/16

  1. immediately followed by: installing a Pod network add-on (Calico)
  2. followed by: master isolation (a sketch of both steps is shown below)
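
For reference, those two steps from the kubeadm guide typically boil down to commands like these (a sketch; the Calico manifest URL is version-specific, so treat it as an assumption and take the real one from the Calico docs matching your cluster):

# Pod network add-on: apply the Calico manifest (depending on the Calico
# version, an RBAC manifest may need to be applied first)
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

# Master isolation: allow pods to be scheduled on the control-plane node
# (relevant for single-node setups)
kubectl taint nodes --all node-role.kubernetes.io/master-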

Issue: coredns pods have CrashLoopBackOff or Error state:

# kubectl get pods -n kube-system
NAME                                       READY   STATUS             RESTARTS   AGE
calico-node-lflwx                          2/2     Running            0          2d
coredns-576cbf47c7-nm7gc                   0/1     CrashLoopBackOff   69         2d
coredns-576cbf47c7-nwcnx                   0/1     CrashLoopBackOff   69         2d
etcd-suey.nknwn.local                      1/1     Running            0          2d
kube-apiserver-suey.nknwn.local            1/1     Running            0          2d
kube-controller-manager-suey.nknwn.local   1/1     Running            0          2d
kube-proxy-xkgdr                           1/1     Running            0          2d
kube-scheduler-suey.nknwn.local            1/1     Running            0          2d
# 

I tried Troubleshooting kubeadm - Kubernetes; however, my node isn't running SELinux and my Docker is up to date.

# docker --version
Docker version 18.06.1-ce, build e68fc7a
# 

kubectl describe:

# kubectl -n kube-system describe pod coredns-576cbf47c7-nwcnx 
Name:               coredns-576cbf47c7-nwcnx
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               suey.nknwn.local/192.168.86.81
Start Time:         Sun, 28 Oct 2018 22:39:46 -0400
Labels:             k8s-app=kube-dns
                    pod-template-hash=576cbf47c7
Annotations:        cni.projectcalico.org/podIP: 192.168.0.30/32
Status:             Running
IP:                 192.168.0.30
Controlled By:      ReplicaSet/coredns-576cbf47c7
Containers:
  coredns:
    Container ID:  docker://ec65b8f40c38987961e9ed099dfa2e8bb35699a7f370a2cda0e0d522a0b05e79
    Image:         k8s.gcr.io/coredns:1.2.2
    Image ID:      docker-pullable://k8s.gcr.io/coredns@sha256:3e2be1cec87aca0b74b7668bbe8c02964a95a402e45ceb51b2252629d608d03a
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Wed, 31 Oct 2018 23:28:58 -0400
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 31 Oct 2018 23:21:35 -0400
      Finished:     Wed, 31 Oct 2018 23:23:54 -0400
    Ready:          True
    Restart Count:  103
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-xvq8b (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-xvq8b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-xvq8b
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From                       Message
  ----     ------     ----                    ----                       -------
  Normal   Killing    54m (x10 over 4h19m)    kubelet, suey.nknwn.local  Killing container with id docker://coredns:Container failed liveness probe.. Container will be killed and recreated.
  Warning  Unhealthy  9m56s (x92 over 4h20m)  kubelet, suey.nknwn.local  Liveness probe failed: HTTP probe failed with statuscode: 503
  Warning  BackOff    5m4s (x173 over 4h10m)  kubelet, suey.nknwn.local  Back-off restarting failed container
# kubectl -n kube-system describe pod coredns-576cbf47c7-nm7gc 
Name:               coredns-576cbf47c7-nm7gc
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               suey.nknwn.local/192.168.86.81
Start Time:         Sun, 28 Oct 2018 22:39:46 -0400
Labels:             k8s-app=kube-dns
                    pod-template-hash=576cbf47c7
Annotations:        cni.projectcalico.org/podIP: 192.168.0.31/32
Status:             Running
IP:                 192.168.0.31
Controlled By:      ReplicaSet/coredns-576cbf47c7
Containers:
  coredns:
    Container ID:  docker://0f2db8d89a4c439763e7293698d6a027a109bf556b806d232093300952a84359
    Image:         k8s.gcr.io/coredns:1.2.2
    Image ID:      docker-pullable://k8s.gcr.io/coredns@sha256:3e2be1cec87aca0b74b7668bbe8c02964a95a402e45ceb51b2252629d608d03a
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Wed, 31 Oct 2018 23:29:11 -0400
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 31 Oct 2018 23:21:58 -0400
      Finished:     Wed, 31 Oct 2018 23:24:08 -0400
    Ready:          True
    Restart Count:  102
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-xvq8b (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-xvq8b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-xvq8b
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From                       Message
  ----     ------     ----                    ----                       -------
  Normal   Killing    44m (x12 over 4h18m)    kubelet, suey.nknwn.local  Killing container with id docker://coredns:Container failed liveness probe.. Container will be killed and recreated.
  Warning  BackOff    4m58s (x170 over 4h9m)  kubelet, suey.nknwn.local  Back-off restarting failed container
  Warning  Unhealthy  8s (x102 over 4h19m)    kubelet, suey.nknwn.local  Liveness probe failed: HTTP probe failed with statuscode: 503
# 

kubectl logs:

# kubectl -n kube-system logs -f coredns-576cbf47c7-nm7gc 
E1101 03:31:58.974836       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1101 03:31:58.974836       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1101 03:31:58.974857       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1101 03:32:29.975493       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1101 03:32:29.976732       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1101 03:32:29.977788       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1101 03:33:00.976164       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1101 03:33:00.977415       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1101 03:33:00.978332       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
2018/11/01 03:33:08 [INFO] SIGTERM: Shutting down servers then terminating
E1101 03:33:31.976864       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:348: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1101 03:33:31.978080       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:355: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1101 03:33:31.979156       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:350: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
# 

# kubectl -n kube-system log -f coredns-576cbf47c7-gqdgd
.:53
2018/11/05 04:04:13 [INFO] CoreDNS-1.2.2
2018/11/05 04:04:13 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2018/11/05 04:04:13 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
2018/11/05 04:04:19 [FATAL] plugin/loop: Seen "HINFO IN 3597544515206064936.6415437575707023337." more than twice, loop detected
# kubectl -n kube-system log -f coredns-576cbf47c7-hhmws
.:53
2018/11/05 04:04:18 [INFO] CoreDNS-1.2.2
2018/11/05 04:04:18 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2018/11/05 04:04:18 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
2018/11/05 04:04:24 [FATAL] plugin/loop: Seen "HINFO IN 6900627972087569316.7905576541070882081." more than twice, loop detected
# 

describe (apiserver):

# kubectl -n kube-system describe pod kube-apiserver-suey.nknwn.local
Name:               kube-apiserver-suey.nknwn.local
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               suey.nknwn.local/192.168.87.20
Start Time:         Fri, 02 Nov 2018 00:28:44 -0400
Labels:             component=kube-apiserver
                    tier=control-plane
Annotations:        kubernetes.io/config.hash: 2433a531afe72165364aace3b746ea4c
                    kubernetes.io/config.mirror: 2433a531afe72165364aace3b746ea4c
                    kubernetes.io/config.seen: 2018-11-02T00:28:43.795663261-04:00
                    kubernetes.io/config.source: file
                    scheduler.alpha.kubernetes.io/critical-pod: 
Status:             Running
IP:                 192.168.87.20
Containers:
  kube-apiserver:
    Container ID:  docker://659456385a1a859f078d36f4d1b91db9143d228b3bc5b3947a09460a39ce41fc
    Image:         k8s.gcr.io/kube-apiserver:v1.12.2
    Image ID:      docker-pullable://k8s.gcr.io/kube-apiserver@sha256:094929baf3a7681945d83a7654b3248e586b20506e28526121f50eb359cee44f
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-apiserver
      --authorization-mode=Node,RBAC
      --advertise-address=192.168.87.20
      --allow-privileged=true
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --enable-admission-plugins=NodeRestriction
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
      --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
      --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
      --etcd-servers=https://127.0.0.1:2379
      --insecure-port=0
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=6443
      --service-account-key-file=/etc/kubernetes/pki/sa.pub
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    State:          Running
      Started:      Sun, 04 Nov 2018 22:57:27 -0500
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 04 Nov 2018 20:12:06 -0500
      Finished:     Sun, 04 Nov 2018 22:55:24 -0500
    Ready:          True
    Restart Count:  2
    Requests:
      cpu:        250m
    Liveness:     http-get https://192.168.87.20:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>
# 

syslog (host):

Nov 4 22:59:36 suey kubelet[1234]: E1104 22:59:36.139538 1234 pod_workers.go:186] Error syncing pod d8146b7e-de57-11e8-a1e2-ec8eb57434c8 ("coredns-576cbf47c7-hhmws_kube-system(d8146b7e-de57-11e8-a1e2-ec8eb57434c8)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 40s restarting failed container=coredns pod=coredns-576cbf47c7-hhmws_kube-system(d8146b7e-de57-11e8-a1e2-ec8eb57434c8)"

Please advise.

Answer

This error

[FATAL] plugin/loop: Seen "HINFO IN 6900627972087569316.7905576541070882081." more than twice, loop detected

is caused when CoreDNS detects a loop in the resolve configuration, and it is the intended behavior. You are hitting this issue:

https://github.com/kubernetes/kubeadm/issues/1162

https://github.com/coredns/coredns/issues/2087

Hacky solution: disable the CoreDNS loop detection

Edit the CoreDNS configmap:

kubectl -n kube-system edit configmap coredns

Remove or comment out the line with loop, save and exit.
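
For orientation, a kubeadm-generated Corefile from this era looks roughly like the following (a sketch; your ConfigMap may differ slightly), with the loop line marked:

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    # loop        <- remove or comment out this line
    reload
    loadbalance
}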

Then remove the CoreDNS pods, so new ones can be created with the new config:

kubectl -n kube-system delete pod -l k8s-app=kube-dns

After that, everything should be fine.

Preferred solution: remove the loop in the DNS configuration

First, check if you are using systemd-resolved. If you are running Ubuntu 18.04, it is probably the case.

systemctl list-unit-files | grep enabled | grep systemd-resolved
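
If it is enabled, you should see output along these lines (illustrative):

systemd-resolved.service                   enabled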

If it is, check which resolv.conf file your cluster is using as reference:

ps auxww | grep kubelet

You might see a line like:

/usr/bin/kubelet ... --resolv-conf=/run/systemd/resolve/resolv.conf

The important part is --resolv-conf - this is how we figure out whether the systemd-managed resolv.conf is being used or not.
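
If the flag does not show up on the kubelet command line, the kubelet may be reading it from its config file instead; on a kubeadm setup you can check the resolvConf field there (the path assumes the kubeadm default):

grep -i resolvConf /var/lib/kubelet/config.yaml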

If it is the resolv.conf of systemd, do the following:

Check the content of /run/systemd/resolve/resolv.conf to see if there is a record like:

nameserver 127.0.0.1

If there is 127.0.0.1, it is the one causing the loop.

To get rid of it, do not edit that file itself; instead, fix the places from which it is generated.

Check all files under /etc/systemd/network and if you find a record like

DNS=127.0.0.1

delete that record. Also check /etc/systemd/resolved.conf and do the same if needed. Make sure you have at least one or two DNS servers configured, such as

DNS=1.1.1.1 1.0.0.1
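
In /etc/systemd/resolved.conf, that setting lives under the [Resolve] section; an illustrative end state (the upstream addresses are just examples):

[Resolve]
DNS=1.1.1.1 1.0.0.1
#DNS=127.0.0.1    <- a loopback entry like this is what causes the loop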

After doing all that, restart the systemd services to put your changes into effect:

systemctl restart systemd-networkd systemd-resolved

After that, verify that DNS=127.0.0.1 is no longer in the resolv.conf file:

cat /run/systemd/resolve/resolv.conf

Finally, trigger re-creation of the DNS pods:

kubectl -n kube-system delete pod -l k8s-app=kube-dns
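
Once the new pods come up, one way to verify the fix (the pod label matches the one used above; the busybox image tag and pod name are illustrative) is to check the pods and run a test lookup:

kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl run -it --rm --restart=Never --image=busybox:1.28 dnstest -- nslookup kubernetes.default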

Summary: the solution involves getting rid of what looks like a DNS lookup loop from the host DNS configuration. The steps vary between different resolv.conf managers/implementations.
