kube-dns pod and service stay up for a while and then suddenly die


Problem description

I tried to set up the Kubernetes DNS addon based on the Ansible repo: https://github.com/kubernetes/contrib/tree/master/ansible/roles/kubernetes-addons

After running the playbook, I could find neither the DNS pod nor the service. After some reading (https://github.com/kubernetes/contrib/issues/886#issuecomment-216741889), it seemed I needed to run the rc.yml and the svc.yml manually, so that's what I did.
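The manual step described above amounts to applying a Service manifest like the sketch below. Note the cluster IP and labels here are assumptions based on typical kube-dns addon templates of that era, not values taken from the question; the clusterIP must match the `--cluster-dns` flag passed to the kubelet.

```yaml
# Hypothetical svc.yml sketch for the kube-dns Service.
# 10.254.0.10 is an assumed cluster IP -- it must agree with
# the kubelet's --cluster-dns setting on every node.
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
```

Both files would then be applied with something like `kubectl create -f rc.yml -f svc.yml --namespace=kube-system`.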

Unfortunately, the DNS pod and service stay up for a while and then suddenly terminate.

I checked some logs before the pod went down.

etcd logs

# kubectl logs kube-dns-v8-ujfqn --namespace=kube-system -c etcd

2016/11/21 13:05:04 etcd: listening for peers on http://localhost:2380
2016/11/21 13:05:04 etcd: listening for peers on http://localhost:7001
2016/11/21 13:05:04 etcd: listening for client requests on http://127.0.0.1:2379
2016/11/21 13:05:04 etcd: listening for client requests on http://127.0.0.1:4001
2016/11/21 13:05:04 etcdserver: datadir is valid for the 2.0.1 format
2016/11/21 13:05:04 etcdserver: name = default
2016/11/21 13:05:04 etcdserver: data dir = /var/etcd/data
2016/11/21 13:05:04 etcdserver: member dir = /var/etcd/data/member
2016/11/21 13:05:04 etcdserver: heartbeat = 100ms
2016/11/21 13:05:04 etcdserver: election = 1000ms
2016/11/21 13:05:04 etcdserver: snapshot count = 10000
2016/11/21 13:05:04 etcdserver: advertise client URLs = http://127.0.0.1:2379,http://127.0.0.1:4001
2016/11/21 13:05:04 etcdserver: initial advertise peer URLs =   http://localhost:2380,http://localhost:7001
2016/11/21 13:05:04 etcdserver: initial cluster = default=http://localhost:2380,default=http://localhost:7001
2016/11/21 13:05:04 etcdserver: start member 6a5871dbdd12c17c in cluster f68652439e3f8f2a
2016/11/21 13:05:04 raft: 6a5871dbdd12c17c became follower at term 0
2016/11/21 13:05:04 raft: newRaft 6a5871dbdd12c17c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2016/11/21 13:05:04 raft: 6a5871dbdd12c17c became follower at term 1
2016/11/21 13:05:04 etcdserver: added local member 6a5871dbdd12c17c [http://localhost:2380 http://localhost:7001] to cluster f68652439e3f8f2a
2016/11/21 13:05:06 raft: 6a5871dbdd12c17c is starting a new election at term 1
2016/11/21 13:05:06 raft: 6a5871dbdd12c17c became candidate at term 2
2016/11/21 13:05:06 raft: 6a5871dbdd12c17c received vote from 6a5871dbdd12c17c at term 2
2016/11/21 13:05:06 raft: 6a5871dbdd12c17c became leader at term 2
2016/11/21 13:05:06 raft.node: 6a5871dbdd12c17c elected leader 6a5871dbdd12c17c at term 2
2016/11/21 13:05:06 etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379 http://127.0.0.1:4001]} to cluster f68652439e3f8f2a

skydns logs

# kubectl logs kube-dns-v8-ujfqn --namespace=kube-system -c skydns

2016/11/21 13:07:14 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns/config) [10]
2016/11/21 13:07:14 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0]
2016/11/21 13:07:14 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0]

healthz logs

# kubectl logs kube-dns-v8-ujfqn --namespace=kube-system -c healthz

2016/11/21 13:05:58 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:05:59 Client ip 12.16.64.1:45631 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:00 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:02 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:04 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:06 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:08 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:08 Client ip 12.16.64.1:45652 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:10 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:12 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:14 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:16 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:18 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:18 Client ip 12.16.64.1:45673 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:20 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:22 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:24 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:26 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:28 Worker running nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
2016/11/21 13:06:28 Client ip 12.16.64.1:45693 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
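The healthz container above is the target of the pod's liveness probe: if the `nslookup` it runs starts failing, `/healthz` stops returning 200 and the kubelet kills and restarts the pod, which is one common reason kube-dns appears to "suddenly die". A sketch of what that probe looks like in the v11 rc.yml is below; the port and thresholds are assumptions, so check them against the actual manifest.

```yaml
# Hypothetical livenessProbe block from the kube-dns rc.yml.
# Port and timing values are assumptions -- verify against your manifest.
# Repeated nslookup failures make /healthz non-200, and the kubelet
# then restarts the container.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 60
  timeoutSeconds: 5
```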

kube2sky logs

Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: I1123 07:09:26.213227       1 kube2sky.go:529] Using https://10.254.0.1:443 for kubernetes master
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: I1123 07:09:26.213279       1 kube2sky.go:530] Using kubernetes API <nil>
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: I1123 07:09:26.214181       1 kube2sky.go:598] Waiting for service: default/kubernetes
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: 2016/11/23 07:09:26 Worker running nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
Nov 23 10:09:26 ctc-cicd2 docker-current[25416]: I1123 07:09:26.508032       1 kube2sky.go:660] Successfully added DNS record for Kubernetes service.

What am I doing wrong?

Answer

What version of Kubernetes and of the DNS containers are you using? I see the playbook uses v11. I had similar issues with v11, and I have now been running kube-dns v19 for a month without running into trouble.
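Moving from v11 to the v19 manifest means replacing the four-container pod (etcd, kube2sky, skydns, healthz) with the consolidated kubedns/dnsmasq layout, which no longer embeds etcd at all. The container list below is a sketch only; the image names and tags are assumptions based on the Kubernetes 1.4-era addon templates and should be verified against the manifests in the kubernetes repo.

```yaml
# Hypothetical container section of a kube-dns v19 replication controller.
# Image tags are assumptions -- check the upstream skydns-rc.yaml template.
containers:
- name: kubedns
  image: gcr.io/google_containers/kubedns-amd64:1.8
  args:
  - --domain=cluster.local.
  - --dns-port=10053          # kubedns serves DNS internally on 10053
- name: dnsmasq
  image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4
  args:
  - --cache-size=1000
  - --server=127.0.0.1#10053  # dnsmasq fronts kubedns on port 53
- name: healthz
  image: gcr.io/google_containers/exechealthz-amd64:1.2
```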

