How to fix weave-net CrashLoopBackOff for the second node?


Problem description

I have two VM nodes. Both can see each other either by hostname (through /etc/hosts) or by IP address. One has been provisioned with kubeadm as a master, the other as a worker node. Following the instructions (http://kubernetes.io/docs/getting-started-guides/kubeadm/) I added weave-net. The list of pods looks like the following:

vagrant@vm-master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
kube-system   etcd-vm-master                          1/1       Running            0          3m
kube-system   kube-apiserver-vm-master                1/1       Running            0          5m
kube-system   kube-controller-manager-vm-master       1/1       Running            0          4m
kube-system   kube-discovery-982812725-x2j8y          1/1       Running            0          4m
kube-system   kube-dns-2247936740-5pu0l               3/3       Running            0          4m
kube-system   kube-proxy-amd64-ail86                  1/1       Running            0          4m
kube-system   kube-proxy-amd64-oxxnc                  1/1       Running            0          2m
kube-system   kube-scheduler-vm-master                1/1       Running            0          4m
kube-system   kubernetes-dashboard-1655269645-0swts   1/1       Running            0          4m
kube-system   weave-net-7euqt                         2/2       Running            0          4m
kube-system   weave-net-baao6                         1/2       CrashLoopBackOff   2          2m

CrashLoopBackOff appears for each worker node connected. I have spent several hours playing with the network interfaces, but the network seems fine. I found a similar question, where the answer advised looking into the logs, but there was no follow-up. So, here are the logs:

vagrant@vm-master:~$ kubectl logs weave-net-baao6 -c weave --namespace=kube-system
2016-10-05 10:48:01.350290 I | error contacting APIServer: Get https://100.64.0.1:443/api/v1/nodes: dial tcp 100.64.0.1:443: getsockopt: connection refused; trying with blank env vars
2016-10-05 10:48:01.351122 I | error contacting APIServer: Get http://localhost:8080/api: dial tcp [::1]:8080: getsockopt: connection refused
Failed to get peers

What am I doing wrong? Where do I go from here?

Recommended answer

I ran into the same issue too. It seems Weave wants to connect to the Kubernetes cluster IP address, which is virtual. Just run this to find the cluster IP: kubectl get svc. It should give you something like this:

$ kubectl get svc
NAME                     CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE
kubernetes               100.64.0.1       <none>        443/TCP   2d
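
If you only need the address itself, a jsonpath query against the kubernetes service shown above pulls it out:

$ kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'
100.64.0.1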

Weave picks up this IP and tries to connect to it, but the worker nodes do not know anything about it. A simple route will solve the issue. On all your worker nodes, execute:

route add 100.64.0.1 gw <your real master IP>
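
The iproute2 equivalent, followed by a quick connectivity check, looks like this (a sketch: 192.168.33.10 stands in for your master's real IP):

# Same host route via the modern iproute2 tooling (substitute your master's IP).
ip route add 100.64.0.1/32 via 192.168.33.10

# Verify from the worker node: any HTTP response (even 401/403 Unauthorized)
# means the cluster IP is now reachable; "connection refused" means it is not.
curl -k https://100.64.0.1:443/api

Note that routes added with route or ip route do not persist across reboots; add them to your distribution's network configuration if you need them permanently.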
