Can join the cluster, but unable to fetch kubeadm-config


Problem description

I am following step 6 of the answer here, to build my own local minikube cluster with a single master and 2 nodes.

The master is named minikube.

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
$ kubelet --version
Kubernetes v1.13.3

Log in to the minikube console with minikube ssh and run ifconfig:

$ ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:0E:E5:B4:9C
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:eff:fee5:b49c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18727 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21337 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1621416 (1.5 MiB)  TX bytes:6858635 (6.5 MiB)

eth0      Link encap:Ethernet  HWaddr 08:00:27:04:9E:5F
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe04:9e5f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:139646 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11964 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:202559446 (193.1 MiB)  TX bytes:996669 (973.3 KiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:10:7A:A5
          inet addr:192.168.99.105  Bcast:192.168.99.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe10:7aa5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2317 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2231 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:197781 (193.1 KiB)  TX bytes:199788 (195.1 KiB)

Therefore my minikube IP address is 192.168.99.105.
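For what it's worth, with a VM-based driver (VirtualBox here) the minikube CLI can report this address directly from the host, without digging through ifconfig:

```shell
# Prints the IP of the minikube VM as reachable from the host; with the
# VirtualBox driver this is the host-only adapter address (eth1 above).
minikube ip
```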

On my VM nodes I have checked that they are using the same networks. The networks are:

  1. NAT

  2. Host-only Adapter, name: vboxnet0

Here is the nmap proof that no firewall blocks the connection port.

I execute kubeadm join to join the cluster. Using the exact output from the CLI is even worse, because that output refers to localhost; when the joining node executes it, it ends up calling itself, which is wrong, and after executing it the terminal shows a timeout error.

kubeadm join 192.168.99.105:8443 --token 856tch.tpccuji4nnc2zq5g --discovery-token-ca-cert-hash sha256:cfbb7a0f9ed7fca018b45fdfecb753a88aec64d4e46b5ac9ceb6d04bbb0a46a6
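As an aside, the --discovery-token-ca-cert-hash value in the join command above is just the SHA-256 of the cluster CA's DER-encoded public key, and can be recomputed with the recipe from the Kubernetes docs. A self-contained sketch using a throwaway certificate (on the real master you would point at /etc/kubernetes/pki/ca.crt instead):

```shell
# Generate a throwaway CA cert purely for illustration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# sha256 over the DER-encoded public key -- the same value kubeadm expects
# after the "sha256:" prefix in --discovery-token-ca-cert-hash.
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | sed 's/^.* //')
echo "sha256:$hash"
```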

kubeadm tells me localhost is back!

Sure enough, I did not get any new node:

$ kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
minikube   Ready    master   104m   v1.13.3

Questions:

  1. How can I make kubeadm correctly follow the IP address I give on the CLI?

  2. How can I prevent localhost from coming back during the process?

Answer

This seems to be an issue with the current Minikube code, which I guess has changed since the post was made. Take a look at https://github.com/kubernetes/minikube/issues/3916. I managed to join a second node by DNATting 127.0.0.1:8443 to the original minikube master.

Just for the record, I added an /etc/rc.local on the second node with the following (replace LOCAL_IF, MASTER_IP and WORKER_IP with sensible values):

#!/bin/sh
echo 1 > /proc/sys/net/ipv4/conf/<LOCAL_IF>/route_localnet
/sbin/iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --destination-port 8443 -j DNAT --to-destination <MASTER_IP>:8443
/sbin/iptables -t nat -A POSTROUTING -p tcp -s 127.0.0.1 -d <MASTER_IP> --dport 8443 -j SNAT --to <WORKER_IP>
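A script like this fails silently if the angle-bracket placeholders are never substituted, so a tiny guard can be useful before rebooting. check_placeholders below is a hypothetical helper, not part of any tool, just a sketch:

```shell
# Hypothetical helper: refuse to proceed if the <LOCAL_IF>/<MASTER_IP>/<WORKER_IP>
# placeholders in the given script were never replaced with real values.
check_placeholders() {
  if grep -Eq '<(LOCAL_IF|MASTER_IP|WORKER_IP)>' "$1"; then
    echo "placeholders still present in $1" >&2
    return 1
  fi
  echo "ok"
}
```

Usage would be, for example, `check_placeholders /etc/rc.local` on the worker before relying on the rules.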

But the problems did not end there. Installing flannel with:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
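Note that flannel expects each node to have a PodCIDR assigned by the controller manager. Roughly, that means flags like the following on kube-controller-manager (10.244.0.0/16 is the default cluster CIDR in flannel's kube-flannel.yml; adjust if yours differs):

```shell
# Flags that make the controller manager hand out per-node PodCIDRs:
kube-controller-manager --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16
# With minikube these can be passed at start time, e.g.:
#   minikube start --extra-config=controller-manager.allocate-node-cidrs=true
```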

worked (after allocating node CIDRs via the controller manager), but my second node somehow had a different kubelet installation that set up cni as the network plugin and ended up creating a new bridge (cni0) that clashed with the docker network.

There are many things that have to work together for this to fly.

