How to use kubeadm-init configuration parameter "controlPlaneEndpoint"?


Problem description

I think my focus is on how to use this configuration parameter, "controlPlaneEndpoint". It is currently buggy to use "controlPlaneEndpoint". https://kubernetes.io/docs/setup/independent/high-availability/

I really hope you can bear with me while I describe my actual situation.

First, the configuration parameter "controlPlaneEndpoint" is a VIP or a load balancer, right? So I configured "controlPlaneEndpoint" with a layer-4 load balancer; I tried AWS and Alibaba Cloud. In both cases there is a chance of timeouts during use, and "nodexxx not found" appeared 100% of the time during installation with kubeadm.

Why is this happening? If I use a layer-4 load balancer in the parameter "controlPlaneEndpoint", there are network problems. For example, I have three masters, ServerA, ServerB, and ServerC, and I run the command "kubectl get pod" on ServerA. There is a 33% probability of a timeout. Everything is fine when ServerA's request is directed to either ServerB or ServerC through the layer-4 load balancer. If the request is directed back to ServerA itself through the layer-4 load balancer, a timeout always occurs.
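The 33% figure above is just the load balancer picking one of N = 3 backends uniformly at random: the request fails exactly when the balancer hairpins it back to the requesting node. A quick simulation (a sketch with made-up server names, not actual cluster behavior) reproduces the 1/N failure rate:

```python
import random

def request_via_l4_lb(client, backends):
    """Simulate one request from `client` through an L4 load balancer.

    The balancer picks a backend uniformly at random; if it picks the
    client itself, the connection hairpins and times out (False)."""
    backend = random.choice(backends)
    return backend != client

random.seed(0)
backends = ["ServerA", "ServerB", "ServerC"]
trials = 10_000
timeouts = sum(1 for _ in range(trials)
               if not request_via_l4_lb("ServerA", backends))
print(f"timeout rate: {timeouts / trials:.2f}")  # roughly 1/3
```

With three masters you see roughly a one-in-three timeout from a master node; with only one master (the `kubeadm init` case), the rate is 1/1, which matches the "100% of the time" failure described next.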

This is because the layer-4 load balancer cannot be used when ServerA is both the server and the requester; that is a network characteristic of layer-4 load balancing (hairpinning). For the same reason, when I create a new cluster with kubeadm, my first master is ServerA. Although ServerA's apiserver is already running in Docker and I can telnet to ServerA-IP:6443 successfully, kubelet checks the layer-4 load balancer IP:port given in "controlPlaneEndpoint". So when I configure "controlPlaneEndpoint", "nodexxx not found" appears 100% of the time during installation with kubeadm.

In a public cloud environment such as Alibaba Cloud, I can't use keepalived + haproxy. That means I have to use a layer-7 load balancer for the k8s apiserver if I want to use the parameter "controlPlaneEndpoint", right?

How do I configure kubeadm-config with a layer-7 load balancer? The apiserver speaks HTTPS, and I ran into problems with kubeadm's certificates. Is there any documentation?
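For reference, "controlPlaneEndpoint" is normally set in the kubeadm ClusterConfiguration. A minimal sketch, assuming the v1beta2 config API; the DNS name and port below are illustrative placeholders, not values from this question:

```yaml
# kubeadm-config.yaml -- hypothetical example
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
# DNS name (or IP) and port of the load balancer in front of the apiservers:
controlPlaneEndpoint: "lb.example.com:6443"
```

This would be passed as `kubeadm init --config kubeadm-config.yaml --upload-certs` on the first control-plane node.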

Recommended answer

We are suffering from the exact same problem, but with the Azure Load Balancer (layer 4).

1) It fails on the first master node where "kubeadm init" is executed, because the node tries to communicate with itself through the load balancer.

2) On all the other master nodes where "kubeadm join" is executed, there is a 1/N chance of failure when the load balancer selects the joining node itself rather than one of the (N-1) nodes already in the cluster.

We hacked our way around it with iptables rules. For instance, on the first node, before "kubeadm init", we make iptables route the load balancer IP to 127.0.0.1:

iptables -t nat -A OUTPUT -p all -d ${FRONTEND_IP} -j DNAT --to-destination 127.0.0.1

Of course, we delete the iptables rule after kubeadm init. I'm not recommending that anybody do this; it's a nasty hack, and my intention with this post is to compel somebody who knows what we are missing to post the right solution.
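Putting the workaround together, the sequence on the first master looks roughly like this. This is only a sketch of the hack described above: `FRONTEND_IP` is a placeholder for the load balancer's frontend IP, it requires root, and the DNAT rule must not be left in place afterwards.

```shell
# Hypothetical workaround sketch -- not a recommended procedure.
FRONTEND_IP=10.0.0.100   # placeholder: the L4 load balancer's frontend IP

# 1) Redirect locally generated traffic destined for the LB to the local apiserver.
iptables -t nat -A OUTPUT -p all -d ${FRONTEND_IP} -j DNAT --to-destination 127.0.0.1

# 2) Bootstrap the first control-plane node; kubelet's checks against
#    ${FRONTEND_IP}:6443 now reach the local apiserver instead of hairpinning.
kubeadm init --control-plane-endpoint "${FRONTEND_IP}:6443" --upload-certs

# 3) Remove the hack once init has finished.
iptables -t nat -D OUTPUT -p all -d ${FRONTEND_IP} -j DNAT --to-destination 127.0.0.1
```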

To the original poster: I don't think the intention is for us to use a layer-7 LB. The documentation is clear that a layer-4 load balancer is all that's needed.

I'll post again if we find the right solution.

