Add a second master node for high availability in Kubernetes


Problem description

I was able to follow the documentation and get a Kubernetes cluster up, but now I would like to add a second master node. I tried this on the second node but am seeing an error:

[root@kubemaster02 ~]# kubeadm init --apiserver-advertise-address=10.122.161.XX --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.10.0
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.0-ce. Max validated version: 17.03
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Some fatal errors occurred:
    [ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

My question is: is running `kubeadm init` again the correct way to add the second master? Another question I have is how to tell whether a node is configured as a master or not; the following command is not showing the ROLES column for some reason (maybe because of the older versions):

[root@master01 ~]# kubectl get nodes -o wide
NAME                   STATUS    AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION
kubemaster01   Ready     215d      v1.8.1    <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64
kubemaster02   Ready     132d      v1.8.4    <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64
kubenode01     Ready     215d      v1.8.1    <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64
kubenode02     Ready     214d      v1.8.1    <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64

Answer

In your case, first look at what is running on port 10250:

netstat -nlp | grep 10250

because your error is:

[ERROR Port-10250]: Port 10250 is in use
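Before re-running init, it helps to see which process holds the port and, since kubemaster02 already appears in your `kubectl get nodes` output, to clear its old state first. A minimal sketch, assuming the listener is the kubelet left over from the earlier join (verify before resetting, since `kubeadm reset` is destructive):

```
# Identify the process listening on 10250 (on an already-joined node this
# is normally the kubelet):
netstat -nlp | grep 10250

# If this node's previous worker configuration should be discarded so it
# can be re-initialised as a master, wipe the old kubeadm state first.
# WARNING: kubeadm reset removes /etc/kubernetes and the node's cluster state.
kubeadm reset
```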

In general, you can bootstrap an additional master and have 2 masters. Before running kubeadm on the other master, you first need to copy the K8s CA certificate from kubemaster01. To do this, you have two options:

Option 1: Copy with scp

scp root@<kubemaster01-ip-address>:/etc/kubernetes/pki/* /etc/kubernetes/pki

Option 2: Copy and paste

Copy the contents of /etc/kubernetes/pki/ca.crt, /etc/kubernetes/pki/ca.key, /etc/kubernetes/pki/sa.key and /etc/kubernetes/pki/sa.pub, and create these files manually on kubemaster02.
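As a sketch of option 2 done over ssh instead of manual copy-paste (assuming root ssh access; `<kubemaster01-ip>` is a placeholder), copying only those four files rather than the whole pki directory:

```
# Run on kubemaster02: fetch only the cluster CA and service-account keys.
for f in ca.crt ca.key sa.key sa.pub; do
  scp root@<kubemaster01-ip>:/etc/kubernetes/pki/$f /etc/kubernetes/pki/
done
```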

The next step is to create a load balancer that sits in front of your master nodes. How you do this depends on your environment: you could, for example, use a cloud provider's load balancer, or set up your own using NGINX, keepalived, or HAProxy.
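For illustration, a minimal HAProxy fragment that TCP-balances across the two API servers; the master IPs and the 6443 API-server port are assumptions to adjust for your environment:

```
# /etc/haproxy/haproxy.cfg (fragment)
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server kubemaster01 <kubemaster01-ip>:6443 check
    server kubemaster02 <kubemaster02-ip>:6443 check
```

With this in place, `<load-balancer-ip>` in the config below would be the address HAProxy listens on.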

To bootstrap the second master, use a config.yaml:

cat >config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: <private-ip>
etcd:
  endpoints:
  - https://<your-etcd-ip>:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: <podCIDR>
apiServerCertSANs:
- <load-balancer-ip>
apiServerExtraArgs:
  apiserver-count: "2"
EOF

Make sure to replace the following placeholders:

  • <your-etcd-ip>: the IP address of your etcd
  • <private-ip>: the private IPv4 address of the master server
  • <podCIDR>: your Pod CIDR
  • <load-balancer-ip>: the endpoint used to connect to your masters

Then you can run:

kubeadm init --config=config.yaml

to bootstrap the master.
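As for telling whether a node is configured as a master: the ROLES column in newer kubectl versions is derived from the node-role label, which you can inspect directly even on older clusters. A sketch, assuming a kubeadm-created cluster, where masters carry the node-role.kubernetes.io/master label and taint:

```
# Masters created by kubeadm carry the node-role.kubernetes.io/master label:
kubectl get nodes --show-labels | grep node-role.kubernetes.io/master

# They also carry the corresponding NoSchedule taint:
kubectl describe node kubemaster01 | grep -i taint
```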

But if you really want an HA cluster, please follow the documentation's minimum requirements and use 3 nodes as masters. That requirement exists because of etcd quorum: each master runs an etcd member colocated with the control plane, and etcd needs a majority of members to stay available.
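The quorum arithmetic behind the 3-master recommendation can be sketched quickly: a cluster of n etcd members needs a majority of n/2 + 1 to keep serving, so 2 members tolerate zero failures (no better than 1), while 3 tolerate one:

```shell
# Failure tolerance of an etcd cluster with n members: n - (n/2 + 1)
for n in 1 2 3 4 5; do
  echo "$n members: quorum $(( n/2 + 1 )), tolerates $(( n - (n/2 + 1) )) failure(s)"
done
```

This is why a 2-master setup, while possible, does not actually improve availability of the etcd layer.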
