error marking master: timed out waiting for the condition [kubernetes]


Problem Description

I am just starting to learn Kubernetes. I have installed CentOS 7.5 with SELinux disabled, and installed kubectl, kubeadm, and kubelet from the Kubernetes YUM repository.

However, when I run the kubeadm init command, I get this error message:

[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [vps604805.ovh.net localhost] and IPs [51.75.201.75 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [vps604805.ovh.net localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [vps604805.ovh.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 51.75.201.75]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 26.003496 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node vps604805.ovh.net as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node vps604805.ovh.net as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition

According to the Linux Foundation course, no further commands should be needed to bring up my first cluster on this VM.

Is that wrong?

Firewalld does have the relevant ports open: 6443/tcp and 10248-10252.
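For reference, opening those ports with firewall-cmd would look roughly like this (a sketch; the port list comes from the preflight warning above):

# firewall-cmd --permanent --add-port=6443/tcp
# firewall-cmd --permanent --add-port=10248-10252/tcp
# firewall-cmd --reload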

Recommended Answer

I would recommend bootstrapping the Kubernetes cluster as guided in the official documentation. I went through the steps to build a cluster on the same CentOS version, CentOS Linux release 7.5.1804 (Core), and will share them here; hopefully they help you get past this installation issue.

First, wipe your current cluster installation:

# kubeadm reset -f && rm -rf /etc/kubernetes/

Add the Kubernetes repo for installing kubeadm, kubelet, and kubectl:

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

Check whether SELinux is in permissive mode:

# getenforce
Permissive
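If getenforce reports Enforcing instead, the official kubeadm install guide switches SELinux into permissive mode, roughly like this:

# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config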

Ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration:

# cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system
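These settings only take effect when the br_netfilter kernel module is loaded; if in doubt, you can load and verify it explicitly (a minimal sketch):

# modprobe br_netfilter
# lsmod | grep br_netfilter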

Install the required Kubernetes components and start the services (--disableexcludes=kubernetes is needed because the repo file above excludes kube* packages from routine yum updates):

# yum update && yum upgrade && yum install -y docker kubelet kubeadm kubectl --disableexcludes=kubernetes

# systemctl start docker kubelet && systemctl enable docker kubelet

Deploy the cluster via kubeadm:

# kubeadm init --pod-network-cidr=10.244.0.0/16
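On success, the end of the kubeadm init output prints a kubeadm join command for adding worker nodes; if you lose it, it can be regenerated later (assuming a recent enough kubeadm):

# kubeadm token create --print-join-command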

I prefer to install Flannel as the main CNI in my cluster. Because Flannel has prerequisites for a working Pod network, I passed the --pod-network-cidr=10.244.0.0/16 flag to the kubeadm init command.

Create the Kubernetes home directory for your user and store the config file there:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
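At this point kubectl should be able to reach the API server; a quick sanity check:

$ kubectl cluster-info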

Install the Pod network; in my case it was Flannel:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Finally, check the status of the Kubernetes core Pods:

$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-4x7zq             1/1     Running   0          36m
kube-system   coredns-576cbf47c7-666jm             1/1     Running   0          36m
kube-system   etcd-centos-7-5                      1/1     Running   0          35m
kube-system   kube-apiserver-centos-7-5            1/1     Running   0          35m
kube-system   kube-controller-manager-centos-7-5   1/1     Running   0          35m
kube-system   kube-flannel-ds-amd64-2bmw9          1/1     Running   0          33m
kube-system   kube-proxy-pcgw8                     1/1     Running   0          36m
kube-system   kube-scheduler-centos-7-5            1/1     Running   0          35m
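Once the CNI Pods are running, the master node itself should report Ready as well; you can confirm with:

$ kubectl get nodes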

If you still have any doubts, just leave a comment below this answer.
