How to install Kubernetes cluster behind proxy with Kubeadm?


Question

I ran into a couple of problems when installing Kubernetes with kubeadm. I am working behind a corporate network, so I declared the proxy settings in the session environment:

$ export http_proxy=http://proxy-ip:port/
$ export https_proxy=http://proxy-ip:port/
$ export no_proxy=master-ip,node-ip,127.0.0.1
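Beyond the node addresses, kubeadm and the kubelet also talk to in-cluster virtual IPs, so it can help to extend no_proxy to cover the cluster CIDRs as well. A sketch, assuming kubeadm's default service CIDR 10.96.0.0/12 and the pod CIDR passed to kubeadm init below:

```shell
# Proxy settings for the shell that will run kubeadm.
# proxy-ip:port is a placeholder; the two CIDRs are assumptions:
# 10.96.0.0/12 is kubeadm's default service CIDR, and 10.244.0.0/16
# matches the --pod-network-cidr used in the init command below.
export http_proxy=http://proxy-ip:port/
export https_proxy=http://proxy-ip:port/
export no_proxy=master-ip,node-ip,127.0.0.1,localhost,10.96.0.0/12,10.244.0.0/16

# Quick sanity check that the variables are visible in this session.
env | grep -i proxy
```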

After installing all the necessary components and dependencies, I began to initialize the cluster. To preserve the current environment variables under sudo, I used sudo -E bash:

$ sudo -E bash -c "kubeadm init --apiserver-advertise-address=192.168.1.102 --pod-network-cidr=10.244.0.0/16"

The output then hung forever at the message below:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [loadbalancer kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.102]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready

I then found that none of the kube components were up while the kubelet kept requesting the kube-apiserver; sudo docker ps -a returned nothing.

What is the possible root cause of this?

Thanks in advance.

Answer

I would strongly suspect it is trying to pull down the Docker images for gcr.io/google_containers/hyperkube:v1.7.3 (or similar), which requires teaching the Docker daemon about the proxies via systemd.
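One common way to do that, documented by Docker for systemd-based systems, is a drop-in unit file that injects the proxy variables into the daemon's environment. A minimal sketch, assuming the same placeholder proxy address as above:

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
# proxy-ip:port, master-ip, and node-ip are placeholders; substitute
# your real proxy host/port and node addresses.
[Service]
Environment="HTTP_PROXY=http://proxy-ip:port/"
Environment="HTTPS_PROXY=http://proxy-ip:port/"
Environment="NO_PROXY=localhost,127.0.0.1,master-ip,node-ip"
```

After writing the file, reload systemd and restart Docker (`sudo systemctl daemon-reload && sudo systemctl restart docker`); `sudo systemctl show --property=Environment docker` should then echo the variables back.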

That would certainly explain why docker ps -a shows nothing, but I would expect the dockerd logs (journalctl -u docker.service, or its equivalent on your system) to complain about the inability to pull from gcr.io.

Based on what I read in the kubeadm reference guide, they expect you to patch the systemd config on the target machine to expose those environment variables, rather than just setting them in the shell that launched kubeadm (although that certainly could be a feature request).
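Since the kubelet installed by kubeadm is also a systemd service, the same drop-in pattern applies to it. A sketch, assuming a kubeadm-style kubelet unit (the file name proxy.conf is a hypothetical choice):

```ini
# /etc/systemd/system/kubelet.service.d/proxy.conf  (hypothetical name)
# Placeholders as before; the CIDRs are assumed to match the cluster's
# default service CIDR (10.96.0.0/12) and the pod CIDR from kubeadm init
# (10.244.0.0/16), so in-cluster traffic bypasses the proxy.
[Service]
Environment="HTTP_PROXY=http://proxy-ip:port/"
Environment="HTTPS_PROXY=http://proxy-ip:port/"
Environment="NO_PROXY=localhost,127.0.0.1,master-ip,node-ip,10.96.0.0/12,10.244.0.0/16"
```

Follow with `sudo systemctl daemon-reload && sudo systemctl restart kubelet` so the kubelet picks up the new environment.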
