How to install Kubernetes cluster behind proxy with Kubeadm?


Question

I ran into a couple of problems when installing Kubernetes with kubeadm. I am working behind a corporate network, so I declared the proxy settings in the session environment.

$ export http_proxy=http://proxy-ip:port/
$ export https_proxy=http://proxy-ip:port/
$ export no_proxy=master-ip,node-ip,127.0.0.1
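(Editor's note: a common pitfall in this kind of setup, not stated in the question, is that no_proxy must also cover cluster-internal addresses, otherwise in-cluster traffic can be routed through the corporate proxy. A sketch with the extra entries, assuming the kubeadm default service CIDR 10.96.0.0/12 and the pod CIDR from the question's --pod-network-cidr flag; proxy-ip, port, master-ip and node-ip are the question's placeholders:)

```shell
# Proxy settings for the session. The localhost entry, the service CIDR
# (10.96.0.0/12, kubeadm's default) and the pod CIDR (10.244.0.0/16, from
# --pod-network-cidr) are suggested additions, not part of the original
# question; adjust them to your own cluster.
export http_proxy=http://proxy-ip:port/
export https_proxy=http://proxy-ip:port/
export no_proxy=master-ip,node-ip,localhost,127.0.0.1,10.96.0.0/12,10.244.0.0/16
```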

After installing all the necessary components and dependencies, I began initializing the cluster. To carry the current environment variables through to the root shell, I used sudo -E bash.

$ sudo -E bash -c "kubeadm init --apiserver-advertise-address=192.168.1.102 --pod-network-cidr=10.244.0.0/16"

The output then hung on the message below indefinitely.

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [loadbalancer kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.102]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready

I then found that none of the kube components were up, while kubelet kept trying to reach kube-apiserver. sudo docker ps -a returned nothing.

What is the possible root cause?

Thanks in advance.

Answer

I would strongly suspect it is trying to pull down the Docker images for gcr.io/google_containers/hyperkube:v1.7.3 or the like, which requires teaching the Docker daemon about the proxies via its systemd configuration.

That would certainly explain why docker ps -a shows nothing, but I would expect the dockerd logs (journalctl -u docker.service, or its equivalent on your system) to complain about being unable to pull from gcr.io

Based on what I read in the kubeadm reference guide, you are expected to patch the systemd config on the target machine to expose those environment variables, not just set them in the shell that launched kubeadm (although that could certainly be a feature request)
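(Editor's note: the standard way to do this is a systemd drop-in for the Docker service. A minimal sketch; the file path follows the usual systemd drop-in convention, and proxy-ip:port, master-ip and node-ip are the question's placeholders:)

```
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy-ip:port/"
Environment="HTTPS_PROXY=http://proxy-ip:port/"
Environment="NO_PROXY=localhost,127.0.0.1,master-ip,node-ip"
```

After writing the file, run sudo systemctl daemon-reload followed by sudo systemctl restart docker so that dockerd picks up the proxy environment before kubeadm tries to pull images.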
