Kubernetes: Failed to get GCE GCECloudProvider with error &lt;nil&gt;
Problem description
I have set up a custom kubernetes cluster on GCE using kubeadm. I am trying to use StatefulSets with persistent storage.
I have the following configuration:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: europe-west3-b
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myname
  labels:
    app: myapp
spec:
  serviceName: myservice
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: mycontainer
        image: ubuntu:16.04
        env:
        volumeMounts:
        - name: myapp-data
          mountPath: /srv/data
      imagePullSecrets:
      - name: sitesearch-secret
  volumeClaimTemplates:
  - metadata:
      name: myapp-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: gce-slow
      resources:
        requests:
          storage: 1Gi
```
I get the following error:
```
Nopx@vm0:~$ kubectl describe pvc
Name:          myapp-data-myname-0
Namespace:     default
StorageClass:  gce-slow
Status:        Pending
Volume:
Labels:        app=myapp
Annotations:   volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/gce-pd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type     Reason              Age  From                         Message
  ----     ------              ---- ----                         -------
  Warning  ProvisioningFailed  5s   persistentvolume-controller  Failed to provision volume with StorageClass "gce-slow": Failed to get GCE GCECloudProvider with error <nil>
```
I am treading in the dark and do not know what is missing. It seems logical that it doesn't work, since the provisioner never authenticates to GCE. Any hints and pointers are very much appreciated.
Edit
I tried the solution here, by editing the config file in kubeadm with `kubeadm config upload from-file`; however, the error persists. The kubeadm config looks like this right now:
```yaml
api:
  advertiseAddress: 10.156.0.2
  bindPort: 6443
  controlPlaneEndpoint: ""
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
authorizationModes:
- Node
- RBAC
certificatesDir: /etc/kubernetes/pki
cloudProvider: gce
criSocket: /var/run/dockershim.sock
etcd:
  caFile: ""
  certFile: ""
  dataDir: /var/lib/etcd
  endpoints: null
  image: ""
  keyFile: ""
imageRepository: k8s.gcr.io
kubeProxy:
  config:
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: 192.168.0.0/16
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    metricsBindAddress: 127.0.0.1:10249
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy
    udpIdleTimeout: 250ms
kubeletConfiguration: {}
kubernetesVersion: v1.10.2
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
nodeName: mynode
privilegedPods: false
token: ""
tokenGroups:
- system:bootstrappers:kubeadm:default-node-token
tokenTTL: 24h0m0s
tokenUsages:
- signing
- authentication
unifiedControlPlaneImage: ""
```
Edit
The issue was resolved in the comments thanks to Anton Kostenko. The last edit, coupled with `kubeadm upgrade`, solves the problem.
Answer
The answer took me a while, but here it is:
Using the GCECloudProvider in Kubernetes outside of Google Kubernetes Engine has the following prerequisites (the last point is kubeadm-specific):
The VM needs to run with a service account that has permission to provision disks. Information on how to run a VM with a service account can be found here.
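As a rough sketch (the instance, zone, project, and service-account names below are placeholders, not from the original post), attaching a suitable service account to an existing VM could look like this:

```shell
# Hypothetical names; substitute your own instance, zone, and service account.
# The instance must be stopped before its service account can be changed.
# The compute-rw scope allows the cloud provider to create persistent disks.
gcloud compute instances set-service-account vm0 \
  --zone europe-west3-b \
  --service-account my-k8s-sa@my-project.iam.gserviceaccount.com \
  --scopes compute-rw
```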
The kubelet needs to run with the argument `--cloud-provider=gce`. For this, the `KUBELET_KUBECONFIG_ARGS` in `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` have to be edited. The kubelet can then be restarted with `sudo systemctl restart kubelet`.
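The edited line in `10-kubeadm.conf` might then look like the following (a sketch only; the kubeconfig paths shown are the kubeadm defaults of that era and may differ on your system):

```ini
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cloud-provider=gce"
```

After editing a systemd drop-in, run `sudo systemctl daemon-reload` before restarting the kubelet.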
The Kubernetes cloud-config file needs to be configured. The file can be found at `/etc/kubernetes/cloud-config`, and the following content is enough to get the cloud provider to work:
```ini
[Global]
project-id = "<google-project-id>"
```
Kubeadm needs to have GCE configured as its cloud provider. The configuration posted in the question works for this; however, the `nodeName` has to be changed.
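Applying the updated configuration might look like the following sketch (the file name is a placeholder, and the upgrade target should match your cluster's version, per the edit above):

```shell
# Upload the edited kubeadm configuration (kubeadm v1.10-era syntax)
# and re-apply it to the control plane.
kubeadm config upload from-file --config kubeadm-config.yaml
kubeadm upgrade apply v1.10.2
```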