Container-VM Image with GPD Volumes fails with "Failed to get GCE Cloud Provider. plugin.host.GetCloudProvider returned <nil> instead"


Problem description

I am currently trying to switch from the "Container-Optimized Google Compute Engine Images" (https://cloud.google.com/compute/docs/containers/container_vms) to the "Container-VM" Image (https://cloud.google.com/compute/docs/containers/vm-image/#overview). In my containers.yaml, I define a volume and a container that uses the volume.

apiVersion: v1
kind: Pod
metadata:
  name: workhorse
spec:
  containers:
    - name: postgres
      image: postgres:9.5
      imagePullPolicy: Always
      volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: postgres-storage
      gcePersistentDisk:
        pdName: disk-name
        fsType: ext4
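
(Note: the gcePersistentDisk volume type requires that the referenced disk already exists in the same zone as the instance. As a minimal sketch, where disk-name matches the pdName above and the size and zone are placeholder values, such a disk can be created with gcloud:

# Create the GCE persistent disk referenced by pdName (size/zone are examples)
gcloud compute disks create disk-name --size=10GB --zone=us-central1-a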

This setup worked fine with the "Container-Optimized Google Compute Engine Images"; however, it fails with the "Container-VM" image. In the logs, I can see the following error:

May 24 18:33:43 battleship kubelet[629]: E0524 18:33:43.405470 629 gce_util.go:176]
Error getting GCECloudProvider while detaching PD "disk-name":
Failed to get GCE Cloud Provider. plugin.host.GetCloudProvider returned <nil> instead

Thanks in advance for any hints!

Solution

This happens only when kubelet is run without the --cloud-provider=gce flag. The problem, unless the cause is something else entirely, comes down to how GCP launches Container-VM instances.
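
To confirm this, one can inspect the flags of the running kubelet on an affected node (a quick sanity check, not part of the original answer; the exact invocation varies by setup):

# Show the kubelet process and its flags; look for --cloud-provider=gce
ps aux | grep '[k]ubelet'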

Please get in touch with the Google Cloud Platform team.

Note, if this happens to you when using GCE: add the --cloud-provider=gce flag to kubelet on all of your worker nodes. This only applies to 1.2 cluster versions because, if I'm not mistaken, there is an ongoing attach/detach redesign targeted for 1.3 clusters that will move this business logic out of kubelet.
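
How the flag gets passed depends on how the image launches kubelet. As an illustrative sketch for a systemd-managed node (the drop-in path and unit name are assumptions, not part of the original answer):

# Hypothetical drop-in that overrides the kubelet invocation; keep the
# node's existing kubelet flags and append --cloud-provider=gce.
cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-cloud-provider.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --cloud-provider=gce
EOF
systemctl daemon-reload
systemctl restart kubelet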

In case someone is interested in the attach/detach redesign, here is its corresponding GitHub issue: https://github.com/kubernetes/kubernetes/issues/20262
