Kubernetes Cinder volumes do not mount with cloud-provider=openstack

Question
I am trying to use the cinder plugin for kubernetes to create both statically defined PVs as well as StorageClasses, but I see no activity between my cluster and cinder for creating/mounting the devices.
Kubernetes Version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:19:49Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:13:36Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
The command kubelet was started with and its status:
systemctl status kubelet -l
● kubelet.service - Kubelet service
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2016-10-20 07:43:07 PDT; 3h 53min ago
Process: 2406 ExecStartPre=/usr/local/bin/install-kube-binaries (code=exited, status=0/SUCCESS)
Process: 2400 ExecStartPre=/usr/local/bin/create-certs (code=exited, status=0/SUCCESS)
Main PID: 2408 (kubelet)
CGroup: /system.slice/kubelet.service
├─2408 /usr/local/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests --api-servers=https://172.17.0.101:6443 --logtostderr=true --v=12 --allow-privileged=true --hostname-override=jk-kube2-master --pod-infra-container-image=pause-amd64:3.0 --cluster-dns=172.31.53.53 --cluster-domain=occloud --cloud-provider=openstack --cloud-config=/etc/cloud.conf
Here is my cloud.conf file:
# cat /etc/cloud.conf
[Global]
username=<user>
password=XXXXXXXX
auth-url=http://<openStack URL>:5000/v2.0
tenant-name=Shadow
region=RegionOne
It appears that k8s is able to communicate successfully with openstack. From /var/log/messages:
kubelet: I1020 11:43:51.770948 2408 openstack_instances.go:41] openstack.Instances() called
kubelet: I1020 11:43:51.836642 2408 openstack_instances.go:78] Found 39 compute flavors
kubelet: I1020 11:43:51.836679 2408 openstack_instances.go:79] Claiming to support Instances
kubelet: I1020 11:43:51.836688 2408 openstack_instances.go:124] NodeAddresses(jk-kube2-master) called
kubelet: I1020 11:43:52.274332 2408 openstack_instances.go:131] NodeAddresses(jk-kube2-master) => [{InternalIP 172.17.0.101} {ExternalIP 10.75.152.101}]
My PV/PVC yaml files, and cinder list output:
# cat persistentVolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: jk-test
  labels:
    type: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  cinder:
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
    fsType: ext4
# cat persistentVolumeClaim.yaml
# cat persistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: "test"
# cinder list | grep jk-cinder
| 48d2d1e6-e063-437a-855f-8b62b640a950 | available | jk-cinder | 10 | - | false |
As seen above, cinder reports that the device with the ID referenced in the pv.yaml file is available. When I create them, things seem to work:
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv/jk-test 10Gi RWO Retain Bound default/myclaim 5h
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/myclaim Bound jk-test 10Gi RWO 5h
Then I try to create a pod using the pvc, but it fails to mount the volume:
# cat testPod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
  - name: front-end
    image: example-front-end:latest
    ports:
    - hostPort: 6000
      containerPort: 3000
  volumes:
  - name: jk-test
    persistentVolumeClaim:
      claimName: myclaim
And here is the state of the pod:
3h 46s 109 {kubelet jk-kube2-master} Warning FailedMount Unable to mount volumes for pod "jk-test3_default(0f83368f-96d4-11e6-8243-fa163ebfcd23)": timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
3h 46s 109 {kubelet jk-kube2-master} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
I've verified that my openstack provider is exposing the cinder v1 and v2 APIs, and the previous logs from openstack_instances show the nova API is accessible. Despite that, I never see any attempt on k8s's part to communicate with cinder or nova to mount the volume.
Here are what I think are the relevant log messages regarding the failure to mount:
kubelet: I1020 06:51:11.840341 24027 desired_state_of_world_populator.go:323] Extracted volumeSpec (0x23a45e0) from bound PV (pvName "jk-test") and PVC (ClaimName "default"/"myclaim" pvcUID 51919dfb-96c9-11e6-8243-fa163ebfcd23)
kubelet: I1020 06:51:11.840424 24027 desired_state_of_world_populator.go:241] Added volume "jk-test" (volSpec="jk-test") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.840474 24027 desired_state_of_world_populator.go:241] Added volume "default-token-js40f" (volSpec="default-token-js40f") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.896176 24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896330 24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896361 24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896390 24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896420 24027 config.go:98] Looking for [api file], have seen map[file:{} api:{}]
kubelet: E1020 06:51:11.896566 24027 nestedpendingoperations.go:253] Operation for ""kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950"" failed. No retries permitted until 2016-10-20 06:53:11.896529189 -0700 PDT (durationBeforeRetry 2m0s). Error: Volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23") has not yet been added to the list of VolumesInUse in the node's volume status.
Is there a piece I am missing? I've followed the instructions here: k8s - mysql-cinder-pd example, but haven't been able to get any communication. As another data point, I tried defining a StorageClass as provided by k8s; here are the associated StorageClass and PVC files:
# cat cinderStorage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
# cat dynamicPVC.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamicclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "gold"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
The StorageClass reports success, but when I try to create the PVC it gets stuck in the 'pending' state and reports 'no volume plugin matched':
# kubectl get storageclass
NAME TYPE
gold kubernetes.io/cinder
# kubectl describe pvc dynamicclaim
Name: dynamicclaim
Namespace: default
Status: Pending
Volume:
Labels: <none>
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1d 15s 5867 {persistentvolume-controller } Warning ProvisioningFailed no volume plugin matched
This contradicts what's in the logs for the plugins that were loaded:
grep plugins /var/log/messages
kubelet: I1019 11:39:41.382517 22435 plugins.go:56] Registering credential provider: .dockercfg
kubelet: I1019 11:39:41.382673 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/aws-ebs"
kubelet: I1019 11:39:41.382685 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/empty-dir"
kubelet: I1019 11:39:41.382691 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/gce-pd"
kubelet: I1019 11:39:41.382698 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/git-repo"
kubelet: I1019 11:39:41.382705 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/host-path"
kubelet: I1019 11:39:41.382712 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/nfs"
kubelet: I1019 11:39:41.382718 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/secret"
kubelet: I1019 11:39:41.382725 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/iscsi"
kubelet: I1019 11:39:41.382734 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/glusterfs"
kubelet: I1019 11:39:41.382741 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/rbd"
kubelet: I1019 11:39:41.382749 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cinder"
kubelet: I1019 11:39:41.382755 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/quobyte"
kubelet: I1019 11:39:41.382762 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cephfs"
kubelet: I1019 11:39:41.382781 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/downward-api"
kubelet: I1019 11:39:41.382798 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/fc"
kubelet: I1019 11:39:41.382804 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/flocker"
kubelet: I1019 11:39:41.382822 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-file"
kubelet: I1019 11:39:41.382839 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/configmap"
kubelet: I1019 11:39:41.382846 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/vsphere-volume"
kubelet: I1019 11:39:41.382853 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-disk"
And I have the nova and cinder clients installed on my machine:
# which nova
/usr/bin/nova
# which cinder
/usr/bin/cinder
Any help is appreciated, I'm sure I'm missing something simple here.
Thanks!
Answer
The cinder volumes work for sure with Kubernetes 1.5.0 and 1.5.3 (I think they also worked on 1.4.6, on which I was first experimenting; I don't know about previous versions).
In your Pod yaml file you were missing the volumeMounts: section.
Actually, when you already have an existing cinder volume, you can just use a Pod (or Deployment); no PV or PVC is needed. Example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
    spec:
      containers:
      - name: nginx
        image: "nginx:1.11.6-alpine"
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: data
        cinder:
          volumeID: e143368a-440a-400f-b8a4-dd2f46c51888
This will create a Deployment and a Pod. The cinder volume will be mounted into the nginx container. To verify that you are using the volume, you can edit a file inside the nginx container, inside the /usr/share/nginx/html/ directory, and stop the container. Kubernetes will create a new container, and inside it the files in the /usr/share/nginx/html/ directory will be the same as they were in the stopped container.
After you delete the Deployment resource, the cinder volume is not deleted, but it is detached from a vm.
Another possibility: if you already have an existing cinder volume, you can use PV and PVC resources. You said you want to use a storage class, though the Kubernetes docs allow not using one:
A PV with no annotation or its class annotation set to "" has no class and can only be bound to PVCs that request no particular class
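For illustration, a class-less pair might look like this (a minimal sketch; the names and size are made up, and the volumeID placeholder must be replaced with a real cinder volume ID):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-noclass
  annotations:
    # empty string = no class
    volume.beta.kubernetes.io/storage-class: ""
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  cinder:
    volumeID: <existing cinder volume ID>
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-noclass
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi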
An example storage class is:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  # the value used in the annotation:
  # volume.beta.kubernetes.io/storage-class
  name: cinder-gluster-hdd
provisioner: kubernetes.io/cinder
parameters:
  # openstack volume type
  type: gluster_hdd
  # openstack availability zone
  availability: nova
Then, you use your existing cinder volume with ID 48d2d1e6-e063-437a-855f-8b62b640a950 in a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  # name of a pv resource visible in Kubernetes, not the name of
  # a cinder volume
  name: pv0001
  labels:
    pv-first-label: "123"
    pv-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: cinder-gluster-hdd
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  cinder:
    # ID of the cinder volume
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
Then create a PVC whose label selector matches the labels of the PV:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vol-test
  labels:
    pvc-first-label: "123"
    pvc-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: "cinder-gluster-hdd"
spec:
  accessModes:
  # the volume can be mounted as read-write by a single node
  - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
  selector:
    matchLabels:
      pv-first-label: "123"
      pv-second-label: abc
and then a Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
    environment: testing
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
        environment: testing
    spec:
      nodeSelector:
        "is_worker": "true"
      containers:
      - name: nginx-exist-vol
        image: "nginx:1.11.6-alpine"
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: vol-test
After you delete the k8s resources, the cinder volume is not deleted, but it is detached from a vm.
Using a PV lets you set persistentVolumeReclaimPolicy.
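For example, in the PV spec (Retain is what the PV above uses; Delete and Recycle are the other documented values, and Recycle support depends on the volume plugin):

spec:
  # Retain: keep the cinder volume and its data after the claim is released
  # Delete: remove the backing volume together with the PV
  persistentVolumeReclaimPolicy: Retain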
If you don't have a cinder volume created, Kubernetes can create it for you. You have to then provide a PVC resource. I won't describe this variant, since it was not asked for.
I suggest that anyone interested in finding the best option should experiment themselves and compare the methods. Also, I used label names like pv-first-label and pvc-first-label only to make the example easier to follow; you can use e.g. first-label everywhere.