Kubernetes Cinder volumes do not mount with cloud-provider=openstack
Problem description
I am trying to use the cinder plugin for kubernetes to create both statically defined PVs as well as StorageClasses, but I see no activity between my cluster and cinder for creating/mounting the devices.
Kubernetes Version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:19:49Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:13:36Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
The command kubelet was started with, and its status:
systemctl status kubelet -l
● kubelet.service - Kubelet service
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2016-10-20 07:43:07 PDT; 3h 53min ago
Process: 2406 ExecStartPre=/usr/local/bin/install-kube-binaries (code=exited, status=0/SUCCESS)
Process: 2400 ExecStartPre=/usr/local/bin/create-certs (code=exited, status=0/SUCCESS)
Main PID: 2408 (kubelet)
CGroup: /system.slice/kubelet.service
├─2408 /usr/local/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests --api-servers=https://172.17.0.101:6443 --logtostderr=true --v=12 --allow-privileged=true --hostname-override=jk-kube2-master --pod-infra-container-image=pause-amd64:3.0 --cluster-dns=172.31.53.53 --cluster-domain=occloud --cloud-provider=openstack --cloud-config=/etc/cloud.conf
Here is my cloud.conf file:
# cat /etc/cloud.conf
[Global]
username=<user>
password=XXXXXXXX
auth-url=http://<openStack URL>:5000/v2.0
tenant-name=Shadow
region=RegionOne
It appears that k8s is able to communicate successfully with openstack. From /var/log/messages:
kubelet: I1020 11:43:51.770948 2408 openstack_instances.go:41] openstack.Instances() called
kubelet: I1020 11:43:51.836642 2408 openstack_instances.go:78] Found 39 compute flavors
kubelet: I1020 11:43:51.836679 2408 openstack_instances.go:79] Claiming to support Instances
kubelet: I1020 11:43:51.836688 2408 openstack_instances.go:124] NodeAddresses(jk-kube2-master) called
kubelet: I1020 11:43:52.274332 2408 openstack_instances.go:131] NodeAddresses(jk-kube2-master) => [{InternalIP 172.17.0.101} {ExternalIP 10.75.152.101}]
My PV/PVC yaml files, and cinder list output:
# cat persistentVolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: jk-test
  labels:
    type: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  cinder:
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
    fsType: ext4
# cat persistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      type: "test"
# cinder list | grep jk-cinder
| 48d2d1e6-e063-437a-855f-8b62b640a950 | available | jk-cinder | 10 | - | false |
As seen above, cinder reports the device with the ID referenced in the pv.yaml file is available. When I create them, things seem to work:
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv/jk-test 10Gi RWO Retain Bound default/myclaim 5h
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/myclaim Bound jk-test 10Gi RWO 5h
Then I try to create a pod using the pvc, but it fails to mount the volume:
# cat testPod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
    - name: front-end
      image: example-front-end:latest
      ports:
        - hostPort: 6000
          containerPort: 3000
  volumes:
    - name: jk-test
      persistentVolumeClaim:
        claimName: myclaim
Here is the pod's status:
3h 46s 109 {kubelet jk-kube2-master} Warning FailedMount Unable to mount volumes for pod "jk-test3_default(0f83368f-96d4-11e6-8243-fa163ebfcd23)": timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
3h 46s 109 {kubelet jk-kube2-master} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
I've verified that my openstack provider is exposing cinder v1 and v2 APIs and the previous logs from openstack_instances show the nova API is accessible. Despite that, I never see any attempts on k8s part to communicate with cinder or nova to mount the volume.
Here are what I think are the relevant log messages regarding the failure to mount:
kubelet: I1020 06:51:11.840341 24027 desired_state_of_world_populator.go:323] Extracted volumeSpec (0x23a45e0) from bound PV (pvName "jk-test") and PVC (ClaimName "default"/"myclaim" pvcUID 51919dfb-96c9-11e6-8243-fa163ebfcd23)
kubelet: I1020 06:51:11.840424 24027 desired_state_of_world_populator.go:241] Added volume "jk-test" (volSpec="jk-test") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.840474 24027 desired_state_of_world_populator.go:241] Added volume "default-token-js40f" (volSpec="default-token-js40f") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.896176 24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896330 24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896361 24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896390 24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896420 24027 config.go:98] Looking for [api file], have seen map[file:{} api:{}]
kubelet: E1020 06:51:11.896566 24027 nestedpendingoperations.go:253] Operation for "\"kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950\"" failed. No retries permitted until 2016-10-20 06:53:11.896529189 -0700 PDT (durationBeforeRetry 2m0s). Error: Volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23") has not yet been added to the list of VolumesInUse in the node's volume status.
Is there a piece I am missing? I've followed the instructions in the k8s mysql-cinder-pd example, but haven't been able to get any communication. As another data point, I tried defining a StorageClass as provided by k8s; here are the associated StorageClass and PVC files:
# cat cinderStorage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
# cat dynamicPVC.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamicclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "gold"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
The StorageClass reports success, but when I try to create the PVC it gets stuck in the 'pending' state and reports 'no volume plugin matched':
# kubectl get storageclass
NAME TYPE
gold kubernetes.io/cinder
# kubectl describe pvc dynamicclaim
Name: dynamicclaim
Namespace: default
Status: Pending
Volume:
Labels: <none>
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1d 15s 5867 {persistentvolume-controller } Warning ProvisioningFailed no volume plugin matched
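One detail that may matter here: the ProvisioningFailed event above comes from the persistentvolume-controller, which runs inside kube-controller-manager rather than the kubelet, so dynamic provisioning depends on that process also being started with the OpenStack flags. A sketch of the relevant flags, with binary path and master address assumed from this setup, not taken from the question:

```sh
# hypothetical kube-controller-manager invocation; adjust paths/flags to your deployment
/usr/local/bin/kube-controller-manager \
  --master=https://172.17.0.101:6443 \
  --cloud-provider=openstack \
  --cloud-config=/etc/cloud.conf
```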
This contradicts what's in the logs for the plugins that were loaded:
grep plugins /var/log/messages
kubelet: I1019 11:39:41.382517 22435 plugins.go:56] Registering credential provider: .dockercfg
kubelet: I1019 11:39:41.382673 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/aws-ebs"
kubelet: I1019 11:39:41.382685 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/empty-dir"
kubelet: I1019 11:39:41.382691 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/gce-pd"
kubelet: I1019 11:39:41.382698 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/git-repo"
kubelet: I1019 11:39:41.382705 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/host-path"
kubelet: I1019 11:39:41.382712 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/nfs"
kubelet: I1019 11:39:41.382718 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/secret"
kubelet: I1019 11:39:41.382725 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/iscsi"
kubelet: I1019 11:39:41.382734 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/glusterfs"
jk-kube2-master kubelet: I1019 11:39:41.382741 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/rbd"
kubelet: I1019 11:39:41.382749 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cinder"
kubelet: I1019 11:39:41.382755 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/quobyte"
kubelet: I1019 11:39:41.382762 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cephfs"
kubelet: I1019 11:39:41.382781 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/downward-api"
kubelet: I1019 11:39:41.382798 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/fc"
kubelet: I1019 11:39:41.382804 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/flocker"
kubelet: I1019 11:39:41.382822 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-file"
kubelet: I1019 11:39:41.382839 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/configmap"
kubelet: I1019 11:39:41.382846 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/vsphere-volume"
kubelet: I1019 11:39:41.382853 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-disk"
And I have the nova and cinder clients installed on my machine:
# which nova
/usr/bin/nova
# which cinder
/usr/bin/cinder
Any help is appreciated, I'm sure I'm missing something simple here.
Thanks!
Answer
The cinder volumes work for sure with Kubernetes 1.5.0 and 1.5.3 (I think they also worked on 1.4.6, which I was first experimenting on; I don't know about previous versions).
In your Pod yaml file you were missing the volumeMounts: section.
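For reference, the asker's testPod.yaml with the missing volumeMounts: section added might look like this; the mountPath is chosen here only for illustration:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: jk-test3
  labels:
    name: jk-test
spec:
  containers:
    - name: front-end
      image: example-front-end:latest
      ports:
        - hostPort: 6000
          containerPort: 3000
      # the missing piece: without volumeMounts the claimed volume is
      # never mounted into the container's filesystem
      volumeMounts:
        - name: jk-test
          mountPath: /data   # illustrative mount path
  volumes:
    - name: jk-test
      persistentVolumeClaim:
        claimName: myclaim
```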
Actually, when you already have an existing cinder volume, you can just use a Pod (or Deployment), no PV or PVC is needed. Example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
    spec:
      containers:
        - name: nginx
          image: "nginx:1.11.6-alpine"
          imagePullPolicy: IfNotPresent
          args:
            - /bin/sh
            - -c
            - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: data
          cinder:
            volumeID: e143368a-440a-400f-b8a4-dd2f46c51888
This will create a Deployment and a Pod. The cinder volume will be mounted into the nginx container. To verify that you are using the volume, you can edit a file inside the container's /usr/share/nginx/html/ directory and then stop the container. Kubernetes will create a new container, and the files in its /usr/share/nginx/html/ directory will be the same as they were in the stopped container.
After you delete the Deployment resource, the cinder volume is not deleted, but it is detached from a vm.
Another possibility: if you already have an existing cinder volume, you can use PV and PVC resources. You said you want to use a storage class, though the Kubernetes docs allow not using one:
A PV with no annotation or its class annotation set to "" has no class and can only be bound to PVCs that request no particular class
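As a sketch of that rule, a PV and PVC that both omit the storage-class annotation can still bind to each other via labels; the resource names here are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-no-class       # no storage-class annotation: this PV has "no class"
  labels:
    type: test
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  cinder:
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-no-class    # requests no particular class either
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: test
```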
An example storage-class is:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  # to be used as value for annotation:
  # volume.beta.kubernetes.io/storage-class
  name: cinder-gluster-hdd
provisioner: kubernetes.io/cinder
parameters:
  # openstack volume type
  type: gluster_hdd
  # openstack availability zone
  availability: nova
Then, you use your existing cinder volume with ID 48d2d1e6-e063-437a-855f-8b62b640a950 in a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  # name of a pv resource visible in Kubernetes, not the name of
  # a cinder volume
  name: pv0001
  labels:
    pv-first-label: "123"
    pv-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: cinder-gluster-hdd
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  cinder:
    # ID of cinder volume
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
Then create a PVC whose label selector matches the labels of the PV:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vol-test
  labels:
    pvc-first-label: "123"
    pvc-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: "cinder-gluster-hdd"
spec:
  accessModes:
    # the volume can be mounted as read-write by a single node
    - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
  selector:
    matchLabels:
      pv-first-label: "123"
      pv-second-label: abc
and then a Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
    environment: testing
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
        environment: testing
    spec:
      nodeSelector:
        "is_worker": "true"
      containers:
        - name: nginx-exist-vol
          image: "nginx:1.11.6-alpine"
          imagePullPolicy: IfNotPresent
          args:
            - /bin/sh
            - -c
            - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: vol-test
After you delete the k8s resources, the cinder volume is not deleted, but it is detached from a vm.
With a PV, you can also set the persistentVolumeReclaimPolicy.
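For example, the field sits under the PV's spec; Retain is what the example above uses, and the comments note the documented alternatives (my understanding is that the cinder plugin of that era supported Retain and Delete, while Recycle applied to plugins like NFS and hostPath):

```yaml
spec:
  # Retain: keep the backing cinder volume after the PV is released
  # Delete: delete the backing cinder volume together with the PV
  persistentVolumeReclaimPolicy: Retain
```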
If you don't have a cinder volume created, Kubernetes can create it for you. You have to then provide a PVC resource. I won't describe this variant, since it was not asked for.
I suggest that anyone interested in finding the best option experiment and compare the methods themselves. Also, I used label names like pv-first-label and pvc-first-label only to make the matching easier to follow; you can use e.g. first-label everywhere.