Kubernetes statefulset with NFS persistent volume

Problem Description

I have a Kubernetes cluster and a simple Deployment for MongoDB with an NFS persistent volume, and it works fine. Since resources like databases are stateful, I thought of using a StatefulSet for MongoDB instead, but now the problem is that, when I go through the documentation, a StatefulSet has volumeClaimTemplates instead of volumes (as in Deployments).

But here is where the question comes in. For a Deployment we do:

PersistentVolume -> PersistentVolumeClaim -> Deployment

But how can we do this with a StatefulSet?

Is it something like this?

volumeClaimTemplates -> StatefulSet

How can I set a PersistentVolume for the volumeClaimTemplates? If we don't use a PersistentVolume for the StatefulSet, how does it create the volume, and WHERE does it create the volumes? Is it on the host machines (i.e. the Kubernetes worker nodes)?

Because I have a separate NFS provisioner that I am using for the MongoDB Deployment (with replicas=1), how can I use the same setup with a StatefulSet?

Here is my mongo-deployment.yaml, which I am going to transform into a StatefulSet as shown in the second code snippet (mongo-stateful.yaml):

  1. mongo-deployment.yaml

<omitted>
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    name: mynfs # name can be anything
spec:
  storageClassName: manual # same storage class as pvc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <nfs-server-ip>
    path: "/srv/nfs/mydata" 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany #  must be the same as PersistentVolume
  resources:
    requests:
      storage: 1Gi          
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment    
  labels:
    name: mongodb
spec:
  selector:
    matchLabels:
      app: mongodb
  replicas: 1
  template:
    metadata:
      labels: 
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        -  containerPort: 27017
        ... # omitted some parts for easy reading
        volumeMounts:
        - name: data  
          mountPath: /data/db
      volumes: 
        - name: data
          persistentVolumeClaim: 
            claimName: task-pv-claim    

  2. mongo-stateful.yaml

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    name: mynfs # name can be anything
spec:
  storageClassName: manual # same storage class as pvc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <nfs-server-ip>
    path: "/srv/nfs/mydata" 
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-statefulset
spec:
  selector:
    matchLabels:
      name: mongodb-statefulset
  serviceName: mongodb-statefulset
  replicas: 2
  template:
    metadata:
      labels:
        name: mongodb-statefulset
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongodb
        image: mongo:3.6.4
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: db-data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: db-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "manual"
      resources:
        requests:
          storage: 2Gi

But this is not working (mongo-stateful.yaml); the pods are stuck in Pending, and when I describe them it shows:

default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 pod has unbound immediate PersistentVolumeClaims

PS: The Deployment works fine without any errors; the problem is with the StatefulSet.

Can someone please help me with how to write a StatefulSet with volumes?

Recommended Answer

If your storage class does not support dynamic volume provisioning, you have to manually create the PVs and the associated PVCs using yaml files; the volumeClaimTemplates will then link the existing PVCs with your StatefulSet's pods.
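
As a minimal sketch of that idea for the mongo-stateful.yaml above (assuming the NFS server and export path from the question; the PV names and per-replica sub-directories are hypothetical, and the directories must already exist on the NFS server), you would pre-create one PV per replica whose storageClassName, access mode and capacity satisfy the claim template:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-nfs-pv-0               # hypothetical name, one PV per replica
spec:
  storageClassName: manual           # must match the volumeClaimTemplates storageClassName
  capacity:
    storage: 2Gi                     # at least the 2Gi requested by the claim template
  accessModes:
    - ReadWriteOnce                  # must include the mode requested by the claim template
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs-server-ip>
    path: "/srv/nfs/mydata/mongo-0"  # hypothetical per-replica sub-directory
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-nfs-pv-1
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs-server-ip>
    path: "/srv/nfs/mydata/mongo-1"

The StatefulSet controller names each claim <volumeClaimTemplate-name>-<pod-name>, here db-data-mongodb-statefulset-0 and db-data-mongodb-statefulset-1, and each claim binds to one matching available PV. Since a PV can be bound by only one PVC at a time, the single task-pv-volume from the question can satisfy at most one of the two replicas.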

Here is a working example: https://github.com/k8s-school/k8s-school/blob/master/examples/MONGODB-install.sh

You should:

  • run it locally on https://kind.sigs.k8s.io/, which supports dynamic volume provisioning, so the PVCs and PVs will be created automatically
  • export the PV and PVC yaml files
  • use these yaml files as templates to create your PVs and PVCs for your NFS backend (a sketch of such an adapted PVC follows this list)
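
As a sketch of that last step (an assumed adaptation, not something taken from the linked script): take the PVC dump shown further below, strip the server-populated fields (the pv.kubernetes.io/* and storage-provisioner annotations, volumeName, uid, resourceVersion, selfLink, managedFields and status) and point the claim at the storage class of your hand-made NFS PVs. If you pre-create it under the name the StatefulSet expects (<claim-template-name>-<pod-name>), the controller adopts it instead of creating a new one; alternatively you can create only the PVs and let the volumeClaimTemplate create the claims itself.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-mongo-0     # <volumeClaimTemplate-name>-<statefulset-name>-<ordinal>
  labels:
    app: mongo
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual   # assumption: the class backing your manually created NFS PVs
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi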

Here is what you get on kind:

$ ./MONGODB-install.sh               
+ kubectl apply -f 13-12-mongo-configmap.yaml
configmap/mongo-init created
+ kubectl apply -f 13-11-mongo-service.yaml
service/mongo created
+ kubectl apply -f 13-14-mongo-pvc.yaml
statefulset.apps/mongo created
$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
mongo-0   2/2     Running   0          8m38s
mongo-1   2/2     Running   0          5m58s
mongo-2   2/2     Running   0          5m45s
$ kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
database-mongo-0   Bound    pvc-05247511-096e-4af5-8944-17e0d8222512   1Gi        RWO            standard       8m42s
database-mongo-1   Bound    pvc-f53c35a4-6fc0-4b18-b5fc-d7646815c0dd   1Gi        RWO            standard       6m2s
database-mongo-2   Bound    pvc-2a711892-eeee-4481-94b7-6b46bf5b76a7   1Gi        RWO            standard       5m49s
$ kubectl get pv 
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
pvc-05247511-096e-4af5-8944-17e0d8222512   1Gi        RWO            Delete           Bound    default/database-mongo-0   standard                8m40s
pvc-2a711892-eeee-4481-94b7-6b46bf5b76a7   1Gi        RWO            Delete           Bound    default/database-mongo-2   standard                5m47s
pvc-f53c35a4-6fc0-4b18-b5fc-d7646815c0dd   1Gi        RWO            Delete           Bound    default/database-mongo-1   standard                6m1s

And a dump of one of the PVCs (generated here by the volumeClaimTemplate, thanks to kind's dynamic volume provisioning):

$ kubectl get pvc database-mongo-0 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
    volume.kubernetes.io/selected-node: kind-worker2
  creationTimestamp: "2020-10-16T15:05:20Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: mongo
  managedFields:
    ...
  name: database-mongo-0
  namespace: default
  resourceVersion: "2259"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/database-mongo-0
  uid: 05247511-096e-4af5-8944-17e0d8222512
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
  volumeMode: Filesystem
  volumeName: pvc-05247511-096e-4af5-8944-17e0d8222512
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound

And the related PV:

$ kubectl get pv pvc-05247511-096e-4af5-8944-17e0d8222512 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: rancher.io/local-path
  creationTimestamp: "2020-10-16T15:05:23Z"
  finalizers:
  - kubernetes.io/pv-protection
  managedFields:
    ...
  name: pvc-05247511-096e-4af5-8944-17e0d8222512
  resourceVersion: "2256"
  selfLink: /api/v1/persistentvolumes/pvc-05247511-096e-4af5-8944-17e0d8222512
  uid: 3d1e894e-0924-411a-8378-338e48ba4a28
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: database-mongo-0
    namespace: default
    resourceVersion: "2238"
    uid: 05247511-096e-4af5-8944-17e0d8222512
  hostPath:
    path: /var/local-path-provisioner/pvc-05247511-096e-4af5-8944-17e0d8222512_default_database-mongo-0
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kind-worker2
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  volumeMode: Filesystem
status:
  phase: Bound
