Kubernetes trouble with StatefulSet and 3 PersistentVolumes
Question
I'm in the process of creating a StatefulSet based on this yaml, which will have 3 replicas. I want each of the 3 pods to connect to a different PersistentVolume.
For the persistent volumes I'm using 3 objects that look like this, with only the name changed (pvvolume, pvvolume2, pvvolume3):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvvolume
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs"
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: mongo-persistent-storage-mongo-0
The first of the 3 pods in the StatefulSet seems to be created without issue.
The second fails with the errors "pod has unbound PersistentVolumeClaims" and "Back-off restarting failed container".
Yet if I go to the tab showing PersistentVolumeClaims, the second one that was created seems to have been successful.
If it was successful, why does the pod think it failed?
Answer
I want each of the 3 pods to connect to a different PersistentVolume.

For that to work properly you will need either:
- a provisioner (the link you posted has examples of how to set up a provisioner on aws, azure, googlecloud and minikube), or
- a volume capable of being mounted multiple times (such as an nfs volume). Note however that in such a case all your pods read/write to the same folder, and this can lead to issues when they are not meant to lock/write to the same data concurrently. The usual use case for this is an upload folder that pods save to, which is later used for reading only, and similar use cases. SQL databases (such as mysql), on the other hand, are not meant to write to such a shared folder.
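As an illustration of the second option, a multi-mountable PV could look roughly like the sketch below. This is an assumption-laden example: the server address, export path and PV name are placeholders, not taken from the question.

```yaml
# Sketch of an NFS-backed PV that several pods can mount at once.
# Server address and export path are placeholders.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-shared-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany   # unlike ReadWriteOnce, allows mounts from multiple pods/nodes
  nfs:
    server: 10.0.0.10
    path: /exports/shared
```

The key difference from the question's manifests is the ReadWriteMany access mode, which is what permits more than one pod to bind and mount the volume.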
Instead of either of the mentioned requirements, your manifest uses hostPath (pointing to /nfs) set to ReadWriteOnce (only one pod can use it). You are also using 'standard' as the storage class, while the url you gave only has fast and slow ones, so you probably created your storage class as well.
The second fails with the errors "pod has unbound PersistentVolumeClaims" and "Back-off restarting failed container"
- This is because the first pod already took ownership of it (ReadWriteOnce, hostPath), and the second pod can't reuse the same one without a proper provisioner or access rights being set up.
- All the PVCs were successfully bound to the accompanying PVs. However, the second and third PVCs never get bound to the second or third pod: the second pod retries the first claim, and since the first claim is already bound (to the first pod) in ReadWriteOnce mode, it can't be bound to the second pod as well, hence the error...
If it was successful, why does the pod think it failed?
Since you reference /nfs as your host path, it may be safe to assume that you are using some kind of NFS-backed file system, so here is an alternative setup that lets you mount dynamically provisioned persistent volumes over nfs to as many pods in the stateful set as you want.
- This only answers the original question of mounting persistent volumes across stateful set replicated pods, with the assumption of nfs sharing.
- NFS is not really advisable for dynamic data such as a database. The usual use case is an upload folder or a moderate logging/backup folder. A database (sql or nosql) is usually a no-no for nfs.
- For mission/time critical applications you might want to time/stress-test carefully prior to taking this approach in production, since both k8s and the external pv add some layers/latency in between. Although for some applications this might suffice, be warned about it.
- You have limited control over the names of dynamically created pv (k8s adds a suffix to newly created ones, and reuses available old ones if told to do so), but k8s will keep them after a pod gets terminated and assign the first available one to a new pod, so you won't lose state/data. This is something you can control with policies, though.
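The retention behaviour mentioned in the last note is governed by the reclaim policy. As a sketch, it can be set on the storage class so that dynamically provisioned PVs inherit it (the Retain value here is an assumption about what you might want; the default is Delete):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: sc-nfs-persistent-volume
provisioner: example.com/nfs
reclaimPolicy: Retain   # keep the PV (and its data) after its claim is deleted
```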
For this to work you will first need to install the nfs provisioner from here:
- https://github.com/kubernetes-incubator/external-storage/tree/master/nfs. Mind you that the installation is not complicated, but has some steps where you have to take a careful approach (permissions, setting up nfs shares, etc.), so it is not just a fire-and-forget deployment. Take your time installing the nfs provisioner correctly. Once this is properly set up, you can continue with the suggested manifests below:
Storage class manifest:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: sc-nfs-persistent-volume
# if you changed this during provisioner installation, update it here as well
provisioner: example.com/nfs
Stateful Set (important excerpt only):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss-my-app
spec:
  replicas: 3
  ...
  selector:
    matchLabels:
      app: my-app
      tier: my-mongo-db
  ...
  template:
    metadata:
      labels:
        app: my-app
        tier: my-mongo-db
    spec:
      ...
      containers:
        - image: ...
          ...
          volumeMounts:
            - name: persistent-storage-mount
              mountPath: /wherever/on/container/you/want/it/mounted
          ...
      ...
  volumeClaimTemplates:
    - metadata:
        name: persistent-storage-mount
      spec:
        storageClassName: sc-nfs-persistent-volume
        accessModes: [ ReadWriteOnce ]
        resources:
          requests:
            storage: 10Gi
  ...
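With replicas: 3, the volumeClaimTemplates section above makes the StatefulSet create one claim per pod, named <template-name>-<pod-name>, so for this manifest the claims would be:

```
persistent-storage-mount-ss-my-app-0
persistent-storage-mount-ss-my-app-1
persistent-storage-mount-ss-my-app-2
```

Each of these claims is then satisfied dynamically by the nfs provisioner through the sc-nfs-persistent-volume storage class, giving every pod its own volume.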