Multiple Kubernetes pods sharing the same host-path/PVC will duplicate output
Question
I have a small problem and need to know the best way to approach/solve my issue.
I have deployed a few pods on Kubernetes and so far I have enjoyed learning about and working with it. I set up the persistent volume, volume claim, etc., and can see my data on the host, as I need those files for further processing.
Now the issue is that 2 pods (2 replicas) sharing the same volume claim are writing to the same location on the host. That is expected, but unfortunately it causes the data to be duplicated in the output file.
What I need is:
- To have a unique output for each pod on the host. Is the only way to achieve this, in my case, to have two deployment files, each using a different volume claim/persistent volume? At the same time, I am not sure whether that is an optimal approach for future updates, upgrades, availability of a certain number of pods, etc.
- Or can I still have one deployment file with 2 or more replicas and avoid the output duplication while sharing the same PVC?
Please note that I have a one-node deployment, which is why I'm using hostPath at the moment.
Create pv:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: ls-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/ls-data/my-data2"
claim-pv:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ls-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
How I use my PV inside my deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: logstash
  namespace: default
  labels:
    component: logstash
spec:
  replicas: 2
  selector:
    matchLabels:
      component: logstash
  #omitted
        ports:
        - containerPort: 5044
          name: logstash-input
          protocol: TCP
        - containerPort: 9600
          name: transport
          protocol: TCP
        volumeMounts:
        - name: ls-pv-store
          mountPath: "/logstash-data"
      volumes:
      - name: ls-pv-store
        persistentVolumeClaim:
          claimName: ls-pv-claim
Answer
Depending on what exactly you are trying to achieve, you could use StatefulSets instead of Deployments. Each Pod spawned from the StatefulSet's Pod template can have its own separate PersistentVolumeClaim, created from the volumeClaimTemplate. You will need a StorageClass set up for this.
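A minimal sketch of that approach, reusing the names from the question (the image tag and the `serviceName` of a headless Service are illustrative assumptions; note also that hostPath volumes have no dynamic provisioner, so with `storageClassName: manual` you would need to pre-create one PV per replica):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash
spec:
  serviceName: logstash            # headless Service, assumed to exist
  replicas: 2
  selector:
    matchLabels:
      component: logstash
  template:
    metadata:
      labels:
        component: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:6.2.4   # illustrative
        volumeMounts:
        - name: ls-pv-store
          mountPath: "/logstash-data"
  # One PVC per Pod is created from this template
  # (ls-pv-store-logstash-0, ls-pv-store-logstash-1, ...)
  volumeClaimTemplates:
  - metadata:
      name: ls-pv-store
    spec:
      storageClassName: manual
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
```

Each replica then binds its own PV and writes to its own directory on the host, so the outputs no longer overlap.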
If you are looking for something simpler, you could write to /mnt/volume/$HOSTNAME from each Pod. This also ensures they use separate files, as the hostnames of the Pods are unique.