Using Windows SMB shares from an app deployed on Kubernetes
Problem description
We are migrating legacy Java and .NET applications from on-premises VMs to an on-premises Kubernetes cluster.
Many of these applications use Windows file shares to transfer files to and from other existing systems. Re-engineering all of these solutions to avoid Samba shares is a lower priority than the migration itself, so if we want to migrate we will have to find a way to keep many things as they are.
We have set up a 3-node cluster on three CentOS 7 machines using kubeadm and Canal.
I could not find any actively maintained plugin or library for mounting SMB shares, except for Azure volumes.
What I came up with was to mount the SMB shares on each CentOS node using the same mountpoint on all nodes, e.g. "/data/share1", and then create a local PersistentVolume:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: samba-share-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/data/share1"
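For reference, the per-node mount behind this hostPath could be made persistent with a cifs entry in /etc/fstab on every node. This is a hypothetical sketch: the server name, share name, and credentials file below are placeholders, not values from the question, and cifs-utils must be installed on the node:

```
# /etc/fstab (hypothetical entry; all names are placeholders)
//fileserver.example.local/share1  /data/share1  cifs  credentials=/etc/smb-credentials,uid=0,gid=0,_netdev  0  0
```

The `_netdev` option tells the init system to wait for networking before attempting the mount, which matters for network filesystems like CIFS.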
and a claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: samba-share-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
and assigned the claim to the application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: samba-share-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: samba-share-deployment
        tier: backend
    spec:
      containers:
        - name: samba-share-deployment
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: samba-share-volume
      volumes:
        - name: samba-share-volume
          persistentVolumeClaim:
            claimName: samba-share-claim
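As an aside, the `extensions/v1beta1` Deployment API has since been removed from Kubernetes (as of v1.16); on newer clusters the same manifest needs `apiVersion: apps/v1` and an explicit selector. A sketch of only the fields that change:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: samba-share-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: samba-share-deployment   # apps/v1 requires an explicit selector
  template:
    metadata:
      labels:
        app: samba-share-deployment
        tier: backend
    # ...pod spec as in the manifest above
```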
It works from each replica, but there are strong warnings against using local volumes in production. I do not know any other way to do this, or what the actual caveats of this configuration are.
Can I do it another way? Would this be acceptable if I monitor the mountpoints and disable the node in Kubernetes when a mount fails?
Answer
I asked the same question on r/kubernetes and a user replied with the comment below. We are trying this now and it seems to work.
We had to deal with a similar situation and I ended up developing a custom FlexVolume driver to mount CIFS shares into pods, based on examples I found online.
I have published a repo with the solution that works for my use case:
https://github.com/juliohm1978/kubernetes-cifs-volumedriver
You still need to install cifs-utils and jq on each Kubernetes host as a prerequisite, but it does allow you to create PersistentVolumes that mount CIFS shares and use them in your pods.
Hope it helps.