Kubernetes PersistentVolumeClaim issues in AWS


Question

We have successfully created the pods, services and replication controllers according to our project requirements. Now we are planning to set up persistent storage in AWS using Kubernetes. I have created a YAML file to create an EBS volume in AWS, and it works as expected. I am able to claim the volume and successfully mount it to my pod (this is for a single replica only).

But when I try to create more than one replica, the pods are not created successfully. The volume is created in only one availability zone, so if a pod is scheduled onto a node in a different zone, it cannot mount the volume and fails to start. How can I create volumes in different zones for the same application? How do I make this work with replicas? How should I create my PersistentVolumeClaims?

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: mongo-pp
  name: mongo-controller-pp
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: mongo-pp
    spec:
      containers:
      - image: mongo
        name: mongo-pp
        ports:
        - name: mongo-pp
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - mountPath: "/opt/couchbase/var"
          name: mypd1
      volumes:
      - name: mypd1
        persistentVolumeClaim:
          claimName: mongo-pvc

Answer

I think the problem you are facing is caused by the underlying storage mechanism, in this case EBS.

When scaling pods behind a replication controller, each replica will attempt to mount the same persistent volume. If you look at the K8s docs regarding EBS, you will see the following:

There are some restrictions when using an awsElasticBlockStore volume: the nodes on which pods are running must be AWS EC2 instances; those instances need to be in the same region and availability zone as the EBS volume; and EBS only supports a single EC2 instance mounting a volume.

So by default, when you scale up behind a replication controller, Kubernetes will try to spread the replicas across different nodes. This means a second node tries to mount the same volume, which is not allowed for EBS.

Basically, I see two options:

  1. Use a different volume type that supports multiple writers, such as nfs or Glusterfs (a sketch of an NFS-backed claim follows this list).
  2. Use a StatefulSet instead of a replication controller and have each replica mount an independent volume (see the volumeClaimTemplates sketch below). This would require database-level replication but provides high availability.
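
For option 1, here is a minimal sketch of an NFS-backed PersistentVolume plus a ReadWriteMany claim. The server address 10.0.0.10 and export path /exports/mongo are placeholders for your own NFS server, and the 10Gi size simply mirrors the claim in the question:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany        # NFS allows multiple nodes to mount the same volume
  nfs:
    server: 10.0.0.10      # placeholder: address of your NFS server
    path: /exports/mongo   # placeholder: exported directory
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

All replicas could then reference mongo-nfs-pvc in the volumes section of the controller, since ReadWriteMany does not have the single-instance attach restriction of EBS.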
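
For option 2, here is a minimal sketch of a StatefulSet that gives each MongoDB replica its own EBS-backed claim through volumeClaimTemplates. The headless Service, the mongo-data claim name, and the apps/v1 API version (older clusters used apps/v1beta1) are assumptions on my part; the image, labels and mount path match the controller in the question:

apiVersion: v1
kind: Service
metadata:
  name: mongo-pp           # headless Service required by the StatefulSet
spec:
  clusterIP: None
  selector:
    name: mongo-pp
  ports:
  - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-pp
spec:
  serviceName: mongo-pp
  replicas: 2
  selector:
    matchLabels:
      name: mongo-pp
  template:
    metadata:
      labels:
        name: mongo-pp
    spec:
      containers:
      - name: mongo-pp
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /opt/couchbase/var
  volumeClaimTemplates:    # one PVC (and so one EBS volume) is created per replica
  - metadata:
      name: mongo-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi

Each pod (mongo-pp-0, mongo-pp-1, ...) gets its own volume in the zone where it is scheduled, so the single-attach restriction no longer bites; you would still configure a MongoDB replica set across the pods to replicate the data itself.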
