Mount rexray/ceph volume in multiple containers on Docker swarm

Question

I have built a Docker Swarm cluster where I am running containers that have persistent data. To allow a container to move to another host in the event of a failure, I need resilient shared storage across the swarm. After looking into the various options, I have implemented the following:

  1. Installed a Ceph Storage Cluster across all nodes of the Swarm and created a RADOS Block Device (RBD). http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
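
For reference, a minimal sketch of what this step might look like on the Ceph admin node, assuming a pool named rbd; the placement-group count, image name and size are illustrative, not values from the question:

# Create a pool for Docker volumes and an RBD image inside it.
ceph osd pool create rbd 128
rbd create test-volume --size 1024 --pool rbd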

  2. Installed Rexray on each node and configured it to use the RBD created above. https://rexray.readthedocs.io/en/latest/user-guide/storage-providers/ceph/
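
A minimal /etc/rexray/config.yml sketch for the Ceph RBD driver, based on the linked documentation; the default pool name is an assumption:

# Write the REX-Ray config and start the service on each node.
sudo tee /etc/rexray/config.yml <<'EOF' >/dev/null
libstorage:
  service: rbd
rbd:
  defaultPool: rbd
EOF
sudo rexray start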

  3. Deployed a Docker stack that mounts a volume using the rexray driver, e.g.:

version: '3'
services:
  test-volume:
    image: ubuntu
    volumes:
      - test-volume:/test
volumes:
  test-volume:
    driver: rexray   # the volume is provisioned and attached via the rexray plugin

This solution works, in that I can deploy the stack, simulate a failure on the node where it is running, and then observe the stack restarted on another node with no loss of persistent data.
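
A failover test along those lines might look like this, assuming the stack above is saved as docker-compose.yml and deployed under the hypothetical stack name test:

# Deploy the stack, then drain the node running the task to simulate a failure.
docker stack deploy -c docker-compose.yml test
docker node update --availability drain <node-running-the-task>
# The task should be rescheduled on another node with the volume re-attached.
docker service ps test_test-volume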

However, I cannot mount a rexray volume in more than one container. My reason for doing so is to use a short-lived "backup container" that simply tars the volume to a snapshot backup while the main container is still running.

Can I mount my rexray volumes into a second container?

The second container only needs read access so it can tar the volume to a snapshot backup while keeping the first container running.
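
In other words, the intended pattern is a one-off container along these lines (the backup path and archive name are illustrative); it is this second, read-only mount that fails with the rexray driver:

# Mount the same volume read-only and tar it to a local backup directory.
docker run --rm \
  -v test-volume:/test:ro \
  -v "$(pwd)":/backup \
  ubuntu tar czf /backup/test-volume-$(date +%F).tar.gz -C /test .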

Answer

Unfortunately the answer is no: in this use case, rexray volumes cannot be mounted into a second container. Some information below will hopefully assist anyone heading down a similar path:

  1. Rexray does not support multiple mounts:

    Today REX-Ray is designed to ensure safety among the many hosts that might access the same volume. This means it enforces the restriction that a single volume can only be in use by one host at a time. (https://github.com/rexray/rexray/issues/343#issuecomment-198568291)

  • But Rexray does support a feature called pre-emption where:

    ..if a second host does request the volume that he is able to forcefully detach it from the original host first, and then bring it to himself. This would simulate a power-off operation of a host attached to a volume where all bits in memory on original host that have not been flushed down is lost. This would support the Swarm use case with a host that fails, and a container trying to be re-scheduled. (https://github.com/rexray/rexray/issues/343#issuecomment-198568291)

  • However, pre-emption is not supported by the Ceph RBD driver. (https://rexray.readthedocs.io/en/stable/user-guide/servers/libstorage/#preemption)
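
For completeness, on storage drivers that do support it, pre-emption is enabled through the REX-Ray config, roughly as sketched below per the linked libstorage docs; with the Ceph RBD driver this setting has no effect:

# Enable pre-emptive volume mounts (not honoured by the rbd driver).
sudo tee /etc/rexray/config.yml <<'EOF' >/dev/null
libstorage:
  service: rbd
  integration:
    volume:
      operations:
        mount:
          preempt: true
EOF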
