What is best practice for sharing database between containers in docker?

Question

Does anyone know what the best practice is for sharing a database between containers in Docker?

What I mean is: I want to create multiple containers in Docker, and these containers will execute CRUD operations on the same database using the same identity.

So far, I have two ideas. One is to create a separate container that runs only the database. The other is to install the database directly on the host machine where Docker is installed.

Which one is better? Or is there another best practice that satisfies this requirement?

Thanks.

Answer

It is hard to answer a 'best practice' question, because it's a matter of opinion, and opinions are off topic on Stack Overflow.

So I will give a specific example of what I have done in a serious deployment.

I'm running ELK (Elasticsearch, Logstash, Kibana). It's containerised.

For my data stores, I have storage containers. These storage containers contain a local filesystem pass-through:

docker create -v /elasticsearch_data:/elasticsearch_data --name ${HOST}-es-data base_image /bin/true

I'm also using etcd and confd to dynamically reconfigure the services that point at the databases. etcd lets me store key-values, so at a simplistic level:

CONTAINER_ID=`docker run -d --volumes-from ${HOST}-es-data elasticsearch-thing`
ES_IP=`docker inspect $CONTAINER_ID | jq -r '.[0].NetworkSettings.Networks.dockernet.IPAddress'`
etcdctl set /mynet/elasticsearch/${HOST}-es-0 $ES_IP
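As a side note, the `jq` lookup above can be tried against a canned `docker inspect`-style document without a running daemon; this is illustrative only, and the `dockernet` network name is specific to my setup:

```shell
# Illustrative only: a canned docker-inspect-style JSON document, so the
# jq filter from the snippet above can be exercised without a docker
# daemon. The 'dockernet' network name is specific to this deployment.
cat > inspect.json <<'EOF'
[{"NetworkSettings": {"Networks": {"dockernet": {"IPAddress": "172.18.0.5"}}}}]
EOF

# Same filter as in the registration snippet
ES_IP=$(jq -r '.[0].NetworkSettings.Networks.dockernet.IPAddress' inspect.json)
echo "$ES_IP"
```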

Because we register it in etcd, we can then use confd to watch the key-value store, monitor it for changes, and rewrite and restart our other container services.
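A minimal sketch of what that confd wiring might look like; the file names, key prefix, and nginx destination below are assumptions for illustration, not the exact files from my deployment:

```shell
# Hypothetical confd wiring (names and paths invented for illustration):
# a template resource that watches keys under /mynet/elasticsearch and
# regenerates an nginx upstream block when they change.
mkdir -p confd/conf.d confd/templates

cat > confd/conf.d/es-upstream.toml <<'EOF'
[template]
src        = "es-upstream.tmpl"
dest       = "/etc/nginx/conf.d/es-upstream.conf"
keys       = ["/mynet/elasticsearch"]
reload_cmd = "nginx -s reload"
EOF

cat > confd/templates/es-upstream.tmpl <<'EOF'
upstream elasticsearch {
{{range gets "/mynet/elasticsearch/*"}}    server {{.Value}}:9200;
{{end}}}
EOF
```

Each time a node registers or disappears, confd re-renders the template from the current etcd keys and runs the reload command.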

I sometimes use haproxy for this, and nginx when I need something a bit more complicated. Both let you specify sets of hosts to 'send' traffic to, with some basic availability/load-balancing mechanisms.
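For a flavour of what gets rendered, here is a hand-rolled sketch of the kind of haproxy backend block such a template produces; the addresses are invented, and in the real setup confd writes this from the values registered in etcd:

```shell
# Sketch only: render a minimal haproxy backend from a list of node
# addresses. The addresses are invented; in practice confd fills these
# in from etcd.
NODES="10.0.0.11:9200 10.0.0.12:9200"

{
  echo "backend elasticsearch"
  echo "    balance roundrobin"
  i=0
  for node in $NODES; do
    i=$((i + 1))
    # 'check' turns on haproxy's basic per-server health checking
    echo "    server es$i $node check"
  done
} > haproxy-backend.cfg

cat haproxy-backend.cfg
```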

That means I can be pretty lazy about restarting/moving/adding Elasticsearch nodes, because the registration process updates the whole environment. A mechanism similar to this is what's used by OpenShift.

So, to answer your question specifically:


  • The DB is packaged in a container, for all the same reasons the other elements are.
  • Volumes for DB storage are storage containers passing through local filesystems.
  • 'Finding' the database is done via etcd on the parent host, but otherwise I've minimised my install footprint. (I have a common 'install' template for docker hosts, and try to avoid adding anything extra to it wherever possible.)

In my opinion, the advantages of Docker are largely diminished if you're reliant on the local host having a (particular) database instance, because you no longer have the ability to package-test-deploy, or 'spin up' a new system in minutes.

(In the example above, I have literally rebuilt the whole thing in 10 minutes, and most of that was the docker pull transferring the images.)
