How to limit ArangoDB RAM usage inside of a docker container?

Problem description

We use ArangoDB 3.3.14 (Community Edition) with the MMFiles storage engine for a relatively large data set (a bit over 30 GB when you back it up). We run it inside a docker container using ECS. Our host VM has 64 GB of RAM, and we have dedicated 55 GB exclusively to the ArangoDB container (we set a hard limit of 55 GB for that container).

When ArangoDB has just started and has loaded all the collections into RAM, it takes about 45 GB, so we have about 10 GB of free RAM to be used for queries, etc.

The problem is that after some period of time (depending on usage) ArangoDB eats all 55 GB of RAM and does not stop there. It keeps consuming RAM beyond the set hard limit, and at some point Docker kills the container with exit code 137 and the status reason OutOfMemoryError: Container killed due to memory usage.

The restart causes a lot of problems for us because we need to wait until all the collections and graphs are loaded back into RAM again. That takes about 1-1.5 hours for our data set, and ArangoDB cannot be used while it is "restarting".

My question is: how can I limit ArangoDB's RAM usage, let's say to 54 GB, so that it never reaches the hard memory limit set for the docker container?

Recommended answer

In 3.3.20, ArangoDB introduced the parameter total-write-buffer-size, which limits the write buffer. You can try adding this to your configuration file:

[rocksdb]
block-cache-size = <value in bytes>  # 30% RAM
total-write-buffer-size = <value in bytes>  # 30% RAM
enforce-block-cache-size-limit = true

[cache]
size = <value in bytes>  # 20% RAM
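For concreteness, here is one hypothetical way to turn those percentage comments into byte values for the 55 GB container from the question (the 30/30/20 split follows the comments above; treating 55 GB as 55 GiB is my assumption, not part of the answer):

```shell
# Hypothetical sizing for a container with a 55 GiB memory limit.
TOTAL=$((55 * 1024 * 1024 * 1024))   # 55 GiB in bytes
BLOCK_CACHE=$((TOTAL * 30 / 100))    # rocksdb.block-cache-size
WRITE_BUFFER=$((TOTAL * 30 / 100))   # rocksdb.total-write-buffer-size
CACHE=$((TOTAL * 20 / 100))          # cache.size
echo "$BLOCK_CACHE $WRITE_BUFFER $CACHE"
```

The remaining ~20% of the limit is left as headroom for everything else the server allocates.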

or you can pass the parameters on the command line:

# cache.size ≈ 20% of RAM; block-cache-size and total-write-buffer-size ≈ 30% of RAM each
arangod \
    --cache.size <value in bytes> \
    --rocksdb.block-cache-size <value in bytes> \
    --rocksdb.total-write-buffer-size <value in bytes> \
    --rocksdb.enforce-block-cache-size-limit true

You can also tune how much memory is assigned to each individual component to match your usage. But you have to upgrade to at least 3.3.20.
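Putting it together for a Docker deployment, a full invocation might look like the sketch below. It assembles the command in a shell variable so it can be inspected before running; the arangodb:3.3.20 image tag and the literal byte values (20%/30%/30% of 55 GiB) are assumptions for illustration, not part of the original answer:

```shell
# Sketch only: image tag and byte values are assumed, adjust for your setup.
CMD="docker run -d --memory=55g arangodb:3.3.20 arangod \
 --cache.size 11811160064 \
 --rocksdb.block-cache-size 17716740096 \
 --rocksdb.total-write-buffer-size 17716740096 \
 --rocksdb.enforce-block-cache-size-limit true"
echo "$CMD"
```

Keeping the sum of the explicit budgets below the container's --memory limit is the point: it leaves headroom so the container is throttled by arangod's own caps rather than killed by the OOM killer.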
