How to limit ArangoDB RAM usage inside of a docker container?

Question

We use ArangoDB 3.3.14 (Community Edition) with the MMFiles storage engine for a relatively large data set (a bit over 30 GB when backed up). We run it inside a docker container using ECS. Our host VM has 64 GB of RAM, and we have dedicated 55 GB exclusively to the ArangoDB container (we set a hard limit of 55 GB for that container).

When ArangoDB has just started and has all the collections loaded into RAM, it takes about 45 GB, so we have about 10 GB of free RAM to be used for queries, etc.

The problem is that after some period of time (depending on usage) ArangoDB eats all 55 GB of RAM and does not stop there. It continues to consume RAM beyond the set hard limit, and at some point docker kills the container with exit code 137 and the status reason OutOfMemoryError: Container killed due to memory usage.

The restart causes a lot of problems for us because we need to wait until all the collections and graphs are loaded back into RAM. That takes about 1-1.5 hours for our data set, and ArangoDB cannot be used while it is "restarting".

My question is: how can I limit ArangoDB's RAM usage, let's say to 54 GB, so that it never reaches the hard memory limit set for the docker container?

Answer

In 3.3.20, ArangoDB introduced the parameter total-write-buffer-size, which limits the write buffers. You can try adding this to your configuration file:

[rocksdb]
block-cache-size = <value in bytes>  # 30% RAM
total-write-buffer-size = <value in bytes>  # 30% RAM
enforce-block-cache-size-limit = true

[cache]
size = <value in bytes>  # 20% RAM
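The <value in bytes> placeholders have to be worked out from the container's memory limit. As a rough sketch (assuming the 55 GB hard limit from the question and the 30/30/20 percentage split above; the exact split is a suggestion, not a requirement), plain shell arithmetic gives:

```shell
# Sketch: derive byte values from a 55 GiB container hard limit.
# The 30/30/20 split mirrors the percentages suggested above.
TOTAL=$((55 * 1024 * 1024 * 1024))    # container hard limit in bytes
BLOCK_CACHE=$((TOTAL * 30 / 100))     # rocksdb.block-cache-size
WRITE_BUFFER=$((TOTAL * 30 / 100))    # rocksdb.total-write-buffer-size
CACHE=$((TOTAL * 20 / 100))           # cache.size
echo "block-cache-size        = $BLOCK_CACHE"
echo "total-write-buffer-size = $WRITE_BUFFER"
echo "cache.size              = $CACHE"
```

The remaining ~20% is left as headroom for query execution, the OS, and everything else in the container.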

Or you can pass the parameters on the command line:

# cache.size: ~20% RAM; block-cache-size: ~30% RAM; total-write-buffer-size: ~30% RAM
arangod --cache.size <value in bytes> \
    --rocksdb.block-cache-size <value in bytes> \
    --rocksdb.total-write-buffer-size <value in bytes> \
    --rocksdb.enforce-block-cache-size-limit true
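Since the container already has a docker-level hard limit, both limits can be kept together in one place. A docker-compose sketch, where the service name, image tag, and byte values are illustrative assumptions (the sizes are ~20/30/30% of the 55 GB limit from the question):

```yaml
# Sketch only: service name, image tag, and sizes are assumptions.
services:
  arangodb:
    image: arangodb:3.3.20    # first release with total-write-buffer-size
    mem_limit: 55g            # docker-level hard limit, as in the question
    command:
      - arangod
      - --cache.size=11811160064                       # ~20% of 55 GiB
      - --rocksdb.block-cache-size=17716740096         # ~30% of 55 GiB
      - --rocksdb.total-write-buffer-size=17716740096  # ~30% of 55 GiB
      - --rocksdb.enforce-block-cache-size-limit=true
```

An equivalent docker run would pass --memory 55g plus the same arangod flags.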

You can also tune how much memory is assigned to each component according to your usage. But you have to upgrade to at least 3.3.20.
