Docker does not free memory after creating and deleting files with PHP


Problem description

I have a PHP daemon script that downloads remote images and stores them locally as temporary files before uploading them to object storage.

PHP's internal memory usage remains stable, but the memory usage reported by Docker/Kubernetes keeps increasing.

I'm not sure whether this is related to PHP, to Docker, or is simply expected Linux behavior.

Example to reproduce the issue:

Docker image: php:7.2.2-apache

<?php
// Create, close, and immediately delete 100,000 temporary files.
for ($i = 0; $i < 100000; $i++) {
    $fp = fopen('/tmp/' . $i, 'w+');
    fclose($fp);

    unlink('/tmp/' . $i);

    unset($fp);
}

Calling free -m inside the container before executing the above script:

          total        used        free      shared  buff/cache   available
Mem:           3929        2276         139          38        1513        1311
Swap:          1023         167         856

After executing the script:

          total        used        free      shared  buff/cache   available
Mem:           3929        2277         155          38        1496        1310
Swap:          1023         167         856

Apparently the memory is released, but calling docker stats php-apache from the host indicates otherwise:

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
ccc19719078f        php-apache          0.00%               222.1MiB / 3.837GiB   5.65%               1.21kB / 0B         1.02MB / 4.1kB      7

The initial memory usage reported by docker stats php-apache was 16.04MiB.

What is the explanation? How do I free the memory?

Running this container in a Kubernetes cluster with resource limits causes the pod to fail and restart repeatedly.

Recommended answer

Yes, a similar issue has been reported here.

Here's the answer from coolljt0725, one of the contributors, explaining why the RES column in top output shows something different from docker stats (I'm just going to quote him as is):

If I understand correctly, the memory usage in docker stats is exactly read from containers's memory cgroup, you can see the value is the same with 490270720 which you read from cat /sys/fs/cgroup/memory/docker/665e99f8b760c0300f10d3d9b35b1a5e5fdcf1b7e4a0e27c1b6ff100981d9a69/memory.usage_in_bytes, and the limit is also the memory cgroup limit which is set by -m when you create container. The statistics of RES and memory cgroup are different, the RES does not take caches into account, but the memory cgroup does, that's why MEM USAGE in docker stats is much more than RES in top

What a user suggested here might actually help you to see the real memory consumption:

Try setting the --memory param of docker run, then check your /sys/fs/cgroup/memory/docker/<container_id>/memory.usage_in_bytes. It should be right.

--memory or -m is described here:

-m, --memory="" - Memory limit (format: <number>[<unit>]). Number is a positive integer. Unit can be one of b, k, m, or g. Minimum is 4M.
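
To see the difference in practice, here is a minimal sketch (not part of the original answer) that compares PHP's own memory usage with the cgroup counter that docker stats reads. It assumes a cgroup v1 setup where the container's memory cgroup is mounted at /sys/fs/cgroup/memory inside the container:

<?php
// Sketch: compare PHP's internal allocator usage with the cgroup v1
// counter that docker stats is based on. The cgroup path below is an
// assumption (cgroup v1, read from inside the container).
$cgroupFile = '/sys/fs/cgroup/memory/memory.usage_in_bytes';

$phpBytes    = memory_get_usage(true);                      // PHP internal usage
$cgroupBytes = (int) trim(file_get_contents($cgroupFile));  // includes page cache

printf("PHP:    %.1f MiB\n", $phpBytes / 1048576);
printf("cgroup: %.1f MiB\n", $cgroupBytes / 1048576);

Run before and after the file loop above, the PHP figure stays flat while the cgroup figure grows with the page cache.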

And now, how to avoid the unnecessary memory consumption. As you posted, unlinking a file in PHP does not necessarily drop the memory cache immediately. Instead, if the Docker container runs in privileged mode (with the --privileged flag), it is possible to call echo 3 > /proc/sys/vm/drop_caches or sync && sysctl -w vm.drop_caches=3 periodically to clear the memory page cache.
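
As a rough sketch of how that could look from inside the PHP daemon (assuming the container really is started with --privileged and the script runs as root; the interval and the in-loop structure are illustrative, not from the original answer):

<?php
// Illustrative only: periodically drop the kernel page cache while
// churning through temporary files. Requires root in a --privileged
// container, otherwise writing to /proc/sys/vm/drop_caches fails.
const DROP_CACHES_EVERY = 10000; // arbitrary example interval

for ($i = 0; $i < 100000; $i++) {
    $path = '/tmp/' . $i;
    $fp = fopen($path, 'w+');
    fclose($fp);
    unlink($path);

    if ($i > 0 && $i % DROP_CACHES_EVERY === 0) {
        // Equivalent to: sync && echo 3 > /proc/sys/vm/drop_caches
        shell_exec('sync');
        file_put_contents('/proc/sys/vm/drop_caches', '3');
    }
}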

And as a bonus, using fopen('php://temp', 'w+') and storing the file temporarily in memory avoids the entire issue.
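
A minimal sketch of that approach (the image URL is a placeholder, and the upload step is left as a comment since it depends on the object-storage SDK; note that php://temp transparently spills to a temporary file once the data exceeds its memory limit, 2 MiB by default):

<?php
// Sketch: keep the downloaded image in a memory-backed stream
// instead of writing it under /tmp. The URL is a placeholder.
$imageUrl = 'https://example.com/image.jpg';

$fp = fopen('php://temp', 'w+');            // memory-backed stream
fwrite($fp, file_get_contents($imageUrl));  // download into the stream
rewind($fp);

// ... pass $fp to the object-storage SDK's upload call here ...

fclose($fp);                                // nothing on disk to unlink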

