Docker does not free memory after creating and deleting files with PHP


Problem description

I have a PHP daemon script that downloads remote images and stores them in local temporary files before uploading them to object storage.

PHP internal memory usage remains stable but the memory usage reported by Docker/Kubernetes keeps increasing.

I'm not sure if this is related to PHP, Docker or expected Linux behavior.

An example that reproduces the issue:

Docker image: php:7.2.2-apache

<?php
// Repeatedly create and immediately delete temporary files.
for ($i = 0; $i < 100000; $i++) {
    $fp = fopen('/tmp/' . $i, 'w+'); // create the file
    fclose($fp);

    unlink('/tmp/' . $i);            // delete it again

    unset($fp);
}
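To confirm the observation below that PHP-internal memory stays stable, the loop can be instrumented with memory_get_usage(). This is just a sketch: the iteration count is reduced from the original 100000, and the probe_ file prefix is my own addition.

```php
<?php
// Sketch: show that PHP's own heap does not grow across the
// create/delete loop; the growth Docker reports lives in the kernel
// page cache, outside PHP's allocator.
$before = memory_get_usage(true);
for ($i = 0; $i < 10000; $i++) {
    $path = sys_get_temp_dir() . '/probe_' . $i;
    $fp = fopen($path, 'w+');
    fclose($fp);
    unlink($path);
}
$after = memory_get_usage(true);
printf("PHP heap delta: %d bytes\n", $after - $before);
```

On a typical run the delta is zero or a few kilobytes, while `free -m` shows buff/cache moving instead.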

Calling free -m inside container before executing the above script:

          total        used        free      shared  buff/cache   available
Mem:           3929        2276         139          38        1513        1311
Swap:          1023         167         856

After executing the script:

          total        used        free      shared  buff/cache   available
Mem:           3929        2277         155          38        1496        1310
Swap:          1023         167         856

Apparently the memory is released, but calling docker stats php-apache from the host indicates otherwise:

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
ccc19719078f        php-apache          0.00%               222.1MiB / 3.837GiB   5.65%               1.21kB / 0B         1.02MB / 4.1kB      7

The initial memory usage reported by docker stats php-apache was 16.04MiB.

What is the explanation? How do I free the memory?

Having this container running in a Kubernetes cluster with resource limits causes the pod to fail and restart repeatedly.

Answer

Yes, a similar issue has been reported here.

Here's the answer from coolljt0725, one of the contributors, explaining why the RES column in top output shows something different from docker stats (I'm just gonna quote him as is):

If I understand correctly, the memory usage in docker stats is exactly read from containers's memory cgroup, you can see the value is the same with 490270720 which you read from cat /sys/fs/cgroup/memory/docker/665e99f8b760c0300f10d3d9b35b1a5e5fdcf1b7e4a0e27c1b6ff100981d9a69/memory.usage_in_bytes, and the limit is also the memory cgroup limit which is set by -m when you create container. The statistics of RES and memory cgroup are different, the RES does not take caches into account, but the memory cgroup does, that's why MEM USAGE in docker stats is much more than RES in top
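The arithmetic behind that quote can be sketched as follows: memory.usage_in_bytes counts page cache, and the "cache" row of memory.stat lets you subtract it to get the cache-free figure that top's RES roughly corresponds to. All numbers here are made up for illustration (chosen so the result matches the 16.04MiB baseline mentioned above).

```shell
# Sketch: derive cache-free usage from the cgroup files the quote mentions.
usage=222863360                      # e.g. cat .../memory.usage_in_bytes
stat='cache 206045184
rss 16818176'                        # e.g. first lines of .../memory.stat
cache=$(printf '%s\n' "$stat" | awk '/^cache /{print $2}')
echo $(( usage - cache ))            # bytes actually resident: 16818176
```

16818176 bytes is 16.04MiB, i.e. the page cache built up by the deleted /tmp files accounts for essentially all of the apparent growth.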

What a user suggested here might actually help you to see the real memory consumption:

Try set the param of docker run --memory, then check your /sys/fs/cgroup/memory/docker/<container_id>/memory.usage_in_bytes. It should be right.

-m, --memory="" - Memory limit (format: <number>[<unit>]). Number is a positive integer. Unit can be one of b, k, m, or g. Minimum is 4M.

And now, how to avoid the unnecessary memory consumption. As you observed, unlinking a file in PHP does not necessarily drop the memory cache immediately. Instead, if the Docker container runs in privileged mode (with the --privileged flag), it is possible to call echo 3 > /proc/sys/vm/drop_caches or sync && sysctl -w vm.drop_caches=3 periodically to clear the memory page cache.

And as a bonus, using fopen('php://temp', 'w+') and storing the file temporarily in memory avoids the entire issue.
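A minimal sketch of that in-memory approach (the payload string stands in for a downloaded image; in real use you would stream the HTTP response body into the handle):

```php
<?php
// Sketch: buffer the download in php://temp instead of a file under
// /tmp, so no page-cache entry is created for a disk file. php://temp
// spills to a real temp file only past a threshold (2 MiB by default,
// tunable as php://temp/maxmemory:<bytes>).
$fp = fopen('php://temp', 'w+');
fwrite($fp, 'fake image bytes');   // stand-in for the downloaded image
rewind($fp);
$data = stream_get_contents($fp);  // pass this to the object-storage client
fclose($fp);
echo strlen($data);                // 16
```

Since the stream handle is the only reference, closing it frees the buffer immediately, and nothing is left behind for the kernel page cache to hold onto.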

