Adding large file to Docker build gives EOF exception


Problem Description

To restore our production database locally, I add a Postgres dump to a Docker build. Until recently this was a smooth process, but as the database steadily grows (now 80+ GB), it seems I've hit an unknown threshold. The build crashes at a simple ADD dmp.sql.gz /tmp/dmp.sql.gz line in the Dockerfile (so before it actually unzips or executes the contents of the file):

Sending build context to Docker daemon  87.42GB
Step 1/6 : FROM ecr.url/postgres96
 ---> 36f64c15a938
...
Step 5/6 : ADD dmp.sql.gz /tmp/dmp.sql.gz
Error processing tar file(exit status 1): unexpected EOF
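
For reference, the relevant lines of the Dockerfile look roughly like this (a minimal sketch reconstructed from the build steps above; the base image is taken from the output, and the restore command shown as a comment is an assumption, not the original file):

FROM ecr.url/postgres96
...
ADD dmp.sql.gz /tmp/dmp.sql.gz
# later steps would unzip and load the dump, e.g. (assumed):
# RUN gunzip -c /tmp/dmp.sql.gz | psql -U postgres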

The logs of the Docker daemon don't give me much of a clue:

Aug 15 10:02:55 raf-P775DM3-G dockerd[2498]: time="2018-08-15T10:02:55.902896948+02:00" level=error msg="Can't add file /var/lib/docker/overlay2/84787e6108e9df6739cee9905989e2aab8cc72298cbffa107facda39158b633d/diff/tmp/dmp.sql.gz to tar: io: read/write on closed pipe"
Aug 15 10:02:55 raf-P775DM3-G dockerd[2498]: time="2018-08-15T10:02:55.904099449+02:00" level=error msg="Can't close tar writer: io: read/write on closed pipe"

I followed the actual copying of the file to the overlay filesystem, expecting to see it crash somewhere in the process, but it actually crashes after the whole file has been transferred:

root@raf-P775DM3-G:/home/raf# ls /var/lib/docker/overlay2/e1d241ba14524cff6a7ef3dff8222d4f1ffbc4de05f60cd15d6afbdb2bb9f754/diff/tmp/ -lrta
total 85150928
-rw-r--r-- 1 root root 87194526754 Aug 14 00:01 dmp.sql.gz // -> this is the whole file
drwxr-xr-x 3 root root        4096 Aug 14 17:30 ..
drwxrwxrwt 2 root root        4096 Aug 14 17:30 .

When this dump file was in the 70 GB range, restoring it in this fashion was a time-consuming but smooth process, on different OSes and Docker versions.

Can anyone help figure out the root of the problem?

Currently hitting this on Docker version 18.06.0-ce, build 0ffa825.

PS: I read about a tar header limit of 8 GB which causes an EOF exception (https://github.com/moby/moby/issues/37581), but again, we were restoring 70 GB+ dumps without issue.
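
That 8 GB figure comes from the legacy ustar tar header, whose size field holds 11 octal digits, i.e. at most 077777777777 octal = 8,589,934,591 bytes (just under 8 GiB). A rough way to see the limit on the command line, assuming GNU tar and coreutils (the file name is arbitrary):

truncate -s 9G big.bin                     # sparse 9 GiB test file
tar --format=ustar -cf /dev/null big.bin   # expected to error: size does not fit in a ustar header
tar --format=gnu -cf /dev/null big.bin     # GNU (and PAX) formats encode larger sizes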

Recommended Answer

Try upgrading to 18.09. They changed the tar backend, which should fix this issue. As for why the 70 GB file worked, I suspect it has something to do with compression in the layers, since you cannot trigger this issue with an 8 GB file of zeros. See https://github.com/moby/moby/pull/37771.
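
A quick way to confirm which daemon version is actually running after the upgrade (not part of the original answer, just a standard check):

docker version --format '{{.Server.Version}}'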
