"gcloud docker push"的速度 [英] Speed of "gcloud docker push"

Question

New to Google Container Registry and the Docker ecosystem in general. I'm pushing an existing image to gcr.io, and I'd expect the task to complete in close to 0 seconds, since all the bits are already on gcr.io. The context is running dev code in the cloud, on lots of cores at the same time, as opposed to the 4 cores my Mac laptop has. I'm running a no-op push to isolate the bottlenecks; real usage pushes about 6 MB of new bytes. It is slow: 14 seconds to perform a no-op. Is there a way to cut this no-op down to less than a second?

$ time gcloud docker push gcr.io/ai2-general/euclid:latest
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
WARNING: login credentials saved in /Users/cristipp/.docker/config.json
Login Succeeded
The push refers to a repository [gcr.io/ai2-general/euclid]
3a67b2b013f5: Layer already exists 
b7c8985fbf02: Layer already exists 
fef418d1a9e8: Layer already exists 
c58360ce048c: Layer already exists 
0030e912789f: Layer already exists 
5f70bf18a086: Layer already exists 
0ece0aa9121d: Layer already exists 
ef63204109e7: Layer already exists 
694ead1cbb4d: Layer already exists 
591569fa6c34: Layer already exists 
998608e2fcd4: Layer already exists 
c12ecfd4861d: Layer already exists 
latest: digest: sha256:04a831f4bf3e3033c40eaf424e447dd173e233329440a3c9796bf1515225546a size: 10321

real    0m14.742s
user    0m0.622s
sys 0m0.181s

14 seconds is a long time. Using plain docker push is faster, but it still wastes 5 precious seconds.

$ time docker push gcr.io/ai2-general/euclid:latest
The push refers to a repository [gcr.io/ai2-general/euclid]
3a67b2b013f5: Layer already exists 
b7c8985fbf02: Layer already exists 
fef418d1a9e8: Layer already exists 
c58360ce048c: Layer already exists 
0030e912789f: Layer already exists 
5f70bf18a086: Layer already exists 
0ece0aa9121d: Layer already exists 
ef63204109e7: Layer already exists 
694ead1cbb4d: Layer already exists 
591569fa6c34: Layer already exists 
998608e2fcd4: Layer already exists 
c12ecfd4861d: Layer already exists 
latest: digest: sha256:04a831f4bf3e3033c40eaf424e447dd173e233329440a3c9796bf1515225546a size: 10321

real    0m5.014s
user    0m0.030s
sys 0m0.011s

I suspect the difference is caused by the 7 login attempts, which take a while to process; whatever remains after that feels like plain docker push overhead.
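A minimal sketch of a possible workaround, assuming the repeated logins are indeed the culprit: authenticate Docker against gcr.io once, then use plain docker push afterwards. The --authorize-only flag of gcloud docker is assumed to be available in this SDK version; treat this as a sketch, not a confirmed fix.

$ # One-time credential setup; saves gcr.io credentials to ~/.docker/config.json
$ # (assumes `gcloud docker --authorize-only` exists in this gcloud version)
$ gcloud docker --authorize-only
$ # Subsequent pushes use plain docker, skipping the repeated login
$ # handshake that `gcloud docker push` performs on every invocation
$ time docker push gcr.io/ai2-general/euclid:latest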

For reference:

$ gcloud --version
Google Cloud SDK 107.0.0

bq 2.0.24
bq-nix 2.0.24
core 2016.04.21
core-nix 2016.03.28
gcloud 
gsutil 4.19
gsutil-nix 4.18
kubectl 
kubectl-darwin-x86_64 1.2.2

Answer

Docker for Mac? Try restarting the daemon.

I find I have to restart Docker (1.12) about once a day, or things begin to slow down. I believe the Docker team is aware of the problem and is tracking the issue.
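A minimal sketch of one way to restart Docker for Mac from the terminal; the quit-and-relaunch approach is my own assumption, not part of the original answer:

$ # Ask the Docker for Mac app to quit cleanly (the daemon stops with it)
$ osascript -e 'quit app "Docker"'
$ # Relaunch the app, which starts the daemon again
$ open -a Docker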

https://forums.docker.com/t/slow-upload-push-to-hub-docker/12072/14
