How can I speed up Rails Docker deployments on Google Cloud Platform?


Question


I'm experimenting with more cost effective ways to deploy my Rails apps, and went through the Ruby Starter Projects to get a feel for Google Cloud Platform.

It's almost perfect, and certainly competitive on price, but the deployments are incredibly slow.

When I run the deployment command from the sample Bookshelf app:

$ gcloud preview app deploy app.yaml worker.yaml --promote

I can see a new gae-builder-vm instance on the Compute Engine/VM Instances page and I get the familiar Docker build output - this takes about ten minutes to finish.
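
For reference, I can also watch these ephemeral builder VMs from the command line using the standard Compute Engine commands (the grep pattern is just my guess at the instance naming I see in the console):

# list Compute Engine instances and filter for the ephemeral builder VMs
$ gcloud compute instances list | grep gae-builder-vm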

If I immediately redeploy, though, I get a new gae-builder-vm spun up that goes through the exact same ten-minute build process with no apparent caching from the first time the image was built.

In both cases, the second module (worker.yaml) gets cached and goes really quickly:

Building and pushing image for module [worker]
---------------------------------------- DOCKER BUILD OUTPUT ----------------------------------------
Step 0 : FROM gcr.io/google_appengine/ruby
---> 3e8b286df835
Step 1 : RUN rbenv install -s 2.2.3 &&     rbenv global 2.2.3 &&     gem install -q --no-rdoc --no-ri bundler --version 1.10.6 &&     gem install -q --no-rdoc --no-ri foreman --version 0.78.0
---> Using cache
---> efdafde40bf8
Step 2 : ENV RBENV_VERSION 2.2.3
---> Using cache
---> 49534db5b7eb
Step 3 : COPY Gemfile Gemfile.lock /app/
---> Using cache
---> d8c2f1c5a44b
Step 4 : RUN bundle install && rbenv rehash
---> Using cache
---> d9f9b57ccbad
Step 5 : COPY . /app/
---> Using cache
---> 503904327f13
Step 6 : ENTRYPOINT bundle exec foreman start --formation "$FORMATION"
---> Using cache
---> af547f521411
Successfully built af547f521411

but it doesn't make sense to me that these layers couldn't be cached between deployments if nothing has changed.

Ideally I'm thinking this would go faster if I triggered a rebuild on a dedicated build server (which could remember Docker images between builds), which then updated a public image file and asked Google to redeploy with the prebuilt image, which would go faster.
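
Something along these lines is what I have in mind (purely a hypothetical sketch: my-project and bookshelf are placeholder names, the exact gcloud docker invocation varies by SDK version, and the final step - telling App Engine to deploy a prebuilt image instead of rebuilding it - is precisely the part I don't know how to do):

# on a long-lived build server, where Docker's layer cache survives between builds
$ docker build -t gcr.io/my-project/bookshelf .

# push the image to Google Container Registry (gcloud wraps docker with registry auth)
$ gcloud docker -- push gcr.io/my-project/bookshelf

# ...and then have App Engine deploy that prebuilt image instead of
# spinning up a gae-builder-vm to rebuild it - this is the missing piece.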

Here's the Docker file that was generated by gcloud:

# This Dockerfile for a Ruby application was generated by gcloud with:
# gcloud preview app gen-config --custom

# The base Dockerfile installs:
# * A number of packages needed by the Ruby runtime and by gems
#   commonly used in Ruby web apps (such as libsqlite3)
# * A recent version of NodeJS
# * A recent version of the standard Ruby runtime to use by default
# * The bundler and foreman gems
FROM gcr.io/google_appengine/ruby

# Install ruby 2.2.3 if not already preinstalled by the base image
# base image: https://github.com/GoogleCloudPlatform/ruby-docker/blob/master/appengine/Dockerfile
# preinstalled ruby versions: 2.0.0-p647 2.1.7 2.2.3
RUN rbenv install -s 2.2.3 && \
    rbenv global 2.2.3 && \
    gem install -q --no-rdoc --no-ri bundler --version 1.10.6 && \
    gem install -q --no-rdoc --no-ri foreman --version 0.78.0
ENV RBENV_VERSION 2.2.3

# To install additional packages needed by your gems, uncomment
# the "RUN apt-get update" and "RUN apt-get install" lines below
# and specify your packages.
# RUN apt-get update
# RUN apt-get install -y -q (your packages here)

# Install required gems.
COPY Gemfile Gemfile.lock /app/
RUN bundle install && rbenv rehash

# Start application on port 8080.
COPY . /app/
ENTRYPOINT bundle exec foreman start --formation "$FORMATION"

How can I make this process faster?

Solution

Well, you're kinda mixing up 2 different cases:

  • re-deploying the exact same app code - indeed, Google doesn't check whether anything in the app changed, in which case the entire Docker image could be re-used - but you already have that image, so effectively you don't even need to re-deploy. Unless you suspect something went wrong and you really insist on re-building the image (which is exactly what the deployment utility does). A rather academic case with little bearing on the cost-effectiveness of real-life app deployments :)
  • you're deploying different app code (it doesn't matter how different) - well, short of re-using the cached artifacts during the image build (which does happen, according to your build logs), the final image still needs to be built to incorporate the new app code - that's unavoidable. Re-using a previously built image is not really possible here.

Update: I missed your point earlier. Upon a closer look at both of your logs I agree with your observation: the cache appears to be local to each build VM (which explains why the cache hits occur only while building the worker module, each time on the same VM where the corresponding default module was built just beforehand), and thus it is not re-used across deployments.

Another Update: there might be a way to get cache hits across deployments...

The gcloud preview app deploy DESCRIPTION indicates that the hosted build could also be done using the Container Builder API (which appears to be the default setting!) in addition to a temporary VM:

To use a temporary VM (with the default --docker-build=remote setting), rather than the Container Builder API to perform docker builds, run:

$ gcloud config set app/use_cloud_build false

Builds done using the Container Builder API might use shared storage, which could allow cache hits across deployments. IMHO it's worth a try.
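
For example (assuming the property is simply the inverse of the documented opt-out above; I haven't verified this myself):

# opt in to builds via the Container Builder API (inverse of the opt-out shown above)
$ gcloud config set app/use_cloud_build true

# then re-deploy as usual and compare build times / cache behaviour
$ gcloud preview app deploy app.yaml worker.yaml --promote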
