How to use Docker in the development phase of a DevOps life cycle?


Question

I have a couple of questions related to the usage of Docker in a development phase.

I am going to propose three different scenarios of how I think Docker could be used in a development environment. Let's imagine that we are creating a REST API in Java and Spring Boot. For this I will need a MySQL database.

  1. The first scenario is to have a docker-compose for development with the MySQL container, and a production docker-compose with MySQL and the Java application (jar) in another container. To develop, I launch docker-compose-dev.yml to start only the database. The application is launched and debugged from the IDE, for example IntelliJ IDEA. Any change made to the code is picked up by the IDE, which relaunches the application with the change applied. (A sketch of such a development file is shown after this list.)

  2. The second scenario is to have, for both the development and production environments, a docker-compose with the database and application containers. That way, every time I make a change in the code, I have to rebuild the image so that the changes are loaded into it, and the containers are launched again. This scenario may be the most typical one used for development with Docker, but it seems very slow due to the need to rebuild the image every time there is a change.

  3. The third scenario is a mixture of the previous two: two docker-compose files. The development docker-compose contains both containers, but with mechanisms that allow a live reload of the application, mapping volumes and using, for example, Spring Dev Tools. In this way, the containers are launched and, in case of any change in the files, the application container detects the change and relaunches. For production, a docker-compose would be created simply with both containers, but without the live-reload functionality. This would be the ideal scenario, in my opinion, but I think it is very dependent on the technologies used, since not all of them allow live reload.
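
For reference, here is a minimal sketch of what the development file in scenario 1 could look like. The image tag, port and credentials are placeholders for illustration, not values from an actual project:

    # docker-compose-dev.yml - starts only the database; the application itself
    # is launched and debugged from the IDE on the host machine
    version: "3.8"
    services:
      db:
        image: mysql:8.0
        ports:
          - "3306:3306"        # exposed so the app running in the IDE can connect
        environment:
          MYSQL_ROOT_PASSWORD: example
          MYSQL_DATABASE: apidb

It would be started with something like docker-compose -f docker-compose-dev.yml up before launching the application from the IDE.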

The questions are as follows.

  • Which of these scenarios is the most typical when using Docker for development?

  • Is scenario 1 a good approach? That is, dockerize only external services, such as databases, queues, etc., and perform the development and debugging of the application with the IDE, without using Docker for it.

These doubts and scenarios came up after I ran into the problem that scenario 2 has: with each change in the code, having to rebuild the image and start the containers again is a significant waste of time. In short, the question would be: how can this be avoided?

Thanks in advance for your time.

NOTE: This may be an opinion-based question, but it would be nice to know how developers usually deal with these problems.

Solution

Disclaimer: this is my own opinion on the subject as asked by Mr. Mars. Even though I did my best to back my answer with actual sources, it's mostly based on my own experience and a bit of common sense.

Which of these scenarios is the most typical when using Docker for development?

I have seen all 3 scenarios in several projects, each with its advantages and drawbacks. However, I think scenario 3, with a Docker Compose setup allowing for dynamic code reload, is the most advantageous in terms of flexibility and consistency:

  • Dev and Prod Docker Compose files are close matches, meaning the Dev environment is as close as possible to the Prod environment
  • You do not have to rebuild the image constantly while developing, but it's easy to do when you need to
  • Lots of technologies support such a scenario, such as Spring Dev Tools as you mentioned, but also Python Flask, etc.
  • You can easily leverage Docker Compose extends, a.k.a. the configuration sharing mechanism (also possible with scenario 2); a short sketch follows this list
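
To illustrate that last point, here is a hypothetical sketch of the extends mechanism; the file and service names (common-services.yml, api) are made up for the example, and extends support depends on the Compose file format version you use:

    # common-services.yml - shared service definition
    services:
      api:
        build: .
        environment:
          SPRING_PROFILES_ACTIVE: default

    # docker-compose.dev.yml - reuses the shared definition and adds dev-only settings
    services:
      api:
        extends:
          file: common-services.yml
          service: api
        volumes:
          - ./src:/app/src      # mount local code for live reload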

Is scenario 1 a good approach? That is, dockerize only external services, such as databases, queues, etc., and perform the development and debugging of the application with the IDE, without using Docker for it.

Scenario 1 is quite common, but the IDE environment would probably differ from the one inside the Docker container (and it would be difficult to keep the version of every library, dependency, etc. in sync between the IDE environment and the Docker environment). It would also probably require an intermediate step between Dev and Production, to actually test the Docker image built after the Dev work, before going to Production.

In my own experience, doing this is great when you do not want to deal too much with Docker while actually doing development, and/or when the language or technology you use is not suited to the dynamic reload described in scenario 3. But in the end it only adds drift between your environments and more complexity between the Dev and Prod deployment methods.

Having to rebuild the image and start the containers again is a significant waste of time. In short, a question would be: how to avoid this?

Besides the scenarios you describe, you can also decently (even drastically) reduce image build time by leveraging the Docker build cache and designing your Dockerfile accordingly. For example, a Python application would typically copy its code as the last (or almost last) step of the build to avoid invalidating the cache, and for a Java app it is possible to split the code so that the entire application does not have to be recompiled every time a bit of code changes - that would depend on your actual setup. A Dockerfile sketch along these lines follows.
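
As an illustration only, a hypothetical multi-stage Dockerfile for a Maven-based Spring Boot application could be structured like this (image tags and paths are assumptions, not taken from the question):

    # Build stage: resolve dependencies in their own layer so they stay cached
    FROM maven:3.8-openjdk-17 AS build
    WORKDIR /app
    # Copy only the build descriptor first; this layer is reused while pom.xml is unchanged
    COPY pom.xml .
    RUN mvn -B dependency:go-offline
    # Copy the sources last; only the layers from here on rebuild when code changes
    COPY src ./src
    RUN mvn -B package -DskipTests

    # Runtime stage: ship only the built jar
    FROM eclipse-temurin:17-jre
    COPY --from=build /app/target/*.jar /app/app.jar
    ENTRYPOINT ["java", "-jar", "/app/app.jar"]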


I personally use a workflow roughly matching scenario 3 such as:

  • a docker-compose.yml file corresponding to my Production environment
  • a docker-compose.dev.yml which overrides some aspects of my main Docker Compose file, such as mounting code from my machine, adding dev-specific flags to commands, etc. (a sketch of such a file pair follows this list) - it would be run like

    docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
    

    but it would also be possible to have a docker-compose.override.yml, which Docker Compose picks up by default as the override file

  • in some situations I have to use other overrides, such as a docker-compose.ci.yml on my CI, but usually the main Docker Compose file is enough to describe my Prod environment (and if that's not the case, a docker-compose.prod.yml does the trick)
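
A minimal, hypothetical sketch of such a file pair, assuming a Spring Boot service named api and a MySQL database (names, ports and credentials are invented for the example):

    # docker-compose.yml - describes something close to Production
    version: "3.8"
    services:
      api:
        build: .
        ports:
          - "8080:8080"
        depends_on:
          - db
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: example
          MYSQL_DATABASE: apidb

    # docker-compose.dev.yml - dev-only overrides merged on top of the file above
    services:
      api:
        volumes:
          - ./src:/app/src                        # mount local code so it can be reloaded
        environment:
          SPRING_DEVTOOLS_RESTART_ENABLED: "true" # assumes Spring Dev Tools is on the classpath

Running docker-compose -f docker-compose.yml -f docker-compose.dev.yml up merges the two files, while a plain docker-compose up (for example on a server or in CI) uses only the production-like definition.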
