Docker pipeline's "inside" not working in Jenkins slave running within Docker container

Problem description

I'm having issues getting a Jenkins pipeline script to work that uses the Docker Pipeline plugin to run parts of the build within a Docker container. Both Jenkins server and slave run within Docker containers themselves.

  • Jenkins server running in a Docker container
  • Jenkins slave based on custom image (https://github.com/simulogics/protokube-jenkins-slave) running in a Docker container as well
  • Docker daemon container based on the docker:1.12-dind image (started roughly as sketched after this list)
  • Slave started like so: docker run --link=docker-daemon:docker --link=jenkins:master -d --name protokube-jenkins-slave -e EXTRA_PARAMS="-username xxx -password xxx -labels docker" simulogics/protokube-jenkins-slave
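
For context, the original post does not show how the daemon container was started; a docker:1.12-dind daemon is typically run roughly like this (a hedged reconstruction, with the container name docker-daemon taken from the --link flag above):

# Hypothetical reconstruction of the linked daemon container; dind needs --privileged.
docker run -d --name docker-daemon --privileged docker:1.12-dind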

Basic Docker operations (pull, build and push images) are working just fine with this setup.

  • I want the server to not have to know about Docker at all. This should be a characteristic of the slave/node.
  • I do not need dynamic allocation of slaves or ephemeral slaves. One slave started manually is quite enough for my purposes.
  • Ideally, I want to move away from my custom Docker image for the slave and instead use the inside function provided by the Docker pipeline plugin within a generic Docker slave, e.g. as sketched below.
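
To make that last point concrete, here is a minimal scripted-pipeline sketch (my illustration, not from the original post) of how inside is normally used on a generic slave; the node label docker matches the -labels docker parameter from the slave start command above, and ruby:2.3 is only a placeholder image:

// Sketch only: run the build steps inside a throwaway Ruby container
// on a node carrying the "docker" label.
node('docker') {
    checkout scm
    docker.image('ruby:2.3').inside {
        sh 'bundle install'
    }
}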

This is a representative build step that's causing the issue:

image.inside {
    stage ('Install Ruby Dependencies') {
        sh "bundle install"
    }
}

This would cause an error like this in the log:

sh: 1: cannot create /workspace/repo_branch-K5EM5XEVEIPSV2SZZUR337V7FG4BZXHD4VORYFYISRWIO3N6U67Q@tmp/durable-98bb4c3d/pid: Directory nonexistent

Previously, this warning would be displayed:

71f4de289962-5790bfcc seems to be running inside container 71f4de28996233340c2aed4212248f1e73281f1cd7282a54a36ceeac8c65ec0a but /workspace/repo_branch-K5EM5XEVEIPSV2SZZUR337V7FG4BZXHD4VORYFYISRWIO3N6U67Q could not be found among []

Interestingly enough, exactly this problem is described in the CloudBees documentation for the plugin at https://go.cloudbees.com/docs/cloudbees-documentation/cje-user-guide/index.html#docker-workflow-sect-inside:

For inside to work, the Docker server and the Jenkins agent must use the same filesystem, so that the workspace can be mounted. The easiest way to ensure this is for the Docker server to be running on localhost (the same computer as the agent). Currently neither the Jenkins plugin nor the Docker CLI will automatically detect the case that the server is running remotely; a typical symptom would be errors from nested sh commands such as

cannot create /…@tmp/durable-…/pid: Directory nonexistent or negative exit codes.

When Jenkins can detect that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent.

Unfortunately, the detection described in the last paragraph doesn't seem to work.
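
When the automatic detection fails, the extra arguments can at least be spelled out by hand, because inside() forwards additional options to docker run. This is a sketch only (not from the original post); note that the named container has to exist on the daemon that inside() talks to, which in this dind setup is not the daemon running the slave itself:

// Sketch: pass the --volumes-from flag mentioned in the CloudBees docs explicitly.
// This only helps if "protokube-jenkins-slave" is known to the Docker daemon
// that actually starts the build container.
image.inside("--volumes-from protokube-jenkins-slave") {
    stage ('Install Ruby Dependencies') {
        sh "bundle install"
    }
}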

Since both my server and slave are running in Docker containers, what kind of volume mapping do I have to use to make it work?
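
For illustration, one arrangement that would satisfy the "same filesystem" requirement quoted above (my sketch, not part of the original question or the accepted answer) is to mount one named volume at the same path, /workspace, in both the dind daemon and the slave, so that the workspace path the slave reports also exists on the daemon that bind-mounts it:

# Hypothetical sketch: share a named volume between the daemon and the slave
# so that /workspace resolves to the same data on both sides.
docker run -d --name docker-daemon --privileged \
  -v jenkins-workspace:/workspace \
  docker:1.12-dind

docker run -d --name protokube-jenkins-slave \
  --link=docker-daemon:docker --link=jenkins:master \
  -v jenkins-workspace:/workspace \
  -e EXTRA_PARAMS="-username xxx -password xxx -labels docker" \
  simulogics/protokube-jenkins-slave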

Recommended answer

I've seen variations of this issue, also with the agents powered by the kubernetes-plugin.

I think that for it to work the agent/jnlp container needs to share workspace with the build container.

By build container I am referring to the one that will run the bundle install command.
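
For the kubernetes-plugin case mentioned above, the jnlp agent container and the build container run in the same pod, and the plugin mounts the workspace volume into both, which is exactly the sharing described here. A minimal sketch (my illustration; the label and the ruby image are placeholders):

// Sketch: the implicit jnlp container and the "ruby" container below share
// the pod's workspace volume, so sh steps see the checked-out sources.
podTemplate(label: 'ruby-build', containers: [
    containerTemplate(name: 'ruby', image: 'ruby:2.3', ttyEnabled: true, command: 'cat')
]) {
    node('ruby-build') {
        checkout scm
        container('ruby') {
            sh 'bundle install'
        }
    }
}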

That workspace sharing might be possible via withArgs.

The question is why would you want to do that? Most of the pipeline steps are being executed on master anyway and the actual build will run in the build container. What is the purpose of also using an agent?
