Create react app + Gitlab CI + Digital Ocean droplet - Pipeline succeeds but Docker container is deleted right after
Question

I'm taking my first steps into Docker/CI/CD.

To that end, I'm trying to deploy a raw create-react-app to my Digital Ocean droplet (Docker One-Click Application) using Gitlab CI. These are my files:
Dockerfile
# STAGE 1 - Building assets
FROM node:alpine as building_assets_stage
WORKDIR /workspace
## Preparing the image (installing dependencies and building static files)
COPY ./package.json .
RUN yarn install
COPY . .
RUN yarn build
# STAGE 2 - Serving static content
FROM nginx as serving_static_content_stage
ENV NGINX_STATIC_FILE_SERVING_PATH=/usr/share/nginx/html
EXPOSE 80
COPY --from=building_assets_stage /workspace/build ${NGINX_STATIC_FILE_SERVING_PATH}
.gitlab-ci.yml
## Use a Docker image with "docker-compose" installed on top of it.
image: tmaier/docker-compose:latest

services:
  - docker:dind

variables:
  DOCKER_CONTAINER_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: ${SECRETS_DOCKER_LOGIN_USERNAME}/${CI_PROJECT_NAME}:latest

before_script:
  ## Install the ssh agent (so we can access the Digital Ocean droplet) and run it.
  - apk update && apk add openssh-client
  - eval $(ssh-agent -s)
  ## Load the key from the environment variable into the agent store, create the ssh directory and set the right permissions on it.
  - echo "$SECRETS_DIGITAL_OCEAN_DROPLET_SSH_KEY" | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  ## Make sure that ssh will trust the new host, instead of asking
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  ## Test that everything is set up correctly
  - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}

stages:
  - deploy

deploy:
  stage: deploy
  script:
    ## Log this machine into the Docker registry, create a production build and push it to the registry.
    - docker login -u ${SECRETS_DOCKER_LOGIN_USERNAME} -p ${SECRETS_DOCKER_LOGIN_PASSWORD}
    - docker build -t ${DOCKER_IMAGE_TAG} .
    - docker push ${DOCKER_IMAGE_TAG}
    ## Connect to the Digital Ocean droplet, stop/remove any running container, pull the latest image and run it.
    - ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
    - docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
    - docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}
    # Everything works, exit.
    - exit 0
  only:
    - master
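As an aside on the cleanup line: `grep -q .` is what turns the output of `docker ps -q` into a yes/no test, since it exits 0 only when its input contains at least one character (i.e. at least one container ID was printed). A minimal sketch of that pattern, with plain text standing in for the docker output:

```shell
#!/bin/sh
# `grep -q .` exits 0 when its input contains at least one character,
# so `cmd | grep -q . && cleanup` runs cleanup only if cmd printed something.
EMPTY=$(printf '' | grep -q . && echo found || echo empty)
MATCH=$(printf 'abc123\n' | grep -q . && echo found || echo empty)
echo "$EMPTY $MATCH"   # -> empty found
```

Note that when no matching container exists, the whole `&&` chain exits non-zero, which is why the corrected version in the answer below chains the `docker run` with `;` rather than `&&`.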
In a nutshell, on Gitlab CI, I do the following:

- (before_script) Install the ssh agent and load my private SSH key into it, so we can connect to the Digital Ocean droplet;
- (deploy) Build my image and push it to my public Docker Hub repository;
- (deploy) Connect to my Digital Ocean droplet via SSH, pull the image I've just built and run it.
The problem is that if I do everything from my computer's terminal, the container is created and the application is deployed successfully.

If I execute it from the Gitlab CI task, the container is generated but nothing is deployed, because the container dies right after (click here to see the CI job output).

I can guarantee that the container is being erased, because if I manually SSH into the server and run docker ps -a, it doesn't list anything.

I'm mostly confused by the fact that this image's CMD is CMD ["nginx", "-g", "daemon off;"], which shouldn't let my container die, since it keeps a process running in the foreground.

What am I doing wrong? I'm lost.
Thanks.
Answer
My question was answered by d g - thank you very much!

The problem lies in the fact that I was connecting to my Digital Ocean droplet via SSH and then issuing the follow-up commands as separate script lines (which actually run in the runner's shell, not in the remote session), when I should have been passing the entire command to be executed as an argument to the ssh connection instruction.
Changed my .gitlab.yml file from:
## Connect to the Digital Ocean droplet, stop/remove all running containers, pull latest image and execute it.
- ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
- docker ps -q --filter "name=${DOCKER_CONTAINER_NAME}" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}
- docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}
To:
# Executed as follows:
# ssh -T digital-ocean-server "docker cmd1; docker cmd2"
- ssh -T ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker ps -q --filter \"name=${DOCKER_CONTAINER_NAME}\" | grep -q . && docker stop ${DOCKER_CONTAINER_NAME} && docker rm -fv ${DOCKER_CONTAINER_NAME} && docker rmi -f ${DOCKER_IMAGE_TAG}; docker run -d -p 80:80 --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_TAG}"
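The underlying behaviour is easy to reproduce without a server: each line of a CI `script` runs in the runner's shell, so a bare `ssh host` opens a session that exits immediately, and the following lines still run on the runner. Passing the command string as an argument makes it travel with the connection. A minimal local sketch, using `sh -c` as a stand-in for `ssh user@host`:

```shell
#!/bin/sh
# Wrong pattern: the invoked shell (stand-in for `ssh host`) starts and
# exits; the next line runs in the OUTER shell, not the one we opened.
sh -c ':'                       # "session" opens and closes immediately
WHERE="outer"                   # still executed locally, on the "runner"

# Right pattern: the whole command is passed as an argument, so it runs
# inside the invoked shell (i.e. on the remote host in the ssh case).
REMOTE=$(sh -c 'echo inner')

echo "$WHERE $REMOTE"           # -> outer inner
```

This is exactly why the single-line `ssh ... "docker ...; docker run ..."` form above deploys the container while the multi-line form did not.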