How can I deploy to Google App Engine an app that depends on yarn workspaces, without publishing the packages to an npm registry?


Problem description

I am currently migrating our monorepo to yarn workspaces. It contains multiple packages and services, and the services depend on the packages through their respective package.json. I would like to deploy my services to Google App Engine without having to publish the packages to a private npm registry.

I managed to deploy a single service by using a custom runtime and by moving the app.yaml and the Dockerfile to the root of the monorepo, so that the packages and the service are available in the build context. The issue is that I have multiple services: I cannot keep all the Dockerfiles at the root of the monorepo, because each one has to be named Dockerfile, and I cannot change the build context.
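For reference, a minimal sketch of what that single-service setup can look like, with the app.yaml and the Dockerfile at the root of the monorepo (the Node version, the paths and the entry point are assumptions):

    # app.yaml at the root of the monorepo (custom runtime, flexible environment)
    runtime: custom
    env: flex

    # Dockerfile at the root of the monorepo, so the whole workspace is in the build context
    FROM node:18-alpine
    WORKDIR /app
    # copy the root manifest and the lockfile first to get better layer caching
    COPY package.json yarn.lock ./
    COPY packages ./packages
    COPY services/service1 ./services/service1
    RUN yarn install --frozen-lockfile
    # the entry point is an assumption; adjust it to the service's real start script
    CMD ["node", "services/service1/src/index.js"]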

I see two naive solutions:

The first would be to move the app.yaml and the Dockerfile of the corresponding service to the root of the monorepo before deploying. But this looks quite dirty and would make the CI code very complicated.

The second would be to have a single Dockerfile plus service1.yaml, service2.yaml, etc. at the root of the monorepo and to pass variables to the Dockerfile. The problem is that I don't see anything in the App Engine documentation about passing variables to the Dockerfile of a custom runtime.
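Just to illustrate what that second option has in mind, a parameterized Dockerfile could look like the sketch below, where SERVICE is a hypothetical build argument; the blocker is precisely that gcloud app deploy offers no documented way to set it:

    FROM node:18-alpine
    # build argument selecting which service to bundle; hypothetical, since
    # gcloud app deploy gives no way to set it for a custom runtime
    ARG SERVICE=service1
    # persist the value so the shell-form CMD below can still read it at run time
    ENV SERVICE=${SERVICE}
    WORKDIR /app
    COPY package.json yarn.lock ./
    COPY packages ./packages
    COPY services/${SERVICE} ./services/${SERVICE}
    RUN yarn install --frozen-lockfile
    CMD node services/$SERVICE/src/index.js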

My dream solution would be to keep each Dockerfile and app.yaml in the directory of its respective service, and to be able to set the build context through the gcloud CLI (like we can do with docker-compose). Example:

project
├── package.json
├── packages
│   ├── package1
│   │   ├── package.json
│   │   └── src
│   ├── package2
│   │   ├── package.json
│   │   └── src
│   └── package3
│       ├── package.json
│       └── src
├── services
│   ├── service1
│   │   ├── app.yaml
│   │   ├── Dockerfile
│   │   ├── package.json
│   │   └── src
│   └── service2
│       ├── app.yaml
│       ├── Dockerfile
│       ├── package.json
│       └── src
└── yarn.lock

And then run something like: gcloud app deploy services/service1/app.yaml --build-context=.

But I don't see any way of doing this in the documentation.

Do you know how I can get closer to my "dream solution"?

Recommended answer

Adding a possible option suggested in the comments, to give it more visibility.

One possibility would be to keep the docker-compose workflow that you were already using and to integrate it with your App Engine deployments.

Since you were already building your Docker images with docker-compose in order to control the build context, you can push the result of those builds to Google's Container Registry, so that the images can later be used to deploy to App Engine via the --image-url flag.
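As a rough sketch of that flow, assuming a hypothetical project ID my-project and one image per service (all names and paths below are placeholders): a docker-compose.yml at the root of the monorepo sets the build context for each service, the built image is pushed to Container Registry, and the deployment references it with --image-url.

    # docker-compose.yml at the root of the monorepo
    version: "3.8"
    services:
      service1:
        image: gcr.io/my-project/service1      # hypothetical registry path
        build:
          context: .                            # the whole monorepo as build context
          dockerfile: services/service1/Dockerfile

    # build, push and deploy (run `gcloud auth configure-docker` once beforehand)
    docker-compose build service1
    docker-compose push service1
    gcloud app deploy services/service1/app.yaml --image-url=gcr.io/my-project/service1

With --image-url, App Engine skips its own image build and uses the image you pushed, so the app.yaml of each service can stay in its own directory.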

