How to connect API with Web SPA through Docker


Problem description


I have an API based on PHP (Lumen) and an ecommerce site based on React. Both of them work fine. The problem comes when I try to make them work through Docker. I'd like to deploy the whole app by running just a single command.

The problem is that the React app doesn't connect to the API.

I tried @Suman Kharel's answer on this post:

Proxying API Requests in Docker Container running react app

But it doesn't work. Does anyone know how I can sort it out?

Here is my repo on Bitbucket:

https://bitbucket.org/mariogarciait/ecommerce-submodule/src/master/

Hopefully someone knows what I am doing wrong.

Thanks

Solution

If you want to start all of your apps with a single Docker command, you can use docker-compose.

Using docker-compose is only suitable for testing purposes or a very limited production infrastructure. The best approach is to host each of your artifacts on a separate machine.

Please read the following to understand some points:

When you use docker-compose, all the services are deployed on the same machine, but each one runs in its own container, and just one main process runs inside each container.

So if you enter a container (for example, a Node.js web app) and list the processes, you will see something like this:

nodejs .... 3001

And inside another container, like a Postgres database:

postgres .... 5432

So if the Node.js web app needs to connect to the database from inside its container, it must use the IP of the Postgres database instead of localhost, because inside the Node.js container only one process is listening on localhost:

localhost 3001

So using localhost:5432 won't work inside the Node.js container. The solution is to use the IP of the Postgres container instead of localhost: 10.10.100.101:5432.
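
If you need to look up a container's IP, Docker can print it for you. A minimal sketch, assuming a running container named my_postgres (a hypothetical name):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my_postgres
# prints something like 10.10.100.101; then connect to 10.10.100.101:5432 instead of localhost:5432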

Solutions

When we have several containers (docker-compose) with dependencies between them, Docker proposes features such as links and user-defined networks.

In summary, with these features Docker creates a kind of "special network" in which all your containers live in peace, without the complications of IPs!
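
For example, on a user-defined network, Docker's embedded DNS lets one container reach another by its compose service name instead of an IP. A minimal sketch (the image name for api-php is hypothetical; the other names are borrowed from the examples below):

services:
  db:
    image: mysql:5.7.22
    networks:
      - mynetwork
  api-php:
    image: my-api-image    # hypothetical image name
    environment:
      - DATABASE_HOST=db   # "db" resolves to the database container via Docker's DNS
    networks:
      - mynetwork

networks:
  mynetwork:
    driver: bridge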

Docker networks with host.docker.internal

Just for testing, a quick deploy, or a very limited production environment, you can use a new feature in the latest versions of docker-compose (1.29.2) and Docker.

Add this at the end of your docker-compose file:

networks:
  mynetwork:
    driver: bridge

and add this to each of your containers:

networks:
  - mynetwork     

And if some container needs the host IP, use host.docker.internal instead of the IP:

environment:
  - DATABASE_HOST=host.docker.internal
  - API_BASE_URL=host.docker.internal:8020/api

Finally, in the containers that use host.docker.internal, add this:

extra_hosts:
  - "host.docker.internal:host-gateway"

Note: This was tested on Ubuntu, not on Mac or Windows, because nobody deploys their real applications on those operating systems.

Environment variables approach

In my opinion, Docker links or networks are a kind of illusion or deceit, because they only work on one machine (development or staging), hiding from us the dependencies and other complex topics that matter once your apps leave your laptop and go to real servers, ready to be used by your users.

Anyway, if you use docker-compose for development or real purposes, these steps will help you manage the IPs between your containers:

  • Get the local IP of your machine and store it in a variable like $MACHINE_HOST in a script such as startup.sh (a sketch appears after the final step below).
  • Remove links or networks from your docker-compose.json.
  • Use $MACHINE_HOST to refer to another container from inside your container.

Example:

db:
  image: mysql:5.7.22
  container_name: db_ecommerce
  ports:
    - "5003:3306"
  environment:
    MYSQL_DATABASE: lumen
    MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD}

api-php:
  container_name: api_ecommerce
  ports:
    - "8020:80"
    - "445:443"
  environment:
    - DATABASE_HOST=$MACHINE_HOST
    - DATABASE_USER=$DATABASE_USER
    - DATABASE_PASSWORD=$DATABASE_PASSWORD
    - ETC=$ETC

web-react:
  container_name: react_ecommerce
  ports:
    - "3001:3000"
  environment:
    - API_BASE_URL=$MACHINE_HOST:8020/api

  • Finally, just run your startup.sh, which sets the variables and then runs the classic docker-compose up -d.
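
A minimal sketch of such a startup.sh, assuming a Linux host (how the local IP is obtained, and the credential values, are assumptions to adapt):

#!/bin/sh
# Export the machine's local IP so docker-compose can substitute $MACHINE_HOST.
export MACHINE_HOST=$(hostname -I | awk '{print $1}')   # assumes Linux
export DATABASE_USER=lumen_user                         # hypothetical value
export DATABASE_PASSWORD=changeme                       # hypothetical value
docker-compose up -d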

Also, in your React app, read the URL of your API using a variable instead of the proxy in package.json:

process.env.REACT_APP_API_BASE_URL
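
For instance, a minimal sketch for a Create React App project (the helper module and endpoint are hypothetical); note that REACT_APP_-prefixed variables are inlined into the bundle at build time:

// src/api.js - hypothetical helper module
const API_BASE_URL = process.env.REACT_APP_API_BASE_URL || 'http://localhost:8020/api';

export function fetchProducts() {
  // hypothetical endpoint on the Lumen API
  return fetch(`${API_BASE_URL}/products`).then((res) => res.json());
}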

Check this to learn how to read environment variables from a React app.

Here you can find more detailed steps on how to use the MACHINE_HOST variable:

Advice

  • Use variables instead of hardcoded values in your docker-compose.json file.
  • Separate your environments: development, testing, and production.
  • Build only in the development stage. In other words, don't use build in your docker-compose.json; at most it could be an alternative for local development.
  • For the testing and production stages, just run the containers that were built and uploaded during the development stage (via a Docker registry).
  • If you use a proxy or an environment variable to read the URL of your API in your React app, your build will only work on one machine. If you need to move it between several environments, such as testing, staging, UAT, etc., you must perform a new build, because the proxy or environment variable in React is hardcoded inside your bundle.js.
  • This is not a problem just for React; it also exists in Angular, Vue, etc. Check the Limitation 1: Every environment requires a separate build section on this page.
  • You can evaluate https://github.com/utec/geofrontend-server to fix the problem explained above (and others, like authentication), if it applies to you.
  • If your plan is to show your web app to real users, the web and the API must have different domains, and of course use https. Example:
    • ecomerce.zenit.com for your React app
    • api.zenit.com or ecomerce-api.zenit.com for your PHP API
  • Finally, if you want to avoid this headache of infrastructure complications and you don't have a team of DevOps engineers and sysadmins, you can use Heroku, DigitalOcean, OpenShift, or other similar platforms. Almost all of them are Docker compatible, so you just need to do a git push of each repo with its Dockerfile inside. The platform will interpret your Dockerfile, deploy it, and assign you a ready-to-use http domain for testing, or a nice domain for production (after you acquire the domain and certificate).
