How do I get one pod to network to another pod in Kubernetes? (SIMPLE)


Problem description

I've been banging my head against this wall on and off for a while. There is a ton of information on Kubernetes on the web, but it's all assuming so much knowledge that n00bs like me don't really have much to go on.

So, can anyone share a simple example of the following (as a yaml file)? All I want is


  • Two pods

  • Let's say one pod has a backend (I don't know - node.js) and one has a frontend (say React).

  • A way to network between them.

And then an example of calling an api call from the back to the front.

I start looking into this sort of thing, and all of a sudden I hit this page - https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this. This is super unhelpful. I don't want or need advanced network policies, nor do I have the time to go through several different service layers that are mapped on top of kubernetes. I just want to figure out a trivial example of a network request.

Hopefully if this example exists on stackoverflow it will serve other people as well.

Any help would be appreciated. Thanks.

EDIT; it looks like the easiest example may be using the Ingress controller.

EDIT EDIT;

I'm working to try and get a minimal example deployed - I'll walk through some steps here and point out my issues.

So below is my yaml file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:      
  rules:
  - host: www.kubeplaytime.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80

What I think this is doing:


  • Deploying a frontend and backend app - I deployed patientplatypus/frontend_example and patientplatypus/backend_example to dockerhub and then pull the images down. One open question I have is, what if I don't want to pull the images from docker hub and rather would just like to load from my localhost, is that possible? (See the sketch after this list.) In this case I would push my code to the production server, build the docker images on the server and then upload to kubernetes. The benefit is that I don't have to rely on dockerhub if I want my images to be private.

  • It is creating two service endpoints that route outside traffic from a web browser to each of the deployments. These services are of type loadBalancer because they are balancing the traffic among the (in this case 3) replicasets that I have in the deployments.

  • Finally, I have an ingress controller which is supposed to allow my services to route to each other through www.kubeplaytime.example and www.kubeplaytime.example/api. However this is not working.
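
On the open dockerhub question above: a minimal sketch, assuming a local minikube cluster (an assumption - the post doesn't say what the cluster is). minikube can reuse its own docker daemon, so locally built images never have to leave the machine:

# point this shell's docker CLI at minikube's daemon
eval $(minikube docker-env)
# these builds now land directly in the cluster's image cache
docker build -t patientplatypus/frontend_example ./frontend
docker build -t patientplatypus/backend_example ./backend

The deployments then need imagePullPolicy: Never (or IfNotPresent) on the containers so kubernetes doesn't try to pull from dockerhub anyway; on a real multi-node cluster the equivalent answer is a private registry.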

What happens when I run this?

patientplatypus:~/Documents/kubePlay:09:17:50$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
ingress.extensions "frontend" created

    • So first, it appears to create all the parts that I need fine with no errors.

      patientplatypus:~/Documents/kubePlay:09:22:30$kubectl get --watch services
      NAME         TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
      backend      LoadBalancer   10.0.18.174   <pending>        80:31649/TCP   1m
      frontend     LoadBalancer   10.0.100.65   <pending>        80:32635/TCP   1m
      kubernetes   ClusterIP      10.0.0.1      <none>           443/TCP        10d
      frontend     LoadBalancer   10.0.100.65   138.91.126.178   80:32635/TCP   2m
      backend      LoadBalancer   10.0.18.174   138.91.121.182   80:31649/TCP   2m

      Second, if I watch the services, I eventually get IP addresses that I can use to navigate in my browser to these sites. Each of the above IP addresses works in routing me to the frontend and backend respectively.

      BUT

      I reach an issue when I try and use the ingress controller - it seemingly deployed, but I don't know how to get there.

      patientplatypus:~/Documents/kubePlay:09:24:44$kubectl get ingresses
      NAME       HOSTS                      ADDRESS   PORTS     AGE
      frontend   www.kubeplaytime.example             80        16m
      

        • So I have no address I can use, and www.kubeplaytime.example does not appear to work.
        • What it appears that I have to do to route to the ingress extension I just created is to use a service and deployment on it in order to get an IP address, but this starts to look incredibly complicated very quickly.

          For example, take a look at this medium article: https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e.

          It would appear that the necessary code to add for just the service routing to the Ingress (ie what he calls the Ingress Controller) appears to be this:

          ---
          kind: Service
          apiVersion: v1
          metadata:
            name: ingress-nginx
          spec:
            type: LoadBalancer
            selector:
              app: ingress-nginx
            ports:
            - name: http
              port: 80
              targetPort: http
            - name: https
              port: 443
              targetPort: https
          ---
          kind: Deployment
          apiVersion: extensions/v1beta1
          metadata:
            name: ingress-nginx
          spec:
            replicas: 1
            template:
              metadata:
                labels:
                  app: ingress-nginx
              spec:
                terminationGracePeriodSeconds: 60
                containers:
                - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
                  name: ingress-nginx
                  imagePullPolicy: Always
                  ports:
                    - name: http
                      containerPort: 80
                      protocol: TCP
                    - name: https
                      containerPort: 443
                      protocol: TCP
                  livenessProbe:
                    httpGet:
                      path: /healthz
                      port: 10254
                      scheme: HTTP
                    initialDelaySeconds: 30
                    timeoutSeconds: 5
                  env:
                    - name: POD_NAME
                      valueFrom:
                        fieldRef:
                          fieldPath: metadata.name
                    - name: POD_NAMESPACE
                      valueFrom:
                        fieldRef:
                          fieldPath: metadata.namespace
                  args:
                  - /nginx-ingress-controller
                  - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
          ---
          kind: Service
          apiVersion: v1
          metadata:
            name: nginx-default-backend
          spec:
            ports:
            - port: 80
              targetPort: http
            selector:
              app: nginx-default-backend
          ---
          kind: Deployment
          apiVersion: extensions/v1beta1
          metadata:
            name: nginx-default-backend
          spec:
            replicas: 1
            template:
              metadata:
                labels:
                  app: nginx-default-backend
              spec:
                terminationGracePeriodSeconds: 60
                containers:
                - name: default-http-backend
                  image: gcr.io/google_containers/defaultbackend:1.0
                  livenessProbe:
                    httpGet:
                      path: /healthz
                      port: 8080
                      scheme: HTTP
                    initialDelaySeconds: 30
                    timeoutSeconds: 5
                  resources:
                    limits:
                      cpu: 10m
                      memory: 20Mi
                    requests:
                      cpu: 10m
                      memory: 20Mi
                  ports:
                  - name: http
                    containerPort: 8080
                    protocol: TCP
          

          This would seemingly need to be appended to my other yaml code above in order to get a service entry point for my ingress routing, and it does appear to give an ip:

          patientplatypus:~/Documents/kubePlay:09:54:12$kubectl get --watch services
          NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
          backend                 LoadBalancer   10.0.31.209   <pending>     80:32428/TCP                 4m
          frontend                LoadBalancer   10.0.222.47   <pending>     80:32482/TCP                 4m
          ingress-nginx           LoadBalancer   10.0.28.157   <pending>     80:30573/TCP,443:30802/TCP   4m
          kubernetes              ClusterIP      10.0.0.1      <none>        443/TCP                      10d
          nginx-default-backend   ClusterIP      10.0.71.121   <none>        80/TCP                       4m
          frontend   LoadBalancer   10.0.222.47   40.121.7.66   80:32482/TCP   5m
          ingress-nginx   LoadBalancer   10.0.28.157   40.121.6.179   80:30573/TCP,443:30802/TCP   6m
          backend   LoadBalancer   10.0.31.209   40.117.248.73   80:32428/TCP   7m
          

          So ingress-nginx appears to be the site I want to get to. Navigating to 40.121.6.179 returns a default 404 message (default backend - 404) - it does not go to frontend as / ought to route. /api returns the same. Navigating to my host namespace www.kubeplaytime.example returns a 404 from the browser - no error handling.
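
          A quick check of whether the Ingress rules themselves are at fault (a debugging sketch, assuming the nginx controller above): nginx routes on the HTTP Host header, and www.kubeplaytime.example has no real DNS entry here, so a request arriving at the bare IP falls through to the default backend's 404. Supplying the expected Host header by hand should exercise the rules:

          kubectl describe ingress frontend
          curl -H 'Host: www.kubeplaytime.example' http://40.121.6.179/
          curl -H 'Host: www.kubeplaytime.example' http://40.121.6.179/api

          An /etc/hosts entry pointing www.kubeplaytime.example at 40.121.6.179 would make the browser version of this test work too.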

          Questions


          • Is the Ingress Controller strictly necessary, and if so is there a less complicated version of this?

          I feel I am close, what am I doing wrong?
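
          On the first question: a possibly less complicated route, assuming the cluster is minikube (an assumption - the cluster type isn't stated), is to let the platform install the controller rather than hand-rolling the YAML above:

          # minikube ships an nginx ingress controller as an addon
          minikube addons enable ingress

          Managed offerings such as GKE similarly provide a default ingress controller out of the box.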

          Full YAML

          Available here: https://gist.github.com/patientplatypus/fa07648339ee6538616cb69282a84938

          Thanks for the help!

          EDIT EDIT EDIT

          I've attempted to use HELM. On the surface it appears to be a simple interface, and so I tried spinning it up:

          patientplatypus:~/Documents/kubePlay:12:13:00$helm install stable/nginx-ingress
          NAME:   erstwhile-beetle
          LAST DEPLOYED: Sun May  6 12:13:30 2018
          NAMESPACE: default
          STATUS: DEPLOYED
          
          RESOURCES:
          ==> v1/ConfigMap
          NAME                                       DATA  AGE
          erstwhile-beetle-nginx-ingress-controller  1     1s
          
          ==> v1/Service
          NAME                                            TYPE          CLUSTER-IP   EXTERNAL-IP  PORT(S)                     AGE
          erstwhile-beetle-nginx-ingress-controller       LoadBalancer  10.0.216.38  <pending>    80:31494/TCP,443:32118/TCP  1s
          erstwhile-beetle-nginx-ingress-default-backend  ClusterIP     10.0.55.224  <none>       80/TCP                      1s
          
          ==> v1beta1/Deployment
          NAME                                            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
          erstwhile-beetle-nginx-ingress-controller       1        1        1           0          1s
          erstwhile-beetle-nginx-ingress-default-backend  1        1        1           0          1s
          
          ==> v1beta1/PodDisruptionBudget
          NAME                                            MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
          erstwhile-beetle-nginx-ingress-controller       1              N/A              0                    1s
          erstwhile-beetle-nginx-ingress-default-backend  1              N/A              0                    1s
          
          ==> v1/Pod(related)
          NAME                                                             READY  STATUS             RESTARTS  AGE
          erstwhile-beetle-nginx-ingress-controller-7df9b78b64-24hwz       0/1    ContainerCreating  0         1s
          erstwhile-beetle-nginx-ingress-default-backend-849b8df477-gzv8w  0/1    ContainerCreating  0         1s
          
          
          NOTES:
          The nginx-ingress controller has been installed.
          It may take a few minutes for the LoadBalancer IP to be available.
          You can watch the status by running 'kubectl --namespace default get services -o wide -w erstwhile-beetle-nginx-ingress-controller'
          
          An example Ingress that makes use of the controller:
          
            apiVersion: extensions/v1beta1
            kind: Ingress
            metadata:
              annotations:
                kubernetes.io/ingress.class: nginx
              name: example
              namespace: foo
            spec:
              rules:
                - host: www.example.com
                  http:
                    paths:
                      - backend:
                          serviceName: exampleService
                          servicePort: 80
                        path: /
              # This section is only required if TLS is to be enabled for the Ingress
              tls:
                  - hosts:
                      - www.example.com
                    secretName: example-tls
          
          If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
          
            apiVersion: v1
            kind: Secret
            metadata:
              name: example-tls
              namespace: foo
            data:
              tls.crt: <base64 encoded cert>
              tls.key: <base64 encoded key>
            type: kubernetes.io/tls
          

          Seemingly this is really nice - it spins everything up and gives an example of how to add an ingress. Since I spun up helm in a blank kubectl I used the following yaml file to add in what I thought would be required.

          File:

          apiVersion: apps/v1beta1
          kind: Deployment
          metadata:
            name: frontend
            labels:
              app: frontend
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: frontend
            template:
              metadata:
                labels:
                  app: frontend
              spec:
                containers:
                - name: nginx
                  image: patientplatypus/frontend_example
                  ports:
                  - containerPort: 3000
          ---
          apiVersion: v1
          kind: Service
          metadata:
            name: frontend
          spec:
            type: LoadBalancer
            selector:
              app: frontend
            ports:
              - protocol: TCP
                port: 80
                targetPort: 3000
          ---
          apiVersion: apps/v1beta1
          kind: Deployment
          metadata:
            name: backend
            labels:
              app: backend
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: backend
            template:
              metadata:
                labels:
                  app: backend
              spec:
                containers:
                - name: nginx
                  image: patientplatypus/backend_example
                  ports:
                  - containerPort: 5000
          ---
          apiVersion: v1
          kind: Service
          metadata:
            name: backend
          spec:
            type: LoadBalancer
            selector:
              app: backend
            ports:
              - protocol: TCP
                port: 80
                targetPort: 5000
          ---
          apiVersion: extensions/v1beta1
          kind: Ingress
          metadata:
            annotations:
              kubernetes.io/ingress.class: nginx
          spec:
            rules:
              - host: www.example.com
                http:
                  paths:
                    - path: /api
                      backend:
                        serviceName: backend
                        servicePort: 80
                    - path: /
                      frontend:
                        serviceName: frontend
                        servicePort: 80
          

          Deploying this to the cluster however runs into this error:

          patientplatypus:~/Documents/kubePlay:11:44:20$kubectl create -f kube-deploy.yaml
          deployment.apps "frontend" created
          service "frontend" created
          deployment.apps "backend" created
          service "backend" created
          error: error validating "kube-deploy.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[1]): unknown field "frontend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath, ValidationError(Ingress.spec.rules[0].http.paths[1]): missing required field "backend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath]; if you choose to ignore these errors, turn validation off with --validate=false
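
          Reading the validation error closely, the fix looks mechanical: the second path entry uses the key frontend: where the Ingress schema only has backend: - backend is the field name for every path target, whichever service it points at. The corrected entry would presumably be:

                    - path: /
                      backend:
                        serviceName: frontend
                        servicePort: 80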
          

          So, the question then becomes, well crap how do I debug this? If you spit out the code that helm produces, it's basically non-readable by a person - there's no way to go in there and figure out what's going on.

          Check it out: https://gist.github.com/patientplatypus/0e281bf61307f02e16e0091397a1d863 - over 1000 lines!

          If anyone has a better way to debug a helm deploy add it to the list of open questions.
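
          A couple of helm commands that may help here (a sketch; both exist in the helm v2 used above) - render the chart locally instead of deploying it, or dump exactly what a deployed release sent to the cluster:

          # render the templates without installing anything
          helm install stable/nginx-ingress --dry-run --debug
          # print the manifests of an already-deployed release
          helm get manifest erstwhile-beetle

          It's still a wall of YAML, but it is the exact YAML the cluster received, which makes it greppable.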

          EDIT EDIT EDIT EDIT

          To simplify in the extreme I attempt to make a call from one pod to another only using namespace.

          So here is my React code where I make the http request:

          axios.get('http://backend/test')
          .then(response=>{
            console.log('return from backend and response: ', response);
          })
          .catch(error=>{
            console.log('return from backend and error: ', error);
          })
          

          I've also attempted to use http://backend.exampledeploy.svc.cluster.local/test without luck.
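
          For reference, the DNS names kubernetes gives a service (assuming the default cluster.local cluster domain) - these should all be equivalent, but they only resolve from inside the cluster, not from a browser:

          # from any pod in the exampledeploy namespace:
          curl http://backend/test
          curl http://backend.exampledeploy/test
          curl http://backend.exampledeploy.svc.cluster.local/test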

          Here is my node code handling the get:

          router.get('/test', function(req, res, next) {
            res.json({"test":"test"})
          });
          

          Here is my yaml file that I am uploading to the kubectl cluster:

          apiVersion: apps/v1beta1
          kind: Deployment
          metadata:
            name: frontend
            namespace: exampledeploy
            labels:
              app: frontend
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: frontend
            template:
              metadata:
                labels:
                  app: frontend
              spec:
                containers:
                - name: nginx
                  image: patientplatypus/frontend_example
                  ports:
                  - containerPort: 3000
          ---
          apiVersion: v1
          kind: Service
          metadata:
            name: frontend
            namespace: exampledeploy
          spec:
            type: LoadBalancer
            selector:
              app: frontend
            ports:
              - protocol: TCP
                port: 80
                targetPort: 3000
          ---
          apiVersion: apps/v1beta1
          kind: Deployment
          metadata:
            name: backend
            namespace: exampledeploy
            labels:
              app: backend
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: backend
            template:
              metadata:
                labels:
                  app: backend
              spec:
                containers:
                - name: nginx
                  image: patientplatypus/backend_example
                  ports:
                  - containerPort: 5000
          ---
          apiVersion: v1
          kind: Service
          metadata:
            name: backend
            namespace: exampledeploy
          spec:
            type: LoadBalancer
            selector:
              app: backend
            ports:
              - protocol: TCP
                port: 80
                targetPort: 5000
          

          The uploading to the cluster appears to work as I can see in my terminal:

          patientplatypus:~/Documents/kubePlay:14:33:20$kubectl get all --namespace=exampledeploy 
          NAME                            READY     STATUS    RESTARTS   AGE
          pod/backend-584c5c59bc-5wkb4    1/1       Running   0          15m
          pod/backend-584c5c59bc-jsr4m    1/1       Running   0          15m
          pod/backend-584c5c59bc-txgw5    1/1       Running   0          15m
          pod/frontend-647c99cdcf-2mmvn   1/1       Running   0          15m
          pod/frontend-647c99cdcf-79sq5   1/1       Running   0          15m
          pod/frontend-647c99cdcf-r5bvg   1/1       Running   0          15m
          
          NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
          service/backend    LoadBalancer   10.0.112.160   168.62.175.155   80:31498/TCP   15m
          service/frontend   LoadBalancer   10.0.246.212   168.62.37.100    80:31139/TCP   15m
          
          NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
          deployment.extensions/backend    3         3         3            3           15m
          deployment.extensions/frontend   3         3         3            3           15m
          
          NAME                                        DESIRED   CURRENT   READY     AGE
          replicaset.extensions/backend-584c5c59bc    3         3         3         15m
          replicaset.extensions/frontend-647c99cdcf   3         3         3         15m
          
          NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
          deployment.apps/backend    3         3         3            3           15m
          deployment.apps/frontend   3         3         3            3           15m
          
          NAME                                  DESIRED   CURRENT   READY     AGE
          replicaset.apps/backend-584c5c59bc    3         3         3         15m
          replicaset.apps/frontend-647c99cdcf   3         3         3         15m
          

          However, when I attempt to make the request I get the following error:

          return from backend and error:  
          Error: Network Error
          Stack trace:
          createError@http://168.62.37.100/static/js/bundle.js:1555:15
          handleError@http://168.62.37.100/static/js/bundle.js:1091:14
          App.js:14
          

          Since the axios call is being made from the browser, I'm wondering if it is simply not possible to use this method to call the backend, even though the backend and the frontend are in different pods. I'm a little lost, as I thought this was the simplest possible way to network pods together.
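
          If that's the case, the usual pattern (a sketch based on the Ingress rules from earlier; not verified here) is to give the browser a single origin and let the Ingress split traffic by path, so the frontend code only ever requests relative paths:

          # both requests hit the same public host; the Ingress
          # routes them to different services by path
          curl http://www.kubeplaytime.example/       # -> frontend service
          curl http://www.kubeplaytime.example/api    # -> backend service

          The axios call then becomes axios.get('/api/test'); depending on the controller, a rewrite annotation may be needed so the backend sees /test rather than /api/test.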

          EDIT X5

          I've determined that it is possible to curl the backend from the command line by exec'ing into the pod like this:

          patientplatypus:~/Documents/kubePlay:15:25:25$kubectl exec -ti frontend-647c99cdcf-5mfz4 --namespace=exampledeploy -- curl -v http://backend/test
          * Hostname was NOT found in DNS cache
          *   Trying 10.0.249.147...
          * Connected to backend (10.0.249.147) port 80 (#0)
          > GET /test HTTP/1.1
          > User-Agent: curl/7.38.0
          > Host: backend
          > Accept: */*
          > 
          < HTTP/1.1 200 OK
          < X-Powered-By: Express
          < Content-Type: application/json; charset=utf-8
          < Content-Length: 15
          < ETag: W/"f-SzkCEKs7NV6rxiz4/VbpzPnLKEM"
          < Date: Sun, 06 May 2018 20:25:49 GMT
          < Connection: keep-alive
          < 
          * Connection #0 to host backend left intact
          {"test":"test"}
          

          What this means is, without a doubt, because the front end code is being executed in the browser it needs Ingress to gain entry into the pod, as http requests from the front end are what's breaking with simple pod networking. I was unsure of this, but it means Ingress is necessary.

          Answer

          As it turns out I was over-complicating things. Here is the Kubernetes file that works to do what I want. You can do this using two deployments (front end, and backend) and one service entrypoint. As far as I can tell, a service can load balance to many (not just 2) different deployments, meaning for practical development this should be a good start to micro service development. One of the benefits of an ingress method is allowing the use of path names rather than port numbers, but given the difficulty it doesn't seem practical in development.

          Here is the yaml file:

          apiVersion: apps/v1beta1
          kind: Deployment
          metadata:
            name: frontend
            labels:
              app: exampleapp
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: exampleapp
            template:
              metadata:
                labels:
                  app: exampleapp
              spec:
                containers:
                - name: nginx
                  image: patientplatypus/kubeplayfrontend
                  ports:
                  - containerPort: 3000
          ---
          apiVersion: apps/v1beta1
          kind: Deployment
          metadata:
            name: backend
            labels:
              app: exampleapp
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: exampleapp
            template:
              metadata:
                labels:
                  app: exampleapp
              spec:
                containers:
                - name: nginx
                  image: patientplatypus/kubeplaybackend
                  ports:
                  - containerPort: 5000
          ---
          apiVersion: v1
          kind: Service
          metadata:
            name: entrypt
          spec:
            type: LoadBalancer
            ports:
            - name: backend
              port: 8080
              targetPort: 5000
            - name: frontend
              port: 81
              targetPort: 3000
            selector:
              app: exampleapp
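
          Once entrypt gets an external IP, the two named ports above map out like this (a sketch of the resulting URLs; <EXTERNAL-IP> stands in for whatever kubectl reports):

          kubectl get service entrypt
          # frontend: http://<EXTERNAL-IP>:81
          # backend:  http://<EXTERNAL-IP>:8080/test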
          

          Here are the bash commands I use to get it to spin up (you may have to add a login command - docker login - to push to dockerhub):

          #!/bin/bash
          
          # stop all containers
          echo stopping all containers
          docker stop $(docker ps -aq)
          # remove all containers
          echo removing all containers
          docker rm $(docker ps -aq)
          # remove all images
          echo removing all images
          docker rmi $(docker images -q)
          
          echo building backend
          cd ./backend
          docker build -t patientplatypus/kubeplaybackend .
          echo push backend to dockerhub
          docker push patientplatypus/kubeplaybackend:latest
          
          echo building frontend
          cd ../frontend
          docker build -t patientplatypus/kubeplayfrontend .
          echo push frontend to dockerhub
          docker push patientplatypus/kubeplayfrontend:latest
          
          echo now working on kubectl
          cd ..
          echo deleting previous variables
          kubectl delete pods,deployments,services entrypt backend frontend
          echo creating deployment
          kubectl create -f kube-deploy.yaml
          echo watching services spin up
          kubectl get services --watch
          

          The actual code is just a frontend react app making an axios http call to a backend node route on componentDidMount of the starting App page.

          You can also see a working example here: https://github.com/patientplatypus/KubernetesMultiPodCommunication

          Thanks again everyone for your help.
