How to connect front to back in k8s cluster internal (connection refused)


Problem Description


I get an error while trying to connect a React frontend to a Node.js Express API server inside a Kubernetes cluster.

I can navigate in the browser to http://localhost:3000 and the web site works.

But I can't navigate to http://localhost:3008, as expected (it should not be exposed).

My goal is to pass the REACT_APP_API_URL environment variable to the frontend in order to set the axios baseURL and establish communication between the frontend and its API server.
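The goal above can be sketched as follows; this is a hypothetical illustration (the localhost fallback is an assumption, and note that Create React App injects REACT_APP_* variables at build time, not at container start):

```javascript
// Hypothetical sketch of the frontend's API client configuration.
// REACT_APP_API_URL comes from the deployment's env; the localhost
// fallback is an assumed default for local development.
const apiBaseUrl = process.env.REACT_APP_API_URL || "http://localhost:3008";

// With axios this value would typically be used as:
//   const api = axios.create({ baseURL: apiBaseUrl });
console.log(apiBaseUrl);
```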

deploy-front.yml

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gbpd-front
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: frontend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: frontend
        track: stable
    spec:
      containers:
        - name: react
          image: binomio/gbpd-front:k8s-3
          ports:
            - containerPort: 3000
          resources:
            limits:
              memory: "150Mi"
            requests:
              memory: "100Mi"
          imagePullPolicy: Always

service-front.yaml

apiVersion: v1
kind: Service
metadata:
  name: gbpd-front
spec:
  selector:
    app: gbpd-api
    tier: frontend
  ports:
  - protocol: "TCP"
    port: 3000
    targetPort: 3000
  type: LoadBalancer

deploy-back.yaml

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gbpd-api
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: backend
      track: stable
  replicas: 3 # tells deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: backend
        track: stable
    spec:
      containers:
        - name: gbpd-api
          image: binomio/gbpd-back:dev
          ports:
            - name: http
              containerPort: 3008

service-back.yaml

apiVersion: v1
kind: Service
metadata:
  name: gbpd-api
spec:
  selector:
    app: gbpd-api
    tier: backend
  ports:
  - protocol: "TCP"
    port: 3008
    targetPort: http

I tried many combinations, and also tried adding "LoadBalancer" to the backend service, but nothing worked...

I can connect perfectly to localhost:3000 and use the frontend, but the frontend can't connect to the backend service.

Question 1: What is the IP/name to use in order to pass REACT_APP_API_URL to the frontend correctly?

Question 2: Why is curl localhost:3008 not answering?

After 2 days trying almost everything in k8s official docs... can't figure out what's happening here, so any help will be much appreciated.

Response from kubectl describe svc gbpd-api:

kubectl describe svc gbpd-api
Name:                     gbpd-api
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"gbpd-api","namespace":"default"},"spec":{"ports":[{"port":3008,"p...
Selector:                 app=gbpd-api,tier=backend
Type:                     LoadBalancer
IP:                       10.107.145.227
LoadBalancer Ingress:     localhost
Port:                     <unset>  3008/TCP
TargetPort:               http/TCP
NodePort:                 <unset>  31464/TCP
Endpoints:                10.1.1.48:3008,10.1.1.49:3008,10.1.1.50:3008
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Solution

I tested your environment, and it worked when using an Nginx image. Let's review the environment:

  • The frontend deployment is correctly described.
  • The frontend service exposes it as a LoadBalancer, meaning your frontend is accessible from outside, perfect.
  • The backend deployment is also correctly described.
  • The backend service stays as a ClusterIP in order to be accessible only from inside the cluster, great.

Below I'll demonstrate the communication between the frontend and the backend.

  • I'm using the same YAMLs you provided, just changed the image to Nginx for example purposes, and since it's an HTTP server I changed containerPort to 80.

Question 1: What is the IP/name to use in order to pass REACT_APP_API_URL to the frontend correctly?

  • I added the ENV variable to the front deployment as requested, and I'll also use it in the demonstration. You must curl the Service name; I used the short version because we are working in the same namespace. You can also use the full name: http://gbpd-api.default.svc.cluster.local:3008
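The two DNS forms of the Service name can be illustrated with a small sketch (the Service name, namespace, and port are taken from the manifests in this answer):

```javascript
// In-cluster DNS names for the backend Service.
const svc = "gbpd-api";       // Service name from service-back.yaml
const namespace = "default";  // namespace both workloads run in
const port = 3008;            // Service port

// Short form: resolvable from pods in the same namespace.
const shortUrl = `http://${svc}:${port}`;

// Fully qualified form: resolvable from any namespace in the cluster.
const fqdnUrl = `http://${svc}.${namespace}.svc.cluster.local:${port}`;

console.log(shortUrl); // http://gbpd-api:3008
console.log(fqdnUrl);  // http://gbpd-api.default.svc.cluster.local:3008
```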

Reproduction:

  • Create the YAMLs and apply them:

$ cat deploy-front.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gbpd-front
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: frontend
      track: stable
  replicas: 2
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: frontend
        track: stable
    spec:
      containers:
        - name: react
          image: nginx
          env:
            - name: REACT_APP_API_URL
              value: http://gbpd-api:3008
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: "150Mi"
            requests:
              memory: "100Mi"
          imagePullPolicy: Always

$ cat service-front.yaml 
apiVersion: v1
kind: Service
metadata:
  name: gbpd-front
spec:
  selector:
    app: gbpd-api
    tier: frontend
  ports:
  - protocol: "TCP"
    port: 3000
    targetPort: 80
  type: LoadBalancer

$ cat deploy-back.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gbpd-api
spec:
  selector:
    matchLabels:
      app: gbpd-api
      tier: backend
      track: stable
  replicas: 3
  template:
    metadata:
      labels:
        app: gbpd-api
        tier: backend
        track: stable
    spec:
      containers:
        - name: gbpd-api
          image: nginx
          ports:
            - name: http
              containerPort: 80

$ cat service-back.yaml 
apiVersion: v1
kind: Service
metadata:
  name: gbpd-api
spec:
  selector:
    app: gbpd-api
    tier: backend
  ports:
  - protocol: "TCP"
    port: 3008
    targetPort: http

$ kubectl apply -f deploy-front.yaml 
deployment.apps/gbpd-front created
$ kubectl apply -f service-front.yaml 
service/gbpd-front created
$ kubectl apply -f deploy-back.yaml 
deployment.apps/gbpd-api created
$ kubectl apply -f service-back.yaml 
service/gbpd-api created

  • Remember, in Kubernetes communication is designed to happen between Services, because pods are always recreated when the deployment changes or when a pod fails.

$ kubectl get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/gbpd-api-dc5b4b74b-kktb9      1/1     Running   0          41m
pod/gbpd-api-dc5b4b74b-mzpbg      1/1     Running   0          41m
pod/gbpd-api-dc5b4b74b-t6qxh      1/1     Running   0          41m
pod/gbpd-front-66b48f8b7c-4zstv   1/1     Running   0          30m
pod/gbpd-front-66b48f8b7c-h58ds   1/1     Running   0          31m

NAME                 TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
service/gbpd-api     ClusterIP      10.0.10.166   <none>         3008/TCP         40m
service/gbpd-front   LoadBalancer   10.0.11.78    35.223.4.218   3000:32411/TCP   42m

  • The pods are the workers, and since they are replaceable by nature, we will connect to a frontend pod to simulate its behaviour and try to connect to the backend Service (the network layer that directs the traffic to one of the backend pods).
  • The nginx image does not come with curl preinstalled, so I will have to install it for demonstration purposes:

$ kubectl exec -it pod/gbpd-front-66b48f8b7c-4zstv -- /bin/bash
root@gbpd-front-66b48f8b7c-4zstv:/# apt update && apt install curl -y
done.

root@gbpd-front-66b48f8b7c-4zstv:/# curl gbpd-api:3008
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

  • Now let's try using the environment variable that was defined:

root@gbpd-front-66b48f8b7c-4zstv:/# printenv | grep REACT
REACT_APP_API_URL=http://gbpd-api:3008
root@gbpd-front-66b48f8b7c-4zstv:/# curl $REACT_APP_API_URL
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...


Considerations:

Question 2: Why is curl localhost:3008 not answering?

  • Since all the YAMLs are correctly described, you must check whether image: binomio/gbpd-back:dev is actually serving on port 3008 as intended.
  • Since it's not a public image I can't test it, so here are troubleshooting steps:
    • Just like we logged into the frontend pod, log into this backend pod and test curl localhost:3008.
    • If it's based on a Linux distro with apt-get, you can run the commands just like I did in my demo:
    • Get the pod name from the backend deployment (example: gbpd-api-6676c7695c-6bs5n)
    • Run kubectl exec -it pod/<POD_NAME> -- /bin/bash
    • Then run apt update && apt install curl -y
    • And test curl localhost:3008
    • If there is no answer, run apt update && apt install net-tools -y
    • And test netstat -nlpt; it has to show the running services and their respective ports, for example:

root@gbpd-api-585df9cb4d-xr6nk:/# netstat -nlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1/nginx: master pro 

  • If the pod returns nothing even with this approach, you will have to check the code in the image.

Let me know if you need help after that!
