Pod to Pod communication within a Service


Problem Description

Is it possible to send an HTTP REST request to another K8s Pod that belongs to the same Service in Kubernetes?

E.g. Service name = UserService, 2 Pods (replicas = 2):

Pod 1 --> Pod 2   // using the Pod IP, not the load-balanced Service hostname
Pod 2 --> Pod 1

The connection is a plain REST call: GET 1.2.3.4:7079/user/1

The host and port values are taken from kubectl get ep.
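
A minimal sketch of that lookup, assuming the rendered Service name is userservice (the name is a placeholder for whatever the chart renders):

kubectl get ep userservice
# the ENDPOINTS column lists the backing Pod IP:port pairs, e.g. 1.2.3.4:7079,5.6.7.8:7079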

Both Pod IPs work successfully from outside the Pods, but when I kubectl exec -it into a Pod and make the same request via curl, it returns a 404 Not Found for the endpoint.
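
For reference, the failing in-Pod request looks roughly like this (the Pod name is a placeholder, and the image is assumed to ship curl):

kubectl exec -it userservice-pod-1 -- curl -v http://1.2.3.4:7079/user/1
# returns HTTP/1.1 404 Not Found from inside the Pod, per the symptom above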

Q: Is it possible to make a request to another K8s Pod that is in the same Service?

Q: Why can I successfully ping 1.2.3.4 but not reach the REST API?

Below are my config files.

#values.yml
replicaCount: 1

image:
  repository: "docker.hosted/app"
  tag: "0.1.0"
  pullPolicy: Always
  pullSecret: "a_secret"

service:
  name: http
  type: NodePort
  externalPort: 7079
  internalPort: 7079

ingress:
  enabled: false

deployment.yml

apiVersion: apps/v1   # extensions/v1beta1 is deprecated and removed in current clusters
kind: Deployment
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  # apps/v1 requires an explicit selector matching the Pod template labels
  selector:
    matchLabels:
      app: {{ template "app.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "app.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:

            - name: MY_POD_IP
              valueFrom:
               fieldRef:
                fieldPath: status.podIP
            - name: MY_POD_PORT
              value: "{{ .Values.service.internalPort }}"
          ports:
            - containerPort: {{ .Values.service.internalPort }}
          livenessProbe:
            httpGet:
              path: /actuator/alive
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/ready
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
    {{- end }}
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
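
The MY_POD_IP and MY_POD_PORT variables above use the Kubernetes downward API (fieldRef: status.podIP) to hand each Pod its own IP. A quick way to verify them, assuming the placeholder Pod name below and an image that ships printenv:

kubectl exec -it userservice-pod-1 -- printenv MY_POD_IP MY_POD_PORT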

service.yml

apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.externalPort }}
      targetPort: {{ .Values.service.internalPort }}
      protocol: TCP
      name: {{ .Values.service.name }}
  selector:
    app: {{ template "app.name" . }}
    release: {{ .Release.Name }}
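
Note that Pods can also reach each other through the Service's cluster DNS name rather than raw Pod IPs; a hedged example, assuming the chart renders the Service name as userservice and cluster DNS (CoreDNS/kube-dns) is running:

# from inside either Pod; the Service VIP load-balances across ready endpoints
curl http://userservice:7079/user/1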

[screenshot: the request executed from the master node succeeds]

[screenshot: the same request executed from inside a Pod of the same microservice returns 404]

Solution

Yes, as Mathew answered, you can indeed communicate between Pods in a Service in Kubernetes; the problem I was having was that Istio was blocking the requests between the Pods. That also explains why ping succeeded while the REST call failed: Istio's sidecar intercepts TCP traffic, so the 404 was coming from the Envoy proxy rather than the application, while ICMP pings bypass the proxy entirely.
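
One way to confirm that a Pod carries the injected sidecar is to list its containers (the Pod name is a placeholder):

kubectl get pod userservice-pod-1 -o jsonpath='{.spec.containers[*].name}'
# an injected Pod shows an extra istio-proxy container next to the app container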

Solution: Disabling Istio injection solved this problem for me. I then re-enabled it afterwards and load balancing continued to work. Hopefully it can help someone in the future.
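
For reference, a sketch of the two standard ways to disable injection (the namespace name is a placeholder):

# per workload: opt the Pod template out of sidecar injection
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"

# per namespace: remove the istio-injection label so newly created Pods are not injected
kubectl label namespace my-namespace istio-injection-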

See the answer here: Envoy Pod to Pod communication within a Service in K8
