Unable to open Istio ingress-gateway for gRPC


Problem description

This question is about my inability to connect a gRPC client to a gRPC service hosted in Kubernetes (AWS EKS), with an Istio ingress gateway.

On the Kubernetes side: I have a container with a Go process listening on port 8081 for gRPC. The port is exposed at the container level. I define a Kubernetes Service and expose 8081. I define an Istio Gateway which selects istio: ingressgateway and opens port 8081 for gRPC. Finally, I define an Istio VirtualService with a route for anything on port 8081.

On the client side: I have a Go client which can send gRPC requests to the service.

  • It works fine when I kubectl port-forward -n mynamespace service/myservice 8081:8081 and call my client via client -url localhost:8081.
  • When I close the port-forward and call with client -url [redacted]-[redacted].us-west-2.elb.amazonaws.com:8081, my client fails to connect. (That URL is the output of kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' with :8081 appended.)
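
For context, the client's dial path is essentially the following (a minimal sketch rather than the real code; the -url flag matches the invocations above, and the generated proto stubs are omitted):

package main

import (
    "context"
    "flag"
    "log"
    "time"

    "google.golang.org/grpc"
)

func main() {
    url := flag.String("url", "localhost:8081", "host:port of the gRPC service")
    flag.Parse()

    // Plaintext dial, matching the GRPC (non-TLS) server definition below.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    conn, err := grpc.DialContext(ctx, *url, grpc.WithInsecure(), grpc.WithBlock())
    if err != nil {
        log.Fatalf("failed to connect to %s: %v", *url, err)
    }
    defer conn.Close()
    log.Printf("connected to %s", *url)
    // ...invoke the generated client stub on conn here...
}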

Logs:

  • I looked at the istio-system/istio-ingressgateway service logs. I do not see an attempted connection.
  • I do see the bookinfo connections I made earlier while going through the Istio bookinfo tutorial. That tutorial worked: I was able to open a browser and see the bookinfo product page, and the ingressgateway logs show "GET /productpage HTTP/1.1" 200. So the Istio ingress gateway works; I just don't know how to configure it for a new gRPC endpoint.
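
For reference, those gateway logs can be tailed by label (istio=ingressgateway comes from the ingress gateway Service's selector, shown further down):

kubectl logs -n istio-system -l istio=ingressgateway --tail=100 -f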

Istio ingress gateway

kubectl describe service -n istio-system istio-ingressgateway

outputs the following, which I suspect is the problem: port 8081 is not listed despite my efforts to open it. I'm puzzled by how many ports are opened by default; I didn't open them. (Comments on how to close ports I don't use would be welcome, but they aren't the reason for this question.)

Name:                     istio-ingressgateway
Namespace:                istio-system
Labels:                   [redacted]
Annotations:              [redacted]
Selector:                 app=istio-ingressgateway,istio=ingressgateway
Type:                     LoadBalancer
IP:                       [redacted]
LoadBalancer Ingress:     [redacted]
Port:                     status-port  15021/TCP
TargetPort:               15021/TCP
NodePort:                 status-port  31125/TCP
Endpoints:                192.168.101.136:15021
Port:                     http2  80/TCP
TargetPort:               8080/TCP
NodePort:                 http2  30717/TCP
Endpoints:                192.168.101.136:8080
Port:                     https  443/TCP
TargetPort:               8443/TCP
NodePort:                 https  31317/TCP
Endpoints:                192.168.101.136:8443
Port:                     tcp  31400/TCP
TargetPort:               31400/TCP
NodePort:                 tcp  31102/TCP
Endpoints:                192.168.101.136:31400
Port:                     tls  15443/TCP
TargetPort:               15443/TCP
NodePort:                 tls  30206/TCP
Endpoints:                192.168.101.136:15443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

So I think I did not properly open port 8081 for gRPC. What other logs or tests can I run to help identify where this is coming from?
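
For what it's worth, one generic check is istioctl's built-in configuration analyzer (a general validation pass over the namespace, not gRPC-specific):

istioctl analyze -n mynamespace
istioctl analyze -n istio-system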

Here is the relevant YAML:

Kubernetes Istio VirtualService: whose intent is to route anything on port 8081 to myservice

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
  - "*" 
  gateways:
  - myservice
  http:
  - match:
    - port: 8081
    route:
    - destination:
        host: myservice

Kubernetes Istio Gateway: whose intent is to open port 8081 for gRPC

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway 
  servers:
    - name: myservice-plaintext
      port:
        number: 8081
        name: grpc-svc-plaintext
        protocol: GRPC
      hosts:
      - "*"

Kubernetes Service: showing port 8081 is exposed at the service level, which I confirmed with the port-forward test mentioned earlier

apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  selector:
    app: myservice
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      name: grpc-svc-plaintext

Kubernetes Deployment: showing port 8081 is exposed at the container level, which I confirmed with the port-forward test mentioned earlier

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: [redacted]
        ports:
        - containerPort: 8081

Double-checking that DNS works on the client side:

getent hosts [redacted]-[redacted].us-west-2.elb.amazonaws.com

outputs three IPs; I'm assuming that's good.

[IP_1 redacted]  [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_2 redacted]  [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_3 redacted]  [redacted]-[redacted].us-west-2.elb.amazonaws.com

Checking the Istio ingressgateway's routes:

istioctl proxy-status istio-ingressgateway-[pod name]
istioctl proxy-config routes istio-ingressgateway-[pod name]

which return

Clusters Match
Listeners Match
Routes Match (RDS last loaded at Wed, 23 Sep 2020 13:59:41)

NOTE: This output only contains routes loaded via RDS.
NAME          DOMAINS     MATCH                  VIRTUAL SERVICE
http.8081     *           /*                     myservice.mynamespace
              *           /healthz/ready*        
              *           /stats/prometheus*

Port 8081 is routed to myservice.mynamespace, which seems good to me.
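
In hindsight, a useful companion check is to compare what Envoy listens on against what the LoadBalancer Service in front of it forwards (same pod-name placeholder as above):

istioctl proxy-config listeners istio-ingressgateway-[pod name]
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[*].port}'

If 8081 appears as an Envoy listener but not among the Service's ports, the Gateway resource did its job and it is the Service that never forwards the port, which is what UPDATE 1 below confirms.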

UPDATE 1: I am starting to understand that I can't open port 8081 using the default Istio ingress gateway. That Service does not expose the port, and I was assuming that creating a Gateway would update the Service "under the hood", but that's not the case. The external ports I can pick from are: 80, 443, 31400, 15443 and 15021, and I think my Gateway needs to rely only on those. I've updated my Gateway and VirtualService to use port 80, and the client then connects to the server just fine.
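
For reference, this is roughly what the working port-80 version of the two resources looks like (a sketch; the substantive changes are the Gateway port and an explicit destination port, the latter optional for a single-port Service but shown for clarity):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
    - name: myservice-plaintext
      port:
        number: 80            # a port the default ingressgateway Service exposes
        name: grpc-svc-plaintext
        protocol: GRPC
      hosts:
      - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
  - "*"
  gateways:
  - myservice
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: myservice
        port:
          number: 8081        # the Service port the Go process listens behind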

That means I have to differentiate between multiple services not by port (I obviously can't route from the same port to two services), but by SNI, and I'm unclear how to do that in gRPC. I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I have to route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
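
On the Go side, the host can at least be set without touching TLS: grpc-go's WithAuthority dial option overrides the HTTP/2 :authority pseudo-header, which an Istio VirtualService can match via its hosts field (a sketch; myservice.example.com is a made-up hostname):

package main

import (
    "log"

    "google.golang.org/grpc"
)

func main() {
    // Dial the shared gateway port, but present a per-service host in the
    // :authority pseudo-header so the gateway can route on it.
    conn, err := grpc.Dial(
        "[redacted]-[redacted].us-west-2.elb.amazonaws.com:80",
        grpc.WithInsecure(),
        grpc.WithAuthority("myservice.example.com"), // hypothetical host
    )
    if err != nil {
        log.Fatalf("dial: %v", err)
    }
    defer conn.Close()
}

The matching VirtualService would then list myservice.example.com in hosts instead of "*". Whether that resolves the TLS-termination concern above is a separate question.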

Recommended answer

I am starting to understand that I can't open port 8081 using the default Istio ingress gateway. That Service does not expose the port, and I was assuming that creating a Gateway would update the Service "under the hood", but that's not the case. The external ports I can pick from are: 80, 443, 31400, 15443 and 15021, and I think my Gateway needs to rely only on those. I've updated my Gateway and VirtualService to use port 80, and the client then connects to the server just fine.

I'm not sure how exactly you tried to add a custom port to the ingress gateway, but it's possible.

As far as I checked, it's possible to do in three ways; here are the options, with links to examples provided by @A_Suh, @Ryota and @peppered (an IstioOperator sketch follows the list):

  • Kubectl edit
  • Helm
  • Istio Operator
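
For the Istio Operator option, the custom port goes on the ingress gateway's Service definition. A minimal sketch (restating the default ports is commonly needed, since the override can replace the whole list, and details vary by Istio version):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          - name: grpc-plaintext   # the new port for the gRPC Gateway server
            port: 8081
            targetPort: 8081

Applied with istioctl install -f <file>, after which a Gateway server on 8081 has a Service port to ride on.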

Additional resources:

  • How to create custom istio ingress gateway controller?
  • How to configure ingress gateway in istio?

That means I have to differentiate between multiple services not by port (I obviously can't route from the same port to two services), but by SNI, and I'm unclear how to do that in gRPC. I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I have to route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.

I see you have already created a new question here, so let's just move there.

