Internal service requests in Istio

Problem description

I have managed to get going with Istio. I've been testing a lot of the fundamentals and have a basic cluster working nicely with HTTP and gRPC. However, I have a Service that needs to make an internal request to another service that isn't externally exposed.

As an example:

  1. A request comes in from the Istio gateway as HTTP
  2. My custom grpc-gateway handler proxies the request to the gRPC service
  3. The gateway responds to the user over HTTP

I have a Gateway and a VirtualService declared:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-ingress
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: my-grpc-gateway.default.svc.cluster.local
    corsPolicy:
      allowOrigin:
      - "*"
      allowMethods:
      - POST
      - GET
      - DELETE
      - PUT
      - OPTIONS
      allowCredentials: false
      allowHeaders:
      - Authorization
      maxAge: "24h"
  - match:
    - port: 30051
    route:
    - destination:
        host: api.default.svc.cluster.local
        port:
          number: 8443

And here is my Gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"
  - port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"
  - port:
      name: grpc
      number: 30051
      protocol: GRPC
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"

My proxy service is being provided with the coordinates of the gRPC server:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-proxy
  labels:
    app: prox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-proxy
  template:
    metadata:
      labels:
        app: rest-proxy
    spec:
      containers:
        - image: redacted/rest-proxy:latest
          name: rest-proxy
          ports:
            - containerPort: 80
          command: ["./rest-proxy"]
          args: ["-host", "0.0.0.0", "-port", "80", "-apipath", "$(API_SERVICE_HOST):$(API_SERVICE_PORT)"]
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: rest-proxy
  labels:
    app: rest-proxy
spec:
  ports:
  - name: http
    port: 80
  - name: grpc-port
    port: 8444
  selector:
    app: rest-proxy

Is this where a ServiceEntry resource comes into play? For now I just want to make sure my internal services can talk to each other; eventually I'll create a load balancer to handle proxying from the gateway to the API (as I scale out).

Any suggestions/guidance would be helpful!

Recommended answer

After much more digging I realized that my proxy service was binding to the port API_SERVICE_PORT, which was set to 8080. The gRPC service was listening on 8443, so the connection was never made.
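
In practical terms the fix is just to hand the proxy the gRPC port. API_SERVICE_HOST and API_SERVICE_PORT are presumably the environment variables Kubernetes injects for a Service named api, and the injected port resolved to 8080 rather than the gRPC port. A minimal sketch of the corrected container args, assuming the target is the api.default.svc.cluster.local service on port 8443 that the VirtualService above routes to:

# Sketch only: dial the gRPC port (8443) explicitly instead of relying on the
# injected API_SERVICE_PORT, which resolved to 8080 here.
command: ["./rest-proxy"]
args: ["-host", "0.0.0.0", "-port", "80",
       "-apipath", "api.default.svc.cluster.local:8443"]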

All internal services within the mesh should naturally talk to each other; it's only traffic entering the mesh through the ingress that needs explicit rules.
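
So a ServiceEntry isn't needed here; that resource is mainly for adding services outside the mesh to Istio's registry. For in-mesh traffic, a plain ClusterIP Service is enough for the rest-proxy to reach the gRPC backend by its cluster DNS name. A minimal sketch, where the api name matches the host used in the VirtualService above and the pod label is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: api              # matches api.default.svc.cluster.local used above
spec:
  selector:
    app: api             # assumed label on the gRPC server's pods
  ports:
  - name: grpc           # the "grpc" port-name prefix lets Istio detect the protocol
    port: 8443
    targetPort: 8443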
