How to terminate SSL at the ingress gateway in Istio?

Problem description

I am trying to experiment with SSL connections on the Istio ingress gateway.

From here (istio ssl gateway without termination), I assume that the Istio ingress gateway should terminate SSL by default.

I have installed Istio with the demo profile via istioctl, and I have also deployed my service, svc1.
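For reference, a minimal sketch of that install step; the exact command depends on the istioctl release:

$ istioctl install --set profile=demo          # newer istioctl releases
$ istioctl manifest apply --set profile=demo   # older (pre-1.6) istioctl releases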

Apart from these, below are my resources with the routing logic:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: vs-gateway
  namespace: myns
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs
  namespace: myns
spec:
  hosts:
  - "*"
  gateways:
  - vs-gateway
  http:
  - match:
    - uri:
        prefix: "/svc1/"
    rewrite:
      uri: "/"
    route:
    - destination:
        host: svc1
        port:
          number: 80

I found the gateway URL via this. For the experiment I had also enabled HTTP on the ingress gateway; that way, curl http://172.17.0.2:<http_node_port> works with a 200 response. Later I removed HTTP from the ingress gateway and kept only HTTPS (as HTTPS is my primary goal for what the ingress gateway receives).
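For context, this is roughly how the gateway address and node ports can be looked up on a NodePort-style setup (assuming the default istio-system installation; adjust names as needed):

$ export INGRESS_HOST=$(kubectl -n istio-system get pod -l istio=ingressgateway -o jsonpath='{.items[0].status.hostIP}')
$ export SECURE_INGRESS_PORT=$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
$ echo https://$INGRESS_HOST:$SECURE_INGRESS_PORT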

Then, trying curl with HTTPS against the gateway URL, I get a 503:

$ curl -ivk https://172.17.0.2:<https_node_port>/svc1/user
*   Trying 172.17.0.2...
* TCP_NODELAY set
* Connected to 172.17.0.2 (172.17.0.2) port 30278 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Unknown (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Client hello (1):
* TLSv1.3 (OUT), TLS Unknown, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=US; ST=CA; O=Abc; CN=example.com
*  start date: Dec 31 08:22:32 2019 GMT
*  expire date: Jan 30 08:22:32 2020 GMT
*  issuer: C=US; ST=CA; O=Abc; CN=example.com
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* Using Stream ID: 1 (easy handle 0x55c961626580)
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
> GET /svc1/user HTTP/2
> Host: 172.17.0.2:30278
> User-Agent: curl/7.58.0
> Accept: */*
> 
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS Unknown, Unknown (23):
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* TLSv1.3 (IN), TLS Unknown, Unknown (23):
< HTTP/2 503 
HTTP/2 503 
< content-length: 95
content-length: 95
< content-type: text/plain
content-type: text/plain
< date: Thu, 02 Jan 2020 08:13:49 GMT
date: Thu, 02 Jan 2020 08:13:49 GMT
< server: istio-envoy
server: istio-envoy

< 
* Connection #0 to host 172.17.0.2 left intact
upstream connect error or disconnect/reset before headers. reset reason: connection termination

I also enabled debug logging on the istio-proxy sidecar of the svc1 pod; one way to do this is sketched below, followed by the logs I got.
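Assuming a reasonably recent istioctl, the sidecar's Envoy log level can typically be raised like this (the pod name is hypothetical):

$ istioctl proxy-config log <svc1-pod>.myns --level debug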

[Envoy (Epoch 0)] [2020-01-02 06:53:19.392][28][debug][filter] [external/envoy/source/extensions/filters/listener/original_dst/original_dst.cc:18] original_dst: New connection accepted
[Envoy (Epoch 0)] [2020-01-02 06:53:19.392][28][debug][filter] [external/envoy/source/extensions/filters/listener/tls_inspector/tls_inspector.cc:72] tls inspector: new connection accepted
[Envoy (Epoch 0)] [2020-01-02 06:53:19.392][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:30] Called tcp filter: Filter
[Envoy (Epoch 0)] [2020-01-02 06:53:19.392][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:40] Called tcp filter: initializeReadFilterCallbacks
[Envoy (Epoch 0)] [2020-01-02 06:53:19.392][28][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:204] [C88] new tcp proxy session
[Envoy (Epoch 0)] [2020-01-02 06:53:19.392][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:133] [C88] Called tcp filter onNewConnection: remote 10.244.0.5:34148, local 10.244.0.16:3000
[Envoy (Epoch 0)] [2020-01-02 06:53:19.392][28][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:347] [C88] Creating connection to cluster inbound|80|serviceport|svc1.myns.svc.cluster.local
[Envoy (Epoch 0)] [2020-01-02 06:53:19.392][28][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:83] creating a new connection
[Envoy (Epoch 0)] [2020-01-02 06:53:19.392][28][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:364] [C89] connecting
[Envoy (Epoch 0)] [2020-01-02 06:53:19.392][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:711] [C89] connecting to 127.0.0.1:3000
[Envoy (Epoch 0)] [2020-01-02 06:53:19.393][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:720] [C89] connection in progress
[Envoy (Epoch 0)] [2020-01-02 06:53:19.393][28][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:109] queueing request due to no available connections
[Envoy (Epoch 0)] [2020-01-02 06:53:19.393][28][debug][conn_handler] [external/envoy/source/server/connection_handler_impl.cc:333] [C88] new connection
[Envoy (Epoch 0)] [2020-01-02 06:53:19.393][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:559] [C89] connected
[Envoy (Epoch 0)] [2020-01-02 06:53:19.393][28][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:285] [C89] assigning connection
[Envoy (Epoch 0)] [2020-01-02 06:53:19.393][28][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:541] TCP:onUpstreamEvent(), requestedServerName: 
[Envoy (Epoch 0)] [2020-01-02 06:53:19.393][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:143] Called tcp filter completeCheck: OK
[Envoy (Epoch 0)] [2020-01-02 06:53:19.395][28][debug][filter] [src/istio/control/client_context_base.cc:139] Report attributes: attributes {
  key: "connection.event"
  value {
    string_value: "open"
  }
}
attributes {
  key: "connection.id"
  value {
    string_value: "38a9b348-1730-4e0b-9664-fbbaeedd9215-88"
  }
[Envoy (Epoch 0)] [2020-01-02 06:53:19.395][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:100] [C88] Called tcp filter onRead bytes: 664
[Envoy (Epoch 0)] [2020-01-02 06:53:19.396][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:123] [C88] Called tcp filter onWrite bytes: 28
[Envoy (Epoch 0)] [2020-01-02 06:53:19.398][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:123] [C88] Called tcp filter onWrite bytes: 0
[Envoy (Epoch 0)] [2020-01-02 06:53:19.398][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:100] [C88] Called tcp filter onRead bytes: 34
[Envoy (Epoch 0)] [2020-01-02 06:53:19.398][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:527] [C88] remote close
[Envoy (Epoch 0)] [2020-01-02 06:53:19.398][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:193] [C88] closing socket: 0
[Envoy (Epoch 0)] [2020-01-02 06:53:19.398][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:174] [C88] Called tcp filter onEvent: 0 upstream 127.0.0.1:3000
[Envoy (Epoch 0)] [2020-01-02 06:53:19.398][28][debug][filter] [src/istio/control/client_context_base.cc:139] Report attributes: attributes {
  key: "connection.duration"
  value {
    duration_value {
      nanos: 6151000
    }
  }
}
attributes {
  key: "connection.event"
  value {
    string_value: "close"
  }
}
at
[Envoy (Epoch 0)] [2020-01-02 06:53:19.398][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:104] [C89] closing data_to_write=34 type=0
[Envoy (Epoch 0)] [2020-01-02 06:53:19.398][28][debug][conn_handler] [external/envoy/source/server/connection_handler_impl.cc:88] [C88] adding to cleanup list
[Envoy (Epoch 0)] [2020-01-02 06:53:19.398][28][debug][filter] [src/envoy/tcp/mixer/filter.cc:35] Called tcp filter : ~Filter
[Envoy (Epoch 0)] [2020-01-02 06:53:19.399][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:589] [C89] write flush complete
[Envoy (Epoch 0)] [2020-01-02 06:53:19.399][28][debug][connection] [external/envoy/source/common/network/connection_impl.cc:193] [C89] closing socket: 1
[Envoy (Epoch 0)] [2020-01-02 06:53:19.399][28][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:124] [C89] client disconnected
[Envoy (Epoch 0)] [2020-01-02 06:53:19.399][28][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:238] [C89] connection destroyed

From the logs, it seems like the ingress gateway is forwarding the SSL request to the svc (please correct me here if I am wrong).

So in the end, can anyone please help me get SSL terminated at the ingress gateway, and forward a plain HTTP request to the svc?

Recommended answer

Based on this github issue and this Istio documentation:

Named service ports: Service ports must be named. The port name key/value pairs must have the following syntax: name: <protocol>[-<suffix>]. See Protocol Selection for more details.

Based on the mockserver service:

ports:
    - name: serviceport

I would recommend changing it to http/https, as mentioned there and confirmed on github by a community member who had the same issue.
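As an illustration, a minimal sketch of what the Service could look like after the rename; the selector label is an assumption, while port 80 and target port 3000 come from the VirtualService and the sidecar logs above:

apiVersion: v1
kind: Service
metadata:
  name: svc1
  namespace: myns
spec:
  selector:
    app: svc1          # assumed pod label, adjust to your deployment
  ports:
  - name: http         # renamed from "serviceport" so Istio treats traffic on this port as HTTP
    port: 80
    targetPort: 3000   # the container port seen in the sidecar logs (127.0.0.1:3000)
    protocol: TCP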

Manual protocol selection

Protocols can be specified manually by naming the Service port name: <protocol>[-<suffix>]. The following protocols are supported:

  • grpc
  • grpc-web
  • http
  • http2
  • https
  • mongo
  • mysql*
  • redis*
  • tcp
  • tls
  • udp

*These protocols are disabled by default to avoid accidentally enabling experimental features. To enable them, configure the corresponding Pilot environment variables.
