Istio DestinationRule subset label not found on matching host

Problem description

I'm trying to configure an Istio VirtualService / DestinationRule so that a gRPC call to the service from a pod labeled datacenter=chi5 is routed to a gRPC server on a pod labeled datacenter=chi5.

I have Istio 1.4 installed on a cluster running Kubernetes 1.15.

A route is not being created in the istio-sidecar Envoy config for the chi5 subset, and traffic is being routed round-robin between the service endpoints regardless of pod label.

Kiali is reporting an error in the DestinationRule config: "this subset's labels are not found in any matching host".

Do I misunderstand the functionality of these Istio traffic-management objects, or is there an error in my configuration?

I believe my pods are correctly labeled:

$ (dev) kubectl get pods -n istio-demo --show-labels
NAME                            READY   STATUS    RESTARTS   AGE    LABELS
ticketclient-586c69f77d-wkj5d   2/2     Running   0          158m   app=ticketclient,datacenter=chi6,pod-template-hash=586c69f77d,run=client-service,security.istio.io/tlsMode=istio
ticketserver-7654cb5f88-bqnqb   2/2     Running   0          158m   app=ticketserver,datacenter=chi5,pod-template-hash=7654cb5f88,run=ticket-service,security.istio.io/tlsMode=istio
ticketserver-7654cb5f88-pms25   2/2     Running   0          158m   app=ticketserver,datacenter=chi6,pod-template-hash=7654cb5f88,run=ticket-service,security.istio.io/tlsMode=istio

The port name on my k8s Service object is correctly prefixed with the gRPC protocol:

$ (dev) kubectl describe service -n istio-demo ticket-service
Name:              ticket-service
Namespace:         istio-demo
Labels:            app=ticketserver
Annotations:       <none>
Selector:          run=ticket-service
Type:              ClusterIP
IP:                10.234.14.53
Port:              grpc-ticket  10000/TCP
TargetPort:        6001/TCP
Endpoints:         10.37.128.37:6001,10.44.0.0:6001
Session Affinity:  None
Events:            <none>
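
For reference, a minimal Service manifest producing that describe output would look roughly like this (a reconstruction, not taken from the poster's actual files):

apiVersion: v1
kind: Service
metadata:
  name: ticket-service
  namespace: istio-demo
  labels:
    app: ticketserver
spec:
  type: ClusterIP
  selector:
    run: ticket-service
  ports:
  - name: grpc-ticket   # the "grpc-" prefix is what tells Istio to treat this port as gRPC/HTTP2
    port: 10000
    targetPort: 6001
    protocol: TCP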

I've deployed the following Istio objects to Kubernetes:

Name:         ticket-destinationrule
Namespace:    istio-demo
Labels:       app=ticketserver
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         DestinationRule
Spec:
  Host:  ticket-service.istio-demo.svc.cluster.local
  Subsets:
    Labels:
      Datacenter:  chi5
    Name:          chi5
    Labels:
      Datacenter:  chi6
    Name:          chi6
Events:            <none>
---
Name:         ticket-virtualservice
Namespace:    istio-demo
Labels:       app=ticketserver
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         VirtualService
Spec:
  Hosts:
    ticket-service.istio-demo.svc.cluster.local
  Http:
    Match:
      Name:  ticket-chi5
      Port:  10000
      Source Labels:
        Datacenter:  chi5
    Route:
      Destination:
        Host:    ticket-service.istio-demo.svc.cluster.local
        Subset:  chi5
Events:          <none>
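
Rendered back into apply-able YAML, those two objects would look something like this (reconstructed from the describe output above; kubectl describe capitalizes field names, so Datacenter is really the datacenter label key and Source Labels is sourceLabels):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ticket-destinationrule
  namespace: istio-demo
  labels:
    app: ticketserver
spec:
  host: ticket-service.istio-demo.svc.cluster.local
  subsets:
  - name: chi5
    labels:
      datacenter: chi5
  - name: chi6
    labels:
      datacenter: chi6
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ticket-virtualservice
  namespace: istio-demo
  labels:
    app: ticketserver
spec:
  hosts:
  - ticket-service.istio-demo.svc.cluster.local
  http:
  - name: ticket-chi5
    match:
    - port: 10000
      sourceLabels:
        datacenter: chi5
    route:
    - destination:
        host: ticket-service.istio-demo.svc.cluster.local
        subset: chi5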

Answer

I reproduced your issue with 2 nginx pods.

What you want can be achieved with sourceLabels. Check the example below; I think it explains everything.

To start, I made 2 ubuntu pods: one with the label app: ubuntu and one without any labels.

apiVersion: v1
kind: Pod
metadata:
  name: ubu2
  labels:
    app: ubuntu
spec:
  containers:
  - name: ubu2
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]


apiVersion: v1
kind: Pod
metadata:
  name: ubu1
spec:
  containers:
  - name: ubu1
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]

Then 2 deployments and a service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  selector:
    matchLabels:
      run: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx1
        app: frontend
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
spec:
  selector:
    matchLabels:
      run: nginx2
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx2
        app: frontend
    spec:
      containers:
      - name: nginx2
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]


apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: frontend


The next piece is the virtual service with the mesh gateway, so it applies only to traffic inside the mesh. It has 2 matches: one with sourceLabels, which routes requests from pods carrying the app: ubuntu label to the nginx pod in the v1 subset, and a default route which goes to the nginx pod in the v2 subset.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  gateways:
  - mesh
  hosts:
  - nginx.default.svc.cluster.local
  http:
  - name: match-myuid
    match:
    - sourceLabels:
        app: ubuntu
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
        subset: v1
  - name: default
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
        subset: v2


And the last thing is the DestinationRule, which takes the subsets from the virtual service and maps each one to the proper nginx pod by its run: nginx1 or run: nginx2 label.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginxdest
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      run: nginx1
  - name: v2
    labels:
      run: nginx2


kubectl get pods --show-labels

NAME                      READY   STATUS    RESTARTS   AGE     LABELS
nginx1-5c5b84567c-tvtzm   2/2     Running   0          23m     app=frontend,run=nginx1,security.istio.io/tlsMode=istio
nginx2-5d95c8b96-6m9zb    2/2     Running   0          23m     app=frontend,run=nginx2,security.istio.io/tlsMode=istio
ubu1                      2/2     Running   4          3h19m   security.istio.io/tlsMode=istio
ubu2                      2/2     Running   2          10m     app=ubuntu,security.istio.io/tlsMode=istio
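
To check whether the subsets actually made it into the sidecar's Envoy config, istioctl can dump it; something along these lines should list one outbound cluster per subset (flag names from istioctl of that era, so treat this as a sketch):

istioctl proxy-config cluster ubu2 --fqdn nginx.default.svc.cluster.local

SERVICE FQDN                        PORT     SUBSET     DIRECTION     TYPE
nginx.default.svc.cluster.local     80       -          outbound      EDS
nginx.default.svc.cluster.local     80       v1         outbound      EDS
nginx.default.svc.cluster.local     80       v2         outbound      EDS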


Result from the ubuntu pod with the app: ubuntu label:

curl nginx/
Hello nginx1


And from the ubuntu pod without the label:

curl nginx/
Hello nginx2
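
Those curl calls are executed from inside the ubuntu pods, e.g. (container name from the pod specs above):

kubectl exec -it ubu2 -c ubu2 -- curl nginx/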


Let me know if that helps.
