How to call a service exposed by a Kubernetes cluster from another Kubernetes cluster in the same project

Views: 155
This article explains how to call a service exposed by a Kubernetes cluster from another Kubernetes cluster in the same project. We hope it serves as a useful reference for anyone facing the same problem.

Problem Description

I have two services: S1 in cluster K1 and S2 in cluster K2. They have different hardware requirements, and S1 needs to talk to S2.

I don't want to expose a public IP for S2 for security reasons. Using NodePorts on K2's compute instances together with network load balancing takes the flexibility away, because I would have to add or remove K2's compute instances from the target pool every time a node is added to or removed from K2.

Is there something like a "service-selector" for automatically updating the target pool? If not, is there a better approach for this use case?
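For illustration, this is the kind of manual target-pool maintenance I would like to avoid (a rough sketch; the pool name, instance names, and zone below are placeholders):

      # Whenever K2 gains a node, its compute instance has to be added to the
      # load balancer's target pool by hand:
      $ gcloud compute target-pools add-instances k2-pool \
          --instances k2-node-3 --instances-zone us-central1-a
      # ...and removed again when a node goes away:
      $ gcloud compute target-pools remove-instances k2-pool \
          --instances k2-node-1 --instances-zone us-central1-a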

Solution

I can think of a couple of ways to access services across multiple clusters connected to the same GCP private network:


  1. Bastion route into k2 for all of k2's services:

     Find the SERVICE_CLUSTER_IP_RANGE for the k2 cluster. On GKE, it will be the servicesIpv4Cidr field in the output of cluster describe:

      $ gcloud beta container clusters describe k2
      ...
      servicesIpv4Cidr: 10.143.240.0/20
      ...

     Add an advanced routing rule to take traffic destined for that range and route it to a node in k2 (the route needs a name; "k2-services" below is arbitrary):

      $ gcloud compute routes create k2-services --destination-range 10.143.240.0/20 --next-hop-instance k2-node-0

     This will cause k2-node-0 to proxy requests from the private network for any of k2's services. It has the obvious downside of giving k2-node-0 extra work, but it is simple. (See the sketch after this list for how a workload in k1 then calls a service in k2.)


  2. Install k2's kube-proxy on all nodes in k1.

     Take a look at the currently running kube-proxy on any node in k2:

      $ ps aux | grep kube-proxy
      ... /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2

     Copy k2's kubeconfig file to each node in k1 (say to /var/lib/kube-proxy/kubeconfig-k2) and start a second kube-proxy on each of those nodes, pointing at k2's master and using a non-default health-check port so it does not clash with the kube-proxy already running there:

      $ /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig-k2 --healthz-port=10247

     Now each node in k1 handles proxying to k2 locally. This is a little tougher to set up, but it has better scaling properties. (Again, see the sketch after this list.)
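With either option, a workload in k1 ends up reaching S2 through S2's cluster IP in k2. A rough sketch of the calling side (the Service name "s2" and the IP shown are assumptions; the IP just needs to fall inside k2's servicesIpv4Cidr):

      # On the k2 cluster: look up S2's cluster IP (the Service is assumed to be named "s2").
      $ kubectl get svc s2 -o jsonpath='{.spec.clusterIP}'
      # e.g. 10.143.241.17 -- some address inside k2's 10.143.240.0/20 range

      # From a pod or node in k1, call S2 on that IP. With option 1 the traffic is
      # routed to k2-node-0; with option 2 the local copy of k2's kube-proxy
      # forwards it to S2's endpoints directly.
      $ curl http://10.143.241.17/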

As you can see, neither solution is all that elegant. Discussions are ongoing about how this type of setup should ideally work in Kubernetes. You can take a look at the Cluster Federation proposal doc (specifically the Cross Cluster Service Discovery section) and join the discussion by opening issues or sending PRs.

This concludes the article on how to call a service exposed by a Kubernetes cluster from another Kubernetes cluster in the same project. We hope the answer above is helpful, and thank you for supporting IT屋!
