How to call a service exposed by a Kubernetes cluster from another Kubernetes cluster in the same project


Problem description

I have two services, S1 in cluster K1 and S2 in cluster K2. They have different hardware requirements. Service S1 needs to talk to S2.

I don't want to expose a public IP for S2 for security reasons. Using NodePorts on K2's compute instances with network load balancing removes flexibility, since I would have to add or remove K2's compute instances in the target pool each time a node is added or removed in K2.

Is there something like a "service-selector" for automatically updating the target pool? If not, is there a better approach for this use case?

Recommended answer

I can think of a couple of ways to access services across multiple clusters connected to the same GCP private network:

  1. Bastion route into k2 for all of k2's services:

Find the SERVICE_CLUSTER_IP_RANGE for the k2 cluster. On GKE, it will be the servicesIpv4Cidr field in the output of cluster describe:

$ gcloud beta container clusters describe k2
...
servicesIpv4Cidr: 10.143.240.0/20
...
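When scripting this lookup, `gcloud` can emit the field directly via `--format='value(servicesIpv4Cidr)'`. The sketch below shows a `sed` fallback over captured describe output; the sample YAML is illustrative, not real cluster output:

```shell
# Pull servicesIpv4Cidr out of saved `gcloud container clusters describe`
# output. On a live cluster the same value comes from:
#   gcloud container clusters describe k2 --format='value(servicesIpv4Cidr)'
# The sample output below is illustrative.
describe_output='clusterIpv4Cidr: 10.140.0.0/14
servicesIpv4Cidr: 10.143.240.0/20'
cidr=$(printf '%s\n' "$describe_output" | sed -n 's/^servicesIpv4Cidr: //p')
echo "$cidr"   # prints 10.143.240.0/20
```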

Add an advanced routing rule to take traffic destined for that range and route it to a node in k2:

$ # "k2-services" is an arbitrary route name; routes create requires one
$ gcloud compute routes create k2-services --destination-range 10.143.240.0/20 --next-hop-instance k2-node-0

This will cause k2-node-0 to proxy requests from the private network for any of k2's services. This has the obvious downside of giving k2-node-0 extra work, but it is simple.
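The route captures every destination inside the service CIDR. As a quick local sanity check, a few lines of plain shell arithmetic (illustrative, no gcloud needed) can confirm whether a given ClusterIP falls inside 10.143.240.0/20 and will therefore be sent to k2-node-0:

```shell
# Check whether an IPv4 address falls inside a CIDR block using only
# shell arithmetic. Illustrates which destinations the new route matches.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}
in_cidr() {
  local ip net bits mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}
in_cidr 10.143.250.7 10.143.240.0/20 && echo inside || echo outside  # prints "inside"
```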

  2. Install k2's kube-proxy on all nodes in k1:

Take a look at the currently running kube-proxy on any node in k2:

$ ps aux | grep kube-proxy
... /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2

Copy k2's kubeconfig file to each node in k1 (say /var/lib/kube-proxy/kubeconfig-k2) and start a second kube-proxy on each node:

$ /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig-k2 --healthz-port=10247

Now, each node in k1 handles proxying to k2 locally. A little tougher to set up, but has better scaling properties.
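The fan-out in step 2 can be scripted. The sketch below only prints the gcloud commands it would run, so the plan can be reviewed before piping it to a shell; the node names are placeholders, and the paths and master address are carried over from the answer above:

```shell
# Emit, without executing, the commands that would copy k2's kubeconfig to
# each k1 node and start a second kube-proxy there. Node names are
# placeholders; in practice they come from `kubectl get nodes`.
emit_cmds() {
  for node in "$@"; do
    echo "gcloud compute scp /var/lib/kube-proxy/kubeconfig-k2 ${node}:/var/lib/kube-proxy/kubeconfig-k2"
    echo "gcloud compute ssh ${node} --command 'sudo /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig-k2 --healthz-port=10247 &'"
  done
}
emit_cmds k1-node-0 k1-node-1
```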

As you can see, neither solution is all that elegant. Discussions are happening about how this type of setup should ideally work in Kubernetes. You can take a look at the Cluster Federation proposal doc (specifically the Cross Cluster Service Discovery section), and join the discussion by opening up issues/sending PRs.

