Exposing multiple TCP/UDP services using a single LoadBalancer on K8s


Question

Trying to figure out how to expose multiple TCP/UDP services using a single LoadBalancer on Kubernetes. Let's say the services are ftpsrv1.com and ftpsrv2.com each serving at port 21.

Here are the options that I can think of and their limitations:

  • One LB per svc: too expensive.
  • NodePort: want to use a port outside the 30000-32767 range.
  • K8s Ingress: does not support TCP or UDP services as of now.
  • Using the NGINX Ingress controller: which again would be a one-to-one mapping.
  • Found this custom implementation: but it doesn't seem to be maintained; the last update was almost a year ago.

Any input would be greatly appreciated.

Answer

It's actually possible to do it using NGINX Ingress.

Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY].
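
For illustration, a hypothetical entry that exposes external port 9000 and forwards it to port 8080 of a service named example-service in the default namespace would look like this (the service name and ports are made up for the example):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  # external port 9000 -> port 8080 of example-service in namespace default
  "9000": "default/example-service:8080"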

This guide describes how it can be achieved using minikube, but doing this on an on-premises Kubernetes cluster is different and requires a few more steps.

There is a lack of documentation describing how it can be done on a non-minikube system, which is why I decided to go through all the steps here. This guide assumes you have a fresh cluster with no NGINX Ingress installed.

I'm using a GKE cluster and all commands are run from my Linux workstation. It can also be done on a bare-metal K8s cluster.

Create the example application and service

Here we are going to create an application and its service, which we will later expose using our ingress.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: default
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis
        imagePullPolicy: Always
        name: redis
        ports:
        - containerPort: 6379
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: default
spec:
  selector:
    app: redis
  type: ClusterIP
  ports:
    - name: tcp-port
      port: 6379
      targetPort: 6379
      protocol: TCP
---      
apiVersion: v1
kind: Service
metadata:
  name: redis-service2
  namespace: default
spec:
  selector:
    app: redis
  type: ClusterIP
  ports:
    - name: tcp-port
      port: 6380
      targetPort: 6379
      protocol: TCP      

Notice that we are creating 2 different services for the same application. This is only to serve as a proof of concept; I want to show later that many ports can be mapped using only one Ingress.
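
Save the manifest above to a file (the filename redis-deployment.yaml below is arbitrary), apply it, and confirm both services were created:

$ kubectl apply -f redis-deployment.yaml
$ kubectl get svc -n default redis-service redis-service2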

Install NGINX Ingress using Helm:

Install Helm 3:

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
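
You can confirm the installation worked (expect a v3.x version string):

$ helm version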

Add NGINX Ingress repo:

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

Install NGINX Ingress in the kube-system namespace:

$ helm install -n kube-system ingress-nginx ingress-nginx/ingress-nginx
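
Before continuing, check that the controller pod is up. The label selector below is the one used by recent versions of the ingress-nginx chart and may vary slightly between chart versions:

$ kubectl get pods -n kube-system -l app.kubernetes.io/name=ingress-nginx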

Prepare our new NGINX Ingress Controller Deployment

We have to add the following lines under spec.template.spec.containers.args:

        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services

So we have to edit the Deployment using the following command:

$ kubectl edit deployments -n kube-system ingress-nginx-controller

And make it look like this:

...
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=kube-system/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=kube-system/ingress-nginx-controller
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
...
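
If you prefer not to edit the Deployment interactively, the same two arguments can be appended with a JSON patch instead. This is only a sketch and assumes the controller is the first container in the pod spec:

$ kubectl patch deployment ingress-nginx-controller -n kube-system --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services"},{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--udp-services-configmap=$(POD_NAMESPACE)/udp-services"}]'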

Create the tcp/udp services ConfigMaps

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: kube-system
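
Save both ConfigMaps to a file (the name tcp-udp-configmaps.yaml is just an example) and create them:

$ kubectl apply -f tcp-udp-configmaps.yaml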

Since these ConfigMaps are centralized and may contain other configurations, it is best if we only patch them rather than completely overwrite them every time we add a service:

$ kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6379":"default/redis-service:6379"}}'

$ kubectl patch configmap tcp-services -n kube-system --patch '{"data":{"6380":"default/redis-service2:6380"}}'

Where:

  • 6379 : the port your service should listen on from outside the cluster
  • default : the namespace that your service is installed in
  • redis-service : the name of the service
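
UDP services follow the same pattern, only against the udp-services ConfigMap. For example, a hypothetical DNS service listening on UDP port 53 in the default namespace could be added like this (the service name is made up):

$ kubectl patch configmap udp-services -n kube-system --patch '{"data":{"53":"default/my-dns-service:53"}}'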

We can verify that our resource was patched with the following command:

$ kubectl get configmap tcp-services -n kube-system -o yaml

apiVersion: v1
data:
  "6379": default/redis-service:6379
  "6380": default/redis-service2:6380
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"tcp-services","namespace":"kube-system"}}
  creationTimestamp: "2020-04-27T14:40:41Z"
  name: tcp-services
  namespace: kube-system
  resourceVersion: "7437"
  selfLink: /api/v1/namespaces/kube-system/configmaps/tcp-services
  uid: 11b01605-8895-11ea-b40b-42010a9a0050

The only thing you need to validate is that there is a value under the data property that looks like this:

  "6379": default/redis-service:6379
  "6380": default/redis-service2:6380

Add ports to the NGINX Ingress Controller Deployment

We need to patch our NGINX Ingress Controller so that it listens on ports 6379/6380 and can route traffic to your services.

spec:
  template:
    spec:
      containers:
      - name: controller
        ports:
         - containerPort: 6379
           hostPort: 6379
         - containerPort: 6380
           hostPort: 6380 

Create a file called nginx-ingress-controller-patch.yaml and paste the contents above.

Next apply the changes with the following command:

$ kubectl patch deployment ingress-nginx-controller -n kube-system --patch "$(cat nginx-ingress-controller-patch.yaml)"
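
You can wait for the controller pods to be recreated with the new ports before moving on:

$ kubectl rollout status deployment ingress-nginx-controller -n kube-system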

Add ports to the NGINX Ingress Controller Service

Unlike the solution presented for minikube, we have to patch our NGINX Ingress Controller Service, as it is responsible for exposing these ports.

spec:
  ports:
  - nodePort: 31100
    port: 6379
    name: redis
  - nodePort: 31101
    port: 6380
    name: redis2

Create a file called nginx-ingress-svc-controller-patch.yaml and paste the contents above.

Next apply the changes with the following command:

$ kubectl patch service ingress-nginx-controller -n kube-system --patch "$(cat nginx-ingress-svc-controller-patch.yaml)"

Check our Service

$ kubectl get service -n kube-system ingress-nginx-controller
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                                    AGE
ingress-nginx-controller   LoadBalancer   10.15.251.203   34.89.108.48   6379:31100/TCP,6380:31101/TCP,80:30752/TCP,443:30268/TCP   38m

Notice that our ingress-nginx-controller is listening on ports 6379/6380.
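
If you need just the external IP for the tests below, it can also be read directly from the Service status (34.89.108.48 is the IP of my test cluster; yours will differ):

$ kubectl get service -n kube-system ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'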

Test that you can reach your service with telnet via the following command:

$ telnet 34.89.108.48 6379

You should see the following output:

Trying 34.89.108.48...
Connected to 34.89.108.48.
Escape character is '^]'.

To exit telnet, press the Ctrl key and ] at the same time, then type quit and press Enter.

We can also test port 6380:

$ telnet 34.89.108.48 6380
Trying 34.89.108.48...
Connected to 34.89.108.48.
Escape character is '^]'.

If you were not able to connect, please review your steps above.
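
Since the example backend is Redis, you can also do an end-to-end check with redis-cli if you have it installed (replace the IP with your own external IP). A healthy connection should answer PONG:

$ redis-cli -h 34.89.108.48 -p 6379 ping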
