Sticky sessions on Kubernetes cluster

Currently, I'm trying to create a Kubernetes cluster on Google Cloud with two load balancers: one for backend (in Spring boot) and another for frontend (in Angular), where each service (load balancer) communicates with 2 replicas (pods). To achieve that, I created the following ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
spec:
  rules:
    - http:
        paths:
          - path: /rest/v1/*
            backend:
              serviceName: sample-backend
              servicePort: 8082
          - path: /*
            backend:
              serviceName: sample-frontend
              servicePort: 80

The ingress mentioned above lets the frontend app communicate with the REST API exposed by the backend app. However, I have to create sticky sessions, so that every user communicates with the same POD because of the authentication mechanism provided by the backend. To clarify, if one user authenticates in POD #1, the cookie will not be recognized by POD #2.
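To make the problem concrete, here is a toy simulation of the two routing policies. This is an illustration only, not the actual load-balancer or nginx implementation; the pod names and the round-robin policy are assumptions for the example.

```python
# Illustration only: why cookie affinity is needed.
# Without affinity, a plain round-robin balancer alternates pods, so a
# session established on one pod is lost on the next request. With a
# cookie, the first request pins a pod and later requests reuse it.
import itertools

pods = ["pod-1", "pod-2"]

def round_robin():
    """Without affinity: four requests alternate between the pods."""
    cycle = itertools.cycle(pods)
    return [next(cycle) for _ in range(4)]

def cookie_affinity(cookie_to_pod, cookie):
    """With affinity: the first request picks a pod; later requests reuse it."""
    if cookie not in cookie_to_pod:
        cookie_to_pod[cookie] = pods[len(cookie_to_pod) % len(pods)]
    return cookie_to_pod[cookie]

print(round_robin())          # alternates between pods: session state breaks
sessions = {}
print([cookie_affinity(sessions, "sample-cookie") for _ in range(4)])
```

With affinity, all four simulated requests land on the same pod, which is exactly what the authentication mechanism requires.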

To overcome this issue, I read that Nginx-ingress can handle this situation, and I installed it via Helm following the steps available here: https://kubernetes.github.io/ingress-nginx/deploy/.

You can find below the diagram for the architecture I'm trying to build:

With the following services (I will just paste one of the services, the other one is similar):

apiVersion: v1
kind: Service
metadata:
  name: sample-backend
spec:
  selector:
    app: sample
    tier: backend
  ports:
    - protocol: TCP
      port: 8082
      targetPort: 8082
  type: LoadBalancer

And I declared the following ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    nginx.ingress.kubernetes.io/session-cookie-name: sample-cookie
spec:
  rules:
    - http:
        paths:
          - path: /rest/v1/*
            backend:
              serviceName: sample-backend
              servicePort: 8082
          - path: /*
            backend:
              serviceName: sample-frontend
              servicePort: 80

After that, I ran kubectl apply -f sample-nginx-ingress.yaml to apply the ingress; it was created and its status is OK. However, when I access the URL that appears in the "Endpoints" column, the browser can't connect to it. Am I doing anything wrong?

Edit 1

** Updated service and ingress configurations **

After some help, I've managed to access the services through the Ingress Nginx. Below are the configurations:

Nginx Ingress

The paths shouldn't contain the "*", unlike the default Kubernetes ingress, where the "*" is mandatory to route the paths I want.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "sample-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"

spec:
  rules:
    - http:
        paths:
          - path: /rest/v1/
            backend:
              serviceName: sample-backend
              servicePort: 8082
          - path: /
            backend:
              serviceName: sample-frontend
              servicePort: 80
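The path-matching difference noted above can be sketched in a few lines. This is a simplified illustration under the assumption that the default (GCE-style) ingress matches "/rest/v1/*" as a wildcard pattern while nginx-ingress treats "/rest/v1/" as a path prefix; it is not the controllers' actual matching code.

```python
# Simplified sketch of the two matching styles (assumption: wildcard vs.
# prefix semantics; the real controllers implement this internally).
from fnmatch import fnmatch

url = "/rest/v1/users/42"

gce_style = fnmatch(url, "/rest/v1/*")      # wildcard pattern with "*"
nginx_style = url.startswith("/rest/v1/")   # plain prefix match, no "*"

print(gce_style, nginx_style)               # both match this request
```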

Services

Also, the services shouldn't be of type "LoadBalancer" but "ClusterIP" as below:

apiVersion: v1
kind: Service
metadata:
  name: sample-backend
spec:
  selector:
    app: sample
    tier: backend
  ports:
    - protocol: TCP
      port: 8082
      targetPort: 8082
  type: ClusterIP

However, I still can't achieve sticky sessions in my Kubernetes Cluster, since I'm still getting 403s and even the cookie name is not being set, so I guess the annotations are not working as expected.

Solution

I looked into this matter and I have found a solution to your issue.

To achieve sticky session for both paths you will need two definitions of ingress.

I created example configuration to show you the whole process:

Steps to reproduce:

  • Apply Ingress definitions
  • Create deployments
  • Create services
  • Create Ingresses
  • Test

I assume that the cluster is provisioned and is working correctly.

Apply Ingress definitions

Follow this Ingress link to check whether there are any prerequisites needed before installing the Ingress controller on your infrastructure.

Run the command below to provide all the mandatory prerequisites:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

Run the command below to apply the generic configuration that creates the controller's service:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml

Create deployments

Below are 2 example deployments that will respond to the Ingress traffic through their respective services:

hello.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      version: 1.0.0
  replicas: 5
  template:
    metadata:
      labels:
        app: hello
        version: 1.0.0
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:1.0"
        env:
        - name: "PORT"
          value: "50001"

Apply this first deployment configuration by invoking command:

$ kubectl apply -f hello.yaml

goodbye.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: goodbye
spec:
  selector:
    matchLabels:
      app: goodbye
      version: 2.0.0
  replicas: 5
  template:
    metadata:
      labels:
        app: goodbye
        version: 2.0.0
    spec:
      containers:
      - name: goodbye 
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"

Apply this second deployment configuration by invoking command:

$ kubectl apply -f goodbye.yaml

Check whether the deployments created their pods correctly:

$ kubectl get deployments

It should show something like this:

NAME      READY   UP-TO-DATE   AVAILABLE   AGE
goodbye   5/5     5            5           2m19s
hello     5/5     5            5           4m57s

Create services

To connect to the pods created earlier, you will need to create services. Each service will be assigned to one deployment. Below are 2 services to accomplish that:

hello-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
    version: 1.0.0
  ports:
  - name: hello-port
    protocol: TCP
    port: 50001
    targetPort: 50001

Apply first service configuration by invoking command:

$ kubectl apply -f hello-service.yaml

goodbye-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: goodbye-service
spec:
  type: NodePort
  selector:
    app: goodbye
    version: 2.0.0
  ports:
  - name: goodbye-port
    protocol: TCP
    port: 50001
    targetPort: 50001

Apply second service configuration by invoking command:

$ kubectl apply -f goodbye-service.yaml

Note that both configurations use type: NodePort.

Check if services were created successfully:

$ kubectl get services

Output should look like this:

NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
goodbye-service   NodePort    10.0.5.131   <none>        50001:32210/TCP   3s
hello-service     NodePort    10.0.8.13    <none>        50001:32118/TCP   8s

Create Ingresses

To achieve sticky sessions for both paths you will need to create 2 Ingress definitions, since the affinity annotations apply per Ingress resource.

Definitions are provided below:

hello-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "hello-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port

goodbye-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: goodbye-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "goodbye-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /v2/
        backend:
          serviceName: goodbye-service
          servicePort: goodbye-port

Please change DOMAIN.NAME in both ingresses to a domain appropriate to your case. I would advise looking at this Ingress sticky-session link. Both Ingresses are configured for HTTP-only traffic.

Apply both of them invoking command:

$ kubectl apply -f hello-ingress.yaml

$ kubectl apply -f goodbye-ingress.yaml

Check if both configurations were applied:

$ kubectl get ingress

Output should be something like this:

NAME              HOSTS        ADDRESS          PORTS   AGE
goodbye-ingress   DOMAIN.NAME   IP_ADDRESS      80      26m
hello-ingress     DOMAIN.NAME   IP_ADDRESS      80      26m

Test

Open your browser and go to http://DOMAIN.NAME. The output should look like this:

Hello, world!
Version: 1.0.0
Hostname: hello-549db57dfd-4h8fb

Hostname: hello-549db57dfd-4h8fb is the name of the pod. Refresh the page a couple of times; the hostname should stay the same.

To check that the other route is working, go to http://DOMAIN.NAME/v2/. The output should look like this:

Hello, world!
Version: 2.0.0
Hostname: goodbye-7b5798f754-pbkbg

Hostname: goodbye-7b5798f754-pbkbg is the name of the pod. Refresh the page a couple of times; the hostname should stay the same.

To verify that the cookies are not changing, open the developer tools (usually F12) and navigate to the cookies section. Reload the page to confirm the cookie values stay the same.
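The affinity cookie can also be inspected programmatically by parsing the Set-Cookie response header. The header value below is made up for illustration (the real value is an opaque hash set by the ingress controller); the point is that the cookie name and Max-Age should match the session-cookie-name and session-cookie-max-age annotations.

```python
# Sketch: parse a hypothetical Set-Cookie header from the ingress and check
# it against the affinity annotations. The cookie value "a9f3c2d1" is an
# invented placeholder; inspect the real header with your browser or curl.
from http.cookies import SimpleCookie

set_cookie = "hello-cookie=a9f3c2d1; Path=/; Max-Age=172800; HttpOnly"

jar = SimpleCookie()
jar.load(set_cookie)

morsel = jar["hello-cookie"]
print(morsel.key)          # name from the session-cookie-name annotation
print(morsel["max-age"])   # should equal the session-cookie-max-age value
```

If the cookie is missing or its name differs from the annotation, the affinity annotations are likely not being picked up by the controller.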
