OpenShift service with sessionAffinity forwards traffic to two pods

Problem description

OpenShift Container Platform 3.11

Assume a setup with one client pod and three identical server pods in the same namespace. The server pods are reachable via the following service:

  apiVersion: v1
  kind: Service
  metadata:
    name: server
  spec:
    ports:
    - name: "8200"
      port: 8200
      targetPort: 8200
    selector:
      test.service: server
    sessionAffinity: ClientIP
    sessionAffinityConfig:
      clientIP:
        timeoutSeconds: 10800 # default

The sessionAffinity: ClientIP setting states that as long as the client has the same IP, its requests are forwarded to the same server pod (except when timeoutSeconds is exceeded). This setup worked as expected for months, until suddenly the requests were distributed between two server pods. Restarting the client pod temporarily solved the problem, and requests were forwarded to only one server pod for some time. However, after a few days the same problem occurred again.

My question: Is there anything regarding OpenShift services and sessionAffinity: ClientIP that explains why requests from the same client with an unchanged IP might "suddenly" be distributed between two server pods?

Some additional context:

The client pod receives a session token (not a cookie) when it connects to a server pod. The session token is cached inside the server pod but is not shared between server pods. Therefore, when the client connects to a different server, it receives a permission-denied response for the session token and then requests a new one. As long as the client's requests are forwarded to the same server pod and the server only changes occasionally (e.g. because the first server crashed), the setup above works fine. However, if the client's requests are distributed between two or more servers, the session token becomes invalid on every second or third request.
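
Purely for illustration, here is a minimal Go sketch of the client behavior described above. The /session and /data paths, the X-Session-Token header, and the use of HTTP 403 for a rejected token are assumptions for the sketch, not details from the original setup:

  package main

  import (
      "fmt"
      "io"
      "net/http"
  )

  // fetchToken is a hypothetical helper that asks whichever server pod
  // answers for a fresh session token.
  func fetchToken(base string) (string, error) {
      resp, err := http.Post(base+"/session", "application/json", nil)
      if err != nil {
          return "", err
      }
      defer resp.Body.Close()
      b, err := io.ReadAll(resp.Body)
      return string(b), err
  }

  // doRequest sends one authenticated request. If the answering pod
  // rejects the cached token (assumed: HTTP 403), it fetches a new token
  // and retries once. With broken affinity this refresh happens on every
  // second or third request.
  func doRequest(base, token string) (string, int, error) {
      for attempt := 0; attempt < 2; attempt++ {
          req, err := http.NewRequest(http.MethodGet, base+"/data", nil)
          if err != nil {
              return token, 0, err
          }
          req.Header.Set("X-Session-Token", token)
          resp, err := http.DefaultClient.Do(req)
          if err != nil {
              return token, 0, err
          }
          resp.Body.Close()
          if resp.StatusCode != http.StatusForbidden {
              return token, resp.StatusCode, nil
          }
          // A different server pod answered; its cache does not contain
          // our token, so request a new one and retry.
          if token, err = fetchToken(base); err != nil {
              return token, 0, err
          }
      }
      return token, http.StatusForbidden, nil
  }

  func main() {
      token, err := fetchToken("http://server:8200")
      if err != nil {
          panic(err)
      }
      for i := 0; i < 5; i++ {
          var status int
          token, status, err = doRequest("http://server:8200", token)
          fmt.Println(i, status, err)
      }
  }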

Answer

Looking at the Kubernetes proxysocket source, we assume that a long connection time (above 250 ms) triggers the selection of a new endpoint.
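
For context, the mechanism the answer refers to can be paraphrased as the following self-contained Go sketch of the dial-and-retry loop in pkg/proxy/userspace/proxysocket.go (Kubernetes 1.11 line). The type names are simplified stand-ins, so treat this as an illustration of the mechanism rather than the exact source:

  package proxysketch

  import (
      "fmt"
      "net"
      "time"
  )

  // Minimal stand-ins for the real kube-proxy types, just enough to
  // make the paraphrase self-contained.
  type servicePortName string

  type loadBalancer interface {
      // NextEndpoint returns "host:port" for this client. With ClientIP
      // affinity and reset == false it returns the remembered endpoint.
      NextEndpoint(svc servicePortName, src net.Addr, reset bool) (string, error)
  }

  // The userspace proxy dials with escalating timeouts; the first
  // attempt gets only 250 ms.
  var endpointDialTimeouts = []time.Duration{
      250 * time.Millisecond, 500 * time.Millisecond,
      1 * time.Second, 2 * time.Second,
  }

  // tryConnectEndpoints paraphrases TryConnectEndpoints: a dial that
  // fails or exceeds the current timeout sets sessionAffinityReset, so
  // the next attempt may select a different endpoint even though the
  // client IP never changed.
  func tryConnectEndpoints(svc servicePortName, src net.Addr,
      protocol string, lb loadBalancer) (net.Conn, error) {
      sessionAffinityReset := false
      for _, dialTimeout := range endpointDialTimeouts {
          endpoint, err := lb.NextEndpoint(svc, src, sessionAffinityReset)
          if err != nil {
              return nil, err
          }
          conn, err := net.DialTimeout(protocol, endpoint, dialTimeout)
          if err != nil {
              // Slow or failed dial: drop the affinity record and let
              // the next iteration pick a new endpoint.
              sessionAffinityReset = true
              continue
          }
          return conn, nil
      }
      return nil, fmt.Errorf("failed to connect to an endpoint")
  }

One slow dial is thus enough to move the affinity entry to a different pod, even though the client IP stays the same.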

Instead of distributing client connections between the servers via an OpenShift service, we now use an additional nginx pod between client and servers.
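
The answer does not include the nginx configuration, so the following is only a minimal sketch of such a proxy layer, written as a ConfigMap in the style of the manifest above. The pod addresses (server-0.server and so on) assume the server pods run as a StatefulSet behind a headless service, and stickiness comes from hashing the client address at the TCP (stream) level, which keeps the sketch protocol-agnostic:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: nginx-proxy
  data:
    nginx.conf: |
      events {}
      stream {
        upstream servers {
          hash $remote_addr consistent;  # sticky per client IP
          server server-0.server:8200;
          server server-1.server:8200;
          server server-2.server:8200;
        }
        server {
          listen 8200;
          proxy_pass servers;
        }
      }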
