Only 1 pod handles all requests in Kubernetes cluster


Problem description

Here is a manifest file for minikube Kubernetes, for a deployment and a service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: hello_hello
        imagePullPolicy: Never
        ports:
        - containerPort: 4001
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
  - port: 4001
    nodePort: 30036
    protocol: TCP
  type: NodePort

And a simple HTTP server written in Go:

package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    r := gin.Default()

    // Respond to GET /ping with a small JSON payload.
    r.GET("/ping", func(c *gin.Context) {
        c.JSON(200, gin.H{
            "message": "pong",
        })
    })

    // Listen on the containerPort declared in the Deployment (4001).
    server := &http.Server{
        Addr:    ":4001",
        Handler: r,
    }

    server.ListenAndServe()
}

When I make several requests to IP:30036/ping and then open the pods' logs, I can see that only 1 of the 3 pods handles all the requests. How can I make the other pods respond to requests as well?

Recommended answer

You are exposing the service using a NodePort, so there is no reverse proxy in place; you connect directly to your Pod(s). This is a good choice to start with. (Later you might want to use an Ingress.)

What you are seeing is that only one Pod handles your requests. You expect each request to be load balanced to a different pod. Your assumption is correct, but the load balancing does not happen at the HTTP request layer; it happens at the TCP connection layer.

So when you have a persistent TCP connection and re-use it, you will not experience the load balancing that you expect. Since establishing a TCP connection is rather expensive in terms of latency, an optimization is usually in place to avoid repeatedly opening new TCP connections: HTTP keep-alive.

Keep-alive is enabled by default in most frameworks and clients, and this is true for Go as well. Try s.SetKeepAlivesEnabled(false) and see if that fixes your issue. (Recommended for testing only!)
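For illustration, here is how that could look applied to the server from the question (a sketch only; the handler and port are the ones from the code above, and disabling keep-alive is meant to demonstrate the per-connection load balancing, not for production use):

package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    r := gin.Default()
    r.GET("/ping", func(c *gin.Context) {
        c.JSON(200, gin.H{"message": "pong"})
    })

    server := &http.Server{
        Addr:    ":4001",
        Handler: r,
    }

    // Disable HTTP keep-alive so every request is served over a fresh TCP
    // connection, letting the Service spread connections across the pods.
    // This trades latency for even distribution, so use it for testing only.
    server.SetKeepAlivesEnabled(false)

    server.ListenAndServe()
}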

You can also use multiple different clients, e.g. curl from the command line, or disable keep-alive in Postman.
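If you prefer to reproduce this in code rather than with curl or Postman, a small Go test client with keep-alive disabled could look like the sketch below (the URL host is a placeholder for your node IP, as in the question):

package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Keep-alive is disabled on the transport, so every request opens a new
    // TCP connection and the NodePort Service can pick a different pod each time.
    client := &http.Client{
        Transport: &http.Transport{DisableKeepAlives: true},
    }

    for i := 0; i < 10; i++ {
        resp, err := client.Get("http://IP:30036/ping") // replace IP with your node's address
        if err != nil {
            fmt.Println("request failed:", err)
            continue
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        fmt.Println(string(body))
    }
}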

