Does kubectl port-forward ignore LoadBalancer services?

Question

My Environment: Mac dev machine with latest Minikube/Docker

I built (locally) a simple Docker image with a simple Django REST API "hello world". I'm running a deployment with 3 replicas. This is my yaml file for defining it:

apiVersion: v1
kind: Service
metadata:
  name: myproj-app-service
  labels:
    app: myproj-be
spec:
  type: LoadBalancer
  ports:
    - port: 8000
  selector:
    app: myproj-be
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproj-app-deployment
  labels:
    app: myproj-be
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myproj-be
  template:
    metadata:
      labels:
        app: myproj-be
    spec:
      containers:
        - name: myproj-app-server
          image: myproj-app-server:4
          ports:
            - containerPort: 8000
          env:
            - name: DATABASE_URL
              value: postgres://myname:@10.0.2.2:5432/myproj2
            - name: REDIS_URL
              value: redis://10.0.2.2:6379/1

When I apply this yaml it generates things correctly: one deployment, one service, and three pods.
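
As a minimal sketch, this is roughly how the manifest was applied and the resulting objects listed (assuming it is saved as myproj.yaml, a hypothetical file name):

# apply the manifest and inspect what it created
kubectl apply -f myproj.yaml
kubectl get deployments
kubectl get services
kubectl get pods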

Deployments:

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
myproj-app-deployment    3/3     3            3           79m

Services:

NAME                 TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes           ClusterIP      10.96.0.1     <none>        443/TCP          83m
myproj-app-service   LoadBalancer   10.96.91.44   <pending>     8000:31559/TCP   79m

Pods:

NAME                                      READY   STATUS    RESTARTS   AGE
myproj-app-deployment-77664b5557-97wkx    1/1     Running   0          48m
myproj-app-deployment-77664b5557-ks7kf    1/1     Running   0          49m
myproj-app-deployment-77664b5557-v9889    1/1     Running   0          49m

The interesting thing is that when I SSH into Minikube and hit the service with curl 10.96.91.44:8000, it respects the LoadBalancer type of the service and rotates between all three pods as I hit the endpoint again and again. I can see that in the returned results, which I have made sure include the HOSTNAME of the pod.
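
A minimal sketch of that in-cluster check, assuming the ClusterIP shown above and the minikube CLI:

# open a shell inside the Minikube VM
minikube ssh
# hit the service ClusterIP repeatedly; the HOSTNAME in each response
# should rotate across the three pods
for i in 1 2 3 4 5 6; do curl -s 10.96.91.44:8000; echo; done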

However, when I try to access the service from my host Mac -- using kubectl port-forward service/myproj-app-service 8000:8000 -- every time I hit the endpoint, the same pod responds. It doesn't load balance. I can see that clearly when I kubectl logs -f <pod> all three pods: only one of them handles the hits, while the other two sit idle...
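
For reference, a sketch of that host-side check (the commands are the ones mentioned above; run each in its own terminal):

# terminal 1: forward local port 8000 to the service
kubectl port-forward service/myproj-app-service 8000:8000
# terminal 2: hit the forwarded port from the Mac
curl -s localhost:8000
# terminal 3: follow the logs of each pod to see which one answers
kubectl logs -f myproj-app-deployment-77664b5557-97wkx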

Is this a kubectl port-forward limitation or issue? Or am I missing something bigger here?

Answer

The reason was that my pods were randomly in a crashing state due to Python *.pyc files that were left in the container. This causes issues when Django is running in a multi-pod Kubernetes deployment. Once I removed this issue and all pods ran successfully, the round-robin started working.
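
One common way to keep stale *.pyc files out of the container, offered here as a hedged sketch rather than the exact fix used above, is to stop Python from writing bytecode in the image and to exclude cached files from the build context:

# Dockerfile fragment (sketch): prevent Python from writing *.pyc files
ENV PYTHONDONTWRITEBYTECODE=1

# .dockerignore (sketch): keep locally generated bytecode out of the image
**/__pycache__
*.pyc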
