Is the Google Container Engine Kubernetes Service LoadBalancer sending traffic to unresponsive hosts?


Problem description


Question: Is the Google Cloud network LoadBalancer that's created by Kubernetes (via Google Container Engine) sending traffic to hosts that aren't listening? "This target pool has no health check, so traffic will be sent to all instances regardless of their status."

I have a Service (an NGINX reverse proxy) that targets specific pods and exposes TCP ports 80 and 443. In my example only one NGINX pod is running within the instance pool. The Service type is "LoadBalancer". On Google Container Engine this creates a new LoadBalancer (LB) that specifies a target pool containing the specific VM instances. An ephemeral external IP address for the LB and an associated firewall rule that allows incoming traffic are then created.
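
A minimal sketch of the kind of Service I mean (the name and label are illustrative placeholders, not my exact manifest):

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-proxy          # illustrative name
spec:
  type: LoadBalancer         # this is what makes Container Engine create the network LB and target pool
  selector:
    app: nginx-proxy         # illustrative pod label
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
EOF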

My issue is that the Kubernetes auto-generated firewall rule description is "KubernetesAutoGenerated_OnlyAllowTrafficForDestinationIP_1.1.1.1" (the IP is the LB external IP). In testing I've noticed that even though each VM Instance has an external IP address, I cannot contact it on port 80 or 443 on either of the instance IP addresses, only on the LB IP. This isn't bad for external user traffic, but when I tried to create a Health Check for my LB I found that it always saw the services as unavailable when it checked each VM Instance individually.

I have firewall rules in place so that any IP address may contact TCP ports 80 and 443 on any instance within my pool, so that's not the issue.

Can someone explain this to me? It makes me think that the LB is passing HTTP requests to both instances despite only one of those instances having the NGINX pod running on it.

Solution

Is the Google Cloud network LoadBalancer that's created by Kubernetes (via Google Container Engine) sending traffic to hosts that aren't listening?

All hosts (that are currently running a functional kube-proxy process) are capable of receiving and handling incoming requests for the externalized service. The requests will land on an arbitrary node VM in your cluster, match an iptables rule, and be forwarded (by the kube-proxy process) to a pod whose labels match the service's label selector.
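
One hedged way to see this machinery on a node VM (the service name "nginx-proxy" below is just an example) is to look at the NAT rules that kube-proxy installs and at the kube-proxy process itself:

$ sudo iptables-save -t nat | grep nginx-proxy   # the rules kube-proxy created for the service
$ ps aux | grep [k]ube-proxy                     # the kube-proxy process that does the forwarding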

So the case where a healthchecker would prevent requests from being dropped is if you had a node VM that was running in a broken state. The VM would still have the target tag matching the forwarding rule but wouldn't be able to handle the incoming packets.

In testing I've noticed that even though each VM Instance has an external IP address, I cannot contact it on port 80 or 443 on either of the instance IP addresses, only on the LB IP.

This is working as intended. Each service can use any port that it desires, meaning that multiple services can use ports 80 and 443. If a packet arrives at the host IP on port 80, the host has no way to know which of the (possibly many) services using port 80 the packet should be forwarded to. The iptables rules for services handle packets that are destined for the virtual internal cluster service IP and the external service IP, but not the host IP.
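
To make that concrete, with $LB_IP and $NODE_IP standing in for the LB's external IP and a node's own external IP, this is roughly what the behavior described in the question corresponds to:

$ curl -m 5 http://$LB_IP/     # matched by the service iptables rules and forwarded to the NGINX pod
$ curl -m 5 http://$NODE_IP/   # no matching rule for the host IP, so the request never reaches the pod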

This isn't bad for external user traffic but when I tried to create a Health Check for my LB I found that it always saw the services as unavailable when it checked each VM Instance individually.

If you want to set up a health check to verify that a node is working properly, you can health check the kubelet process, which runs on port 10250, after installing a firewall rule that allows the health checkers to reach it:

$ gcloud compute firewall-rules create kubelet-healthchecks \
  --source-ranges 130.211.0.0/22 \
  --target-tags $TAG \
  --allow tcp:10250

(check out the Container Engine HTTP Load Balancer documentation to help find what you should be using for $TAG).
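
If you just want to inspect a node's tags directly, one way (the instance name here is a made-up example) is:

$ gcloud compute instances describe gke-mycluster-node-1 | grep -A 3 'tags:'   # shows the node's network tags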

It would be better to health check the kube-proxy process directly, but it binds only to localhost. The kubelet process binds to all interfaces, so it is reachable by the health checkers, and it should serve as a good indicator that the node is healthy enough to serve requests for your service.
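
To tie it together, here is a hedged sketch of creating such a health check and attaching it to the target pool that Kubernetes created ($TARGET_POOL and the health check name are placeholders; if your kubelet serves HTTPS on 10250, an HTTP check of /healthz may need adjusting):

$ gcloud compute http-health-checks create kubelet-health \
  --port 10250 \
  --request-path /healthz
$ gcloud compute target-pools add-health-checks $TARGET_POOL \
  --http-health-check kubelet-health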
