kube-proxy in iptables mode is not working
Question
I have:
- Kubernetes: v1.1.1
- iptables v1.4.21
- kernel: 4.2.0-18-generic, as shipped with Ubuntu wily
- networking over an L2 VLAN terminated on the switches
- no cloud provider
What I do
I'm experimenting with the iptables mode of kube-proxy. I have enabled it with the --proxy_mode=iptables
argument. It seems some rule is missing:
iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 8 packets, 459 bytes)
pkts bytes target prot opt in out source destination
2116 120K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 2 packets, 120 bytes)
pkts bytes target prot opt in out source destination
718 45203 KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
Chain POSTROUTING (policy ACCEPT 5 packets, 339 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4d415351
Chain KUBE-NODEPORTS (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ tcp dpt:31195 MARK set 0x4d415351
0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ tcp dpt:31195
0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ tcp dpt:30873 MARK set 0x4d415351
0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ tcp dpt:30873
Chain KUBE-SEP-5IXMK7UWPGVTWOJ7 (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.160.8 0.0.0.0/0 /* mngbox/jumpbox:ssh */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ tcp to:10.116.160.8:22
Chain KUBE-SEP-BNPLX5HQYOZINWEQ (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.161.6 0.0.0.0/0 /* kube-system/monitoring-influxdb:api */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:api */ tcp to:10.116.161.6:8086
Chain KUBE-SEP-CJMHKLXPTJLTE3OP (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.254.2 0.0.0.0/0 /* default/kubernetes: */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes: */ tcp to:10.116.254.2:6443
Chain KUBE-SEP-GSM3BZTEXEBWDXPN (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.160.7 0.0.0.0/0 /* kube-system/kube-dns:dns */ MARK set 0x4d415351
0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:10.116.160.7:53
Chain KUBE-SEP-OAYOAJINXRPUQDA3 (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.160.7 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:10.116.160.7:53
Chain KUBE-SEP-PJJZDQNXDGWM7MU6 (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.160.5 0.0.0.0/0 /* default/docker-registry-fe:tcp */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ tcp to:10.116.160.5:443
Chain KUBE-SEP-RWODGLKOVWXGOHUR (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.161.6 0.0.0.0/0 /* kube-system/monitoring-influxdb:http */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:http */ tcp to:10.116.161.6:8083
Chain KUBE-SEP-WE3Z7KMHA6KPJWKK (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.161.6 0.0.0.0/0 /* kube-system/monitoring-grafana: */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-grafana: */ tcp to:10.116.161.6:8080
Chain KUBE-SEP-YBQVM4LA4YMMZIWH (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.161.3 0.0.0.0/0 /* kube-system/monitoring-heapster: */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-heapster: */ tcp to:10.116.161.3:8082
Chain KUBE-SEP-YMZS7BLP4Y6MWTX5 (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.160.9 0.0.0.0/0 /* infra/docker-registry-backend:docker-registry-backend */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* infra/docker-registry-backend:docker-registry-backend */ tcp to:10.116.160.9:5000
Chain KUBE-SEP-ZDOOYAKDERKR43R3 (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.160.10 0.0.0.0/0 /* default/kibana-logging: */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kibana-logging: */ tcp to:10.116.160.10:5601
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SVC-JRXTEHDDTAFMSEAS tcp -- * * 0.0.0.0/0 10.116.0.48 /* kube-system/monitoring-grafana: cluster IP */ tcp dpt:80
0 0 KUBE-SVC-CK6HVV5A27TDFNIA tcp -- * * 0.0.0.0/0 10.116.0.188 /* kube-system/monitoring-influxdb:api cluster IP */ tcp dpt:8086
0 0 KUBE-SVC-DKEW3YDJFV3YJLS2 tcp -- * * 0.0.0.0/0 10.116.0.6 /* infra/docker-registry-backend:docker-registry-backend cluster IP */ tcp dpt:5000
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.116.0.2 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-WEHLQ23XZWSA5ZX3 tcp -- * * 0.0.0.0/0 10.116.0.188 /* kube-system/monitoring-influxdb:http cluster IP */ tcp dpt:8083
0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 10.116.1.142 /* default/docker-registry-fe:tcp cluster IP */ tcp dpt:443
0 0 MARK tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/docker-registry-fe:tcp external IP */ tcp dpt:443 MARK set 0x4d415351
0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/docker-registry-fe:tcp external IP */ tcp dpt:443 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL
0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/docker-registry-fe:tcp external IP */ tcp dpt:443 ADDRTYPE match dst-type LOCAL
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.116.0.2 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-7IHGTXJ4CF2KVXJZ tcp -- * * 0.0.0.0/0 10.116.1.126 /* kube-system/monitoring-heapster: cluster IP */ tcp dpt:80
0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 10.116.1.175 /* mngbox/jumpbox:ssh cluster IP */ tcp dpt:2345
0 0 MARK tcp -- * * 0.0.0.0/0 10.116.254.3 /* mngbox/jumpbox:ssh external IP */ tcp dpt:2345 MARK set 0x4d415351
0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 10.116.254.3 /* mngbox/jumpbox:ssh external IP */ tcp dpt:2345 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL
0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 10.116.254.3 /* mngbox/jumpbox:ssh external IP */ tcp dpt:2345 ADDRTYPE match dst-type LOCAL
0 0 KUBE-SVC-6N4SJQIF3IX3FORG tcp -- * * 0.0.0.0/0 10.116.0.1 /* default/kubernetes: cluster IP */ tcp dpt:443
0 0 KUBE-SVC-B6ZEWWY2BII6JG2L tcp -- * * 0.0.0.0/0 10.116.0.233 /* default/kibana-logging: cluster IP */ tcp dpt:8888
0 0 MARK tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/kibana-logging: external IP */ tcp dpt:8888 MARK set 0x4d415351
0 0 KUBE-SVC-B6ZEWWY2BII6JG2L tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/kibana-logging: external IP */ tcp dpt:8888 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL
0 0 KUBE-SVC-B6ZEWWY2BII6JG2L tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/kibana-logging: external IP */ tcp dpt:8888 ADDRTYPE match dst-type LOCAL
0 0 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-6N4SJQIF3IX3FORG (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-CJMHKLXPTJLTE3OP all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes: */
Chain KUBE-SVC-7IHGTXJ4CF2KVXJZ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-YBQVM4LA4YMMZIWH all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-heapster: */
Chain KUBE-SVC-B6ZEWWY2BII6JG2L (3 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-ZDOOYAKDERKR43R3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kibana-logging: */
Chain KUBE-SVC-CK6HVV5A27TDFNIA (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-BNPLX5HQYOZINWEQ all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:api */
Chain KUBE-SVC-DKEW3YDJFV3YJLS2 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-YMZS7BLP4Y6MWTX5 all -- * * 0.0.0.0/0 0.0.0.0/0 /* infra/docker-registry-backend:docker-registry-backend */
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-OAYOAJINXRPUQDA3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SVC-GLKZVFIDXOFHLJLC (4 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-5IXMK7UWPGVTWOJ7 all -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */
Chain KUBE-SVC-JRXTEHDDTAFMSEAS (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-WE3Z7KMHA6KPJWKK all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-grafana: */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-GSM3BZTEXEBWDXPN all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */
Chain KUBE-SVC-WEHLQ23XZWSA5ZX3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-RWODGLKOVWXGOHUR all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:http */
Chain KUBE-SVC-XZFGDLM7GMJHZHOY (4 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-PJJZDQNXDGWM7MU6 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */
When I make a request to the service IP, in my case 10.116.0.2, I get an error:
;; connection timed out; no servers could be reached
while a request to the endpoint 10.116.160.7 works fine. I can see that the traffic is not hitting the kube-proxy rules at all, so something is probably missing.
I would highly appreciate any hint about the missing rule.
EDIT: I've updated my initial question with the missing information requested by thokin. He pointed out a really good way to debug the kube-proxy iptables rules, and I could identify my problem with:
for c in PREROUTING OUTPUT POSTROUTING; do iptables -t nat -I $c -d 10.116.160.7 -j LOG --log-prefix "DBG@$c: "; done
for c in PREROUTING OUTPUT POSTROUTING; do iptables -t nat -I $c -d 10.116.0.2 -j LOG --log-prefix "DBG@$c: "; done
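These LOG rules are easy to leave behind once debugging is done. A small sketch, assuming the same chains and destination addresses as the loops above, that prints the matching cleanup commands (`-D` instead of `-I`) for review rather than executing them; piping the output to `sh` would actually remove the rules:

```shell
# Print the iptables commands that would delete the debug LOG rules
# inserted above (one per chain/address combination).
for c in PREROUTING OUTPUT POSTROUTING; do
  for ip in 10.116.160.7 10.116.0.2; do
    echo iptables -t nat -D $c -d $ip -j LOG --log-prefix "\"DBG@$c: \""
  done
done
```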
Then I executed the following commands:
# nslookup kubernetes.default.svc.psc01.cluster 10.116.160.7
Server:  10.116.160.7
Address: 10.116.160.7#53
Name: kubernetes.default.svc.psc01.cluster
Address: 10.116.0.1
# nslookup kubernetes.default.svc.psc01.cluster 10.116.0.2
;; connection timed out; no servers could be reached
As a result, I got a different "source" address and outgoing interface:
[701768.263847] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=12436 PROTO=UDP SPT=54501 DPT=53 LEN=62
[702620.454211] DBG@OUTPUT: IN= OUT=docker0 SRC=10.116.176.1 DST=10.116.160.7 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=22733 PROTO=UDP SPT=28704 DPT=53 LEN=62
[702620.454224] DBG@POSTROUTING: IN= OUT=docker0 SRC=10.116.176.1 DST=10.116.160.7 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=22733 PROTO=UDP SPT=28704 DPT=53 LEN=62
[702626.318258] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
[702626.318263] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
[702626.318266] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
[702626.318270] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
[702626.318284] DBG@POSTROUTING: IN= OUT=docker0 SRC=10.116.250.252 DST=10.116.160.7 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
So by adding the route
ip route add 10.116.0.0/23 dev docker0
everything works now!
Answer
For the future: the output of iptables-save
is much easier to read (for me, anyway).
I don't see anything missing here.
- KUBE-SERVICES traps 10.116.0.2 port 53/UDP and passes it to KUBE-SVC-TCOU7JCQXEZGVUNU
- KUBE-SVC-TCOU7JCQXEZGVUNU has just one endpoint, so it jumps to KUBE-SEP-GSM3BZTEXEBWDXPN
- KUBE-SEP-GSM3BZTEXEBWDXPN DNATs to 10.116.160.7 port 53/UDP
If you assert that 10.116.160.7 works while 10.116.0.2 does not, that is strange indeed. It suggests that the iptables rules are not triggering at all. Are you testing from the node itself or from a container?
What networking are you using? L3 (underlay)? Flannel? OVS? Something else?
What cloud provider, if any?
First step to debug, run:
for c in PREROUTING OUTPUT; do iptables -t nat -I $c -d 10.116.0.2 -j LOG --log-prefix "DBG@$c: "; done
That will log any packets iptables sees that are destined for your service IP. Now look at dmesg.
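The interesting fields in those log lines are the outgoing interface and the source/destination pair, as the question's EDIT later shows. A sketch of pulling just those fields out; the sample line follows the kernel's `LOG` format seen above, and in practice the input would come from `dmesg | grep 'DBG@'`:

```shell
# Reduce an iptables LOG line to its OUT=, SRC= and DST= fields.
sample='[701768.263847] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 PROTO=UDP SPT=54501 DPT=53'
echo "$sample" | awk '{
  for (i = 1; i <= NF; i++)
    if ($i ~ /^(OUT=|SRC=|DST=)/) printf "%s ", $i
  print ""
}'
```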