Kubernetes Ingress - Second Node Port is not responding


Problem Description

I am running a K8S cluster on-premises (nothing in the cloud) with one K8S master and two worker nodes.

  • k8s-master:192.168.100.100
  • worker-node-1:192.168.100.101
  • worker-node-2:192.168.100.102

I used kubernetes/ingress-nginx for routing traffic to my simple app. These are the pods running on both worker nodes:

[root@k8s-master ingress]# kubectl get pods -A -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE   IP                NODE            NOMINATED NODE   READINESS GATES
default                hello-685445b9db-b7nql                       1/1     Running   0          44m   10.5.2.7          worker-node-2   <none>           <none>
default                hello-685445b9db-ckndn                       1/1     Running   0          44m   10.5.2.6          worker-node-2   <none>           <none>
default                hello-685445b9db-vd6h2                       1/1     Running   0          44m   10.5.1.18         worker-node-1   <none>           <none>
default                ingress-nginx-controller-56c75d774d-p7whv    1/1     Running   1          30h   10.5.1.14         worker-node-1   <none>           <none>
kube-system            coredns-74ff55c5b-s8zss                      1/1     Running   12         16d   10.5.0.27         k8s-master      <none>           <none>
kube-system            coredns-74ff55c5b-w6rsh                      1/1     Running   12         16d   10.5.0.26         k8s-master      <none>           <none>
kube-system            etcd-k8s-master                              1/1     Running   12         16d   192.168.100.100   k8s-master      <none>           <none>
kube-system            kube-apiserver-k8s-master                    1/1     Running   12         16d   192.168.100.100   k8s-master      <none>           <none>
kube-system            kube-controller-manager-k8s-master           1/1     Running   14         16d   192.168.100.100   k8s-master      <none>           <none>
kube-system            kube-flannel-ds-76mt8                        1/1     Running   1          30h   192.168.100.102   worker-node-2   <none>           <none>
kube-system            kube-flannel-ds-bfnjw                        1/1     Running   10         16d   192.168.100.101   worker-node-1   <none>           <none>
kube-system            kube-flannel-ds-krgzg                        1/1     Running   13         16d   192.168.100.100   k8s-master      <none>           <none>
kube-system            kube-proxy-6bq6n                             1/1     Running   1          30h   192.168.100.102   worker-node-2   <none>           <none>
kube-system            kube-proxy-df8fn                             1/1     Running   13         16d   192.168.100.100   k8s-master      <none>           <none>
kube-system            kube-proxy-z8q2z                             1/1     Running   10         16d   192.168.100.101   worker-node-1   <none>           <none>
kube-system            kube-scheduler-k8s-master                    1/1     Running   12         16d   192.168.100.100   k8s-master      <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-799cd98cf6-zh8xs   1/1     Running   9          16d   192.168.100.101   worker-node-1   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-74d688b6bc-hvxgm        1/1     Running   10         16d   10.5.1.17         worker-node-1   <none>           <none>

And these are the services running on my cluster:

[root@k8s-master ingress]# kubectl get svc
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
hello                                NodePort    10.105.236.241   <none>        80:31999/TCP                 30h
ingress-nginx-controller             NodePort    10.110.141.41    <none>        80:30428/TCP,443:32682/TCP   30h
ingress-nginx-controller-admission   ClusterIP   10.109.15.31     <none>        443/TCP                      30h
kubernetes                           ClusterIP   10.96.0.1        <none>        443/TCP                      16d

And this is the description of the ingress:

[root@k8s-master ingress]# kubectl describe  ingress ingress-hello
Name:             ingress-hello
Namespace:        default
Address:          10.110.141.41
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /hello   hello:80 (10.5.1.18:80,10.5.2.6:80,10.5.2.7:80)
Annotations:  kubernetes.io/ingress.class: nginx
              nginx.ingress.kubernetes.io/rewrite-target: /
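
For reference, here is a minimal sketch of an Ingress manifest that would produce the describe output above. The apiVersion and pathType are assumptions (they do not appear in the describe output); everything else is taken from the fields shown:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix   # assumed; not shown in the describe output
        backend:
          service:
            name: hello
            port:
              number: 80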

The issue is this: accessing the first node by visiting the worker-node-1 IP address with the ingress controller port 30428, http://192.168.100.101:30428, works fine with no problems. But accessing worker-node-2 on the same ingress port, http://192.168.100.102:30428, is NOT RESPONDING, neither from outside the node nor from inside the node itself. I also tried executing a telnet command (inside worker node 2), with no luck either:

[root@worker-node-2 ~]# telnet 192.168.100.102 30428
Trying 192.168.100.102...

The most interesting thing is that the port shows up in the netstat output. Executing the command from inside node-2 shows ingress port 30428 in the LISTEN state:

[root@worker-node-2 ~]# netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      1284/kubelet
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      2578/kube-proxy
tcp        0      0 0.0.0.0:32682           0.0.0.0:*               LISTEN      2578/kube-proxy
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1856/dnsmasq
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1020/sshd
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1016/cupsd
tcp        0      0 127.0.0.1:41561         0.0.0.0:*               LISTEN      1284/kubelet
tcp        0      0 0.0.0.0:30428           0.0.0.0:*               LISTEN      2578/kube-proxy
tcp        0      0 0.0.0.0:31999           0.0.0.0:*               LISTEN      2578/kube-proxy
tcp6       0      0 :::10250                :::*                    LISTEN      1284/kubelet
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
tcp6       0      0 :::10256                :::*                    LISTEN      2578/kube-proxy
tcp6       0      0 :::22                   :::*                    LISTEN      1020/sshd
tcp6       0      0 ::1:631                 :::*                    LISTEN      1016/cupsd
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           929/avahi-daemon: r
udp        0      0 0.0.0.0:44997           0.0.0.0:*                           929/avahi-daemon: r
udp        0      0 192.168.122.1:53        0.0.0.0:*                           1856/dnsmasq
udp        0      0 0.0.0.0:67              0.0.0.0:*                           1856/dnsmasq
udp        0      0 0.0.0.0:111             0.0.0.0:*                           1/systemd

Based on my understanding, every worker node must expose the NodePort for the ingress controller, which is 30428, right?

Edited: I found that "ingress-nginx-controller-56c75d774d-p7whv" is deployed only on worker-node-1. Do I need to make sure that the ingress-nginx controller is running on all nodes? If that is the case, how do I achieve it?
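
For reference, if the controller did need to run on every node, one common approach is to convert its Deployment to a DaemonSet. A minimal sketch, assuming the controller is the plain Deployment named ingress-nginx-controller in the default namespace (as the pod list above suggests):

# Sketch only: convert the ingress-nginx controller Deployment to a DaemonSet
# so that one controller pod runs on every schedulable node.
kubectl get deployment ingress-nginx-controller -o yaml > controller.yaml
# Edit controller.yaml: change "kind: Deployment" to "kind: DaemonSet" and
# delete the "replicas", "strategy" and "status" fields, which a DaemonSet rejects.
kubectl delete deployment ingress-nginx-controller
kubectl apply -f controller.yaml

That said, with a NodePort Service kube-proxy is supposed to open port 30428 on every node regardless of where the controller pod runs, so controller placement alone should not explain the difference between the two nodes.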

Recommended Answer

Kubernetes networking (kube-proxy, to be more specific) uses iptables to control the network connections between pods and nodes. Since CentOS 8 uses nftables instead of iptables, this causes networking issues.
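
A quick way to check which backend a node is actually using (the output below is illustrative; the exact version string will vary):

[root@worker-node-2 ~]# iptables --version
iptables v1.8.4 (nf_tables)

If the output says (nf_tables), the node is on the nftables backend; (legacy) would indicate the classic iptables backend.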

Calico v3.8.1+ includes support for hosts that use iptables in NFT mode. The solution is to set the FELIX_IPTABLESBACKEND=NFT option. This tells Calico to use the nftables backend.

This parameter controls which variant of the iptables binary Felix uses. Set this to Auto for auto-detection of the backend. If a specific backend is needed, then use NFT for hosts using the netfilter backend or Legacy for others. [Default: Legacy]
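
For example, assuming Calico is installed from the standard manifest as the calico-node DaemonSet in the kube-system namespace, the option could be set like this (a sketch, not verified against this particular cluster):

# Sketch: point Felix at the nftables backend on an assumed calico-node DaemonSet.
kubectl set env daemonset/calico-node -n kube-system FELIX_IPTABLESBACKEND=NFT
# Or let Felix auto-detect the backend instead:
kubectl set env daemonset/calico-node -n kube-system FELIX_IPTABLESBACKEND=Auto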

Please visit the Calico documentation page to check how to configure Felix. For more reading, see the related GitHub issues.
