Kubernetes unable to access the kube-apiserver from inside pod on node


Question

I have configured a Vagrant-backed Kubernetes cluster, but I am unable to access the kube-apiserver running on the master from within pods running on the nodes. I am trying to look up a service from within a pod via the API, but it looks like the API keeps dropping the connection.

Using curl from within the pod, I get the following output:

root@itest-pod-2:/# curl -v \
--insecure -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://$KUBERNETES_SERVICE_HOST:443/api/v1/namespaces/default/services?labelSelector=name%3Dtest-server
* Hostname was NOT found in DNS cache
*   Trying 10.245.0.1...
* Connected to 10.245.0.1 (10.245.0.1) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to 10.245.0.1:443 
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to 10.245.0.1:443 
root@itest-pod-2:/# 

However, if I configure a single-machine environment by simply installing all the node components onto the master, I am able to contact the API from within a pod:

root@itest-pod-3:/# curl -v --insecure \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://$KUBERNETES_SERVICE_HOST:443/api/v1/namespaces/default/services?labelSelector=name%3Dtest-server
* Hostname was NOT found in DNS cache
*   Trying 10.245.0.1...
* Connected to 10.245.0.1 (10.245.0.1) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-SHA
* Server certificate:
*    subject: CN=10.0.2.15@1452869292
*    start date: 2016-01-15 14:48:12 GMT
*    expire date: 2017-01-14 14:48:12 GMT
*    issuer: CN=10.0.2.15@1452869292
*    SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /api/v1/namespaces/default/services?labelSelector=name%3Dtest-server HTTP/1.1
> User-Agent: curl/7.38.0
> Host: 10.245.0.1
> Accept: */*
> Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tdDY3cXUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImIxNGI4YWE3LWJiOTgtMTFlNS1iNjhjLTA4MDAyN2FkY2NhZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.HhPnit7Sfv-yUkMW6Cy9ZVbuiV2wt5PLaPSP-uZtaByOPagkb8d-8zBQE8Lx53lqxMFwBmjjxSWl-vKtSGa0ka6rEkq_xWtFJb8uDDlxz_R63R6IJBWB8YhwB7SzPVWgtHohj55D3pL8-r8NOQSQVXFAHaiGTlzmtwiE3CmJv3yBzBLALG0yvtW2YgwrO9jlxCGdFIOKae-5eduiOyZHUimxAgfBkbwSNfSzXYZTJNryfPiDBKZybh9c3Wd-pNsSZyw9gbBhbGFM7EiK9pWsdViQ__fZA2JbxX78YbajWE6CeL4FWLKFu4MuIlnmhLOvOXia_9WXz1B8XJ-MlzclZQ
> 
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Fri, 15 Jan 2016 16:37:40 GMT
< Content-Length: 171
< 
{
  "kind": "ServiceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/default/services",
    "resourceVersion": "1518"
  },
  "items": []
}
* Connection #0 to host 10.245.0.1 left intact

What's confusing me is that the configuration is the same in both cases, except that the node components have been installed onto the master, which makes me think it is not a misconfiguration of SSL/HTTPS so much as something to do with the Kubernetes network configuration.

I've looked into the logs of the apiserver but I can't see anything related to these dropped connections.

Any help would be greatly appreciated.

Answer

The problem was that we had not set the bind address for the apiserver (we had set --insecure-bind-address but not --bind-address). We thought this would not be a problem, since by default the apiserver binds on all interfaces.
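
A minimal sketch of what the apiserver flags effectively looked like (the values here are illustrative, not our exact startup arguments):

kube-apiserver \
  --insecure-bind-address=127.0.0.1 \
  --insecure-port=8080 \
  --secure-port=443 \
  ...
  # no --bind-address here, so the secure port listens on all interfaces and
  # the address published for the kubernetes service ends up being eth0's IP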

When bound on all interfaces, calls to /api/v1/endpoints return the eth0 IP for the apiserver's secure port. In most cases this would probably be fine, but since we were running Kubernetes on VirtualBox VMs, eth0 is the NAT interface created by VirtualBox, which can only be reached through the host ports on which VBoxHeadless is listening.
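
One quick way to see which address the apiserver is publishing to the cluster (and hence where pods are ultimately sent for the kubernetes service) is to look at its endpoints object; in this situation it shows the eth0 NAT address rather than a reachable one:

kubectl get endpoints kubernetes -o yaml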

When outgoing traffic leaves a pod, it hits a set of iptables rules that match the cluster service IPs and redirect to a port on the proxy; the proxy then forwards the request to the actual machine in the cluster.
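
Those rules can be inspected on any node; with the userspace proxy in use here they redirect each service IP to a local port that kube-proxy listens on (the exact chain names vary between Kubernetes versions, so it is easiest to grep broadly):

sudo iptables -t nat -S | grep -i kube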

In this case kube-proxy did not have the externally reachable IP for the apiserver available; instead it had the unreachable eth0 address used by VirtualBox.

Oddly, though, it seems as if the proxy then attempts to contact the API on its insecure port (it knows the intended destination for the request from the iptables rules it creates). Since our request in this case is HTTPS, the apiserver drops it after the first Client Hello.

Hitting the insecure port directly with HTTPS normally looks like this in curl:

root@app-master-0:/home/vagrant# curl -v --insecure \
https://10.235.1.2:8080/api/v1/namespaces/default/services?labelSelector=name%3Dtest-server
* Hostname was NOT found in DNS cache
*   Trying 10.235.1.2...
* Connected to 10.235.1.2 (10.235.1.2) port 8080 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
* Closing connection 0
curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
protocol

But when the request is proxied through kube-proxy, it looks like this:

root@itest-pod-2:/# curl -v --insecure \
https://$KUBERNETES_SERVICE_HOST:443/api/v1/namespaces/default/services?labelSelector=name%3Dtest-server
* Hostname was NOT found in DNS cache
*   Trying 10.245.0.1...
* Connected to 10.245.0.1 (10.245.0.1) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to 10.245.0.1:443
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to 10.245.0.1:443

By adding --bind-address=xxxx (the externally reachable eth1 IP) to the apiserver's args, we were able to fix this.
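
For reference, a sketch of the corrected flags; the eth1 address below is a placeholder for whatever host-only/private IP the master is actually reachable on:

kube-apiserver \
  --bind-address=<eth1 ip> \
  --insecure-bind-address=127.0.0.1 \
  --insecure-port=8080 \
  --secure-port=443 \
  ...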
