Request to localhost from pod via its own service does not work
I have a service named foo with a selector to the foo pod:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: foo
  name: foo
  namespace: bar
spec:
  clusterIP: 172.20.166.230
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    name: foo
  sessionAffinity: None
  type: ClusterIP
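To rule out a selector mismatch in a setup like this, one way to confirm that the Service actually targets the pod is to check its endpoints and DNS resolution (a diagnostic sketch, assuming a working kubectl context; the names and namespace come from the manifests above):

```shell
# List the pod IPs the Service has selected; an empty ENDPOINTS column
# would mean the selector does not match any running pod.
kubectl get endpoints foo -n bar

# Verify that the service name resolves from inside the pod
# (the pod name is hypothetical - substitute the real one).
kubectl exec -n bar foo-pod-name -- nslookup foo
```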
I have a deployment/pod named foo with a foo label:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "3"
  generation: 3
  labels:
    name: foo
  name: foo
  namespace: bar
spec:
  selector:
    matchLabels:
      name: foo
  template:
    metadata:
      labels:
        name: foo
    spec:
      containers:
      - image: my/image:tag
        imagePullPolicy: Always
        name: foo
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: ClusterFirst
I make a request from the foo pod to the foo host; the hostname resolves, but the request just doesn't pass through:
$ curl -vvv foo:8080
* Rebuilt URL to: foo:8080/
* Trying 172.20.166.230...
* TCP_NODELAY set
Is this supposed to work like that in Kubernetes?
I don't have any problems requesting foo from other pods in the same namespace.
The reason I don't simply use localhost:8080 (which works fine) is that the same config file, listing these hosts, is used by different pods, so I don't want to write pod-specific logic for each one.
Kubernetes 1.6.4, single-node cluster, iptables mode.
It looks like this is the default behavior when using iptables as the proxy mode: the request to the service VIP is DNAT'ed back to the pod's own IP, and such hairpinned traffic is dropped by the node's bridge unless the kubelet's hairpin mode allows a pod to reach itself through its own Service.
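The usual knob for this behavior is the kubelet's --hairpin-mode flag, which controls whether traffic a pod sends to its own Service VIP can be NAT'ed back to itself. A sketch of checking and changing it on the node (the flag and its values are from the kubelet reference; the exact way to pass it depends on how kubelet is launched on your distribution):

```shell
# See whether the running kubelet was started with an explicit hairpin mode.
ps aux | grep [k]ubelet | grep -o 'hairpin-mode=[a-z-]*'

# The flag accepts three values:
#   promiscuous-bridge  - put the container bridge in promiscuous mode (default)
#   hairpin-veth        - set the hairpin flag on each pod's veth interface
#   none                - do not allow hairpin traffic at all
# Restarting kubelet with hairpin-veth (or promiscuous-bridge) should let a
# pod reach itself through its own Service VIP:
kubelet --hairpin-mode=hairpin-veth
```

If the mode is already promiscuous-bridge but hairpin traffic still hangs, it is worth checking that the bridge actually entered promiscuous mode, since some network plugins interfere with it.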