Running kubectl proxy from same pod vs different pod on same node - what's the difference?


Problem description


I'm experimenting with this, and I'm noticing a difference in behavior that I'm having trouble understanding, namely between running kubectl proxy from within a pod vs running it in a different pod.

The sample configuration runs kubectl proxy and the container that needs it* in the same pod of a daemonset, i.e.

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
# ...
spec:
  template:
    metadata:
    # ...
    spec:
      containers:

      # this container needs kubectl proxy to be running:
      - name: l5d
        # ...

      # so, let's run it:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
         - "proxy"
         - "-p"
         - "8001"

When doing this on my cluster, I get the expected behavior. However, I will run other services that also need kubectl proxy, so I figured I'd rationalize that into its own daemon set to ensure it's running on all nodes. I thus removed the kube-proxy container and deployed the following daemon set:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-proxy
  labels:
    app: kube-proxy
spec:
  template:
    metadata:
      labels:
        app: kube-proxy
    spec:
      containers:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"

In other words, the same container configuration as previously, but now running in independent pods on each node instead of within the same pod. With this configuration "stuff doesn't work anymore"**.

I realize the solution (at least for now) is to just run the kube-proxy container in any pod that needs it, but I'd like to know why I need to. Why isn't just running it in a daemonset enough?

I've tried to find more information about running kubectl proxy like this, but my searches drown in results about running it to access a remote cluster from a local environment, i.e. not at all what I'm after.


I include the following details not because I think they're relevant, but because they might be, even though I'm convinced they're not:

*) a Linkerd ingress controller, but I think that's irrelevant

**) in this case, the "working" state is that the ingress controller complains that the destination is unknown because there's no matching ingress rule, while the "not working" state is a network timeout.

Solution

"…namely between running kubectl proxy from within a pod vs running it in a different pod."

Assuming your cluster has a software-defined network, such as Flannel or Calico, a Pod has its own IP and all containers within a Pod share the same networking space. Thus:

containers:
- name: c0
  command: ["curl", "127.0.0.1:8001"]
- name: c1
  command: ["kubectl", "proxy", "-p", "8001"]

will work, whereas in a DaemonSet the two containers are by definition not in the same Pod, and thus the hypothetical c0 above would need to use the DaemonSet Pod's IP to contact port 8001. That story is made more complicated by the fact that kubectl proxy by default only listens on 127.0.0.1, so you would need to alter the kubectl proxy invocation in the DaemonSet's Pod to include --address='0.0.0.0' --accept-hosts='.*' to even permit such cross-Pod communication. I believe you also need to declare the ports: array in the DaemonSet configuration, since you are now exposing that port into the cluster, but I'd have to double-check whether ports: is merely polite, or is actually required.
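
To make that concrete, here is a minimal sketch of how the question's standalone DaemonSet could be adjusted along those lines. The --address and --accept-hosts flags are standard kubectl proxy options; everything else simply mirrors the manifest from the question, and the ports: entry is included on the assumption that declaring the exposed port is at least good practice, per the caveat above:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-proxy
  labels:
    app: kube-proxy
spec:
  template:
    metadata:
      labels:
        app: kube-proxy
    spec:
      containers:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
        # listen on all interfaces rather than only 127.0.0.1
        - "--address=0.0.0.0"
        # accept requests whose Host header is not localhost/127.0.0.1
        - "--accept-hosts=.*"
        ports:
        # documents the port other Pods would use to reach this proxy
        - containerPort: 8001

A consumer in another Pod would then have to target this Pod's IP (e.g. curl http://<daemonset-pod-ip>:8001/) instead of 127.0.0.1, which is exactly the extra coupling the same-Pod layout avoids.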
