How do I get one pod to network to another pod in Kubernetes? (SIMPLE)


I've been banging my head against this wall on and off for a while. There is a ton of information on Kubernetes on the web, but it's all assuming so much knowledge that n00bs like me don't really have much to go on.

So, can anyone share a simple example of the following (as a yaml file)? All I want is

  • two pods
  • let's say one pod has a backend (I don't know - node.js), and one has a frontend (say React).
  • A way to network between them.

And then an example of making an api call from the back to the front.

I start looking into this sort of thing, and all of a sudden I hit this page - https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this. This is super unhelpful. I don't want or need advanced network policies, nor do I have the time to go through several different service layers that are mapped on top of kubernetes. I just want to figure out a trivial example of a network request.

Hopefully if this example exists on stackoverflow it will serve other people as well.

Any help would be appreciated. Thanks.

EDIT; it looks like the easiest example may be using the Ingress controller.

EDIT EDIT;

I'm working to try and get a minimal example deployed - I'll walk through some steps here and point out my issues.

So below is my yaml file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:      
  rules:
  - host: www.kubeplaytime.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80

What I believe this is doing is

  • Deploying a frontend and backend app - I deployed patientplatypus/frontend_example and patientplatypus/backend_example to dockerhub and then pull the images down. One open question I have is: what if I don't want to pull the images from docker hub, and would rather just load them from my localhost - is that possible? In this case I would push my code to the production server, build the docker images on the server and then upload to kubernetes. The benefit is that I don't have to rely on dockerhub if I want my images to be private. (A sketch of the local-image approach appears after this list.)

  • It is creating two service endpoints that route outside traffic from a web browser to each of the deployments. These Services are of type LoadBalancer because they balance the traffic among the (in this case 3) replicas that I have in each deployment.

  • Finally, I have an Ingress resource which is supposed to route www.kubeplaytime.example to the frontend and www.kubeplaytime.example/api to the backend. However, this is not working.
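
On the open question in the first bullet (loading images without Docker Hub): a minimal sketch, assuming a single-node dev cluster such as minikube - on a multi-node cluster you would stand up a private registry instead. These commands are standard minikube/docker usage, not something from the original post:

eval $(minikube docker-env)                         # point the docker CLI at the cluster's own daemon
docker build -t patientplatypus/frontend_example .  # the image now exists inside the cluster

With the image built in-cluster, set imagePullPolicy: Never on the container in the Deployment so Kubernetes uses the local image instead of trying to pull it from a registry.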

What happens when I run this?

patientplatypus:~/Documents/kubePlay:09:17:50$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
ingress.extensions "frontend" created

  • So first, it appears to create all the parts that I need, with no errors.

    patientplatypus:~/Documents/kubePlay:09:22:30$kubectl get --watch services

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

    backend LoadBalancer 10.0.18.174 <pending> 80:31649/TCP 1m

    frontend LoadBalancer 10.0.100.65 <pending> 80:32635/TCP 1m

    kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10d

    frontend LoadBalancer 10.0.100.65 138.91.126.178 80:32635/TCP 2m

    backend LoadBalancer 10.0.18.174 138.91.121.182 80:31649/TCP 2m

  • Second, if I watch the services, I eventually get IP addresses that I can use to navigate in my browser to these sites. Each of the above IP addresses works in routing me to the frontend and backend respectively.

HOWEVER

I reach an issue when I try and use the ingress controller - it seemingly deployed, but I don't know how to get there.

patientplatypus:~/Documents/kubePlay:09:24:44$kubectl get ingresses
NAME       HOSTS                      ADDRESS   PORTS     AGE
frontend   www.kubeplaytime.example             80        16m

  • So I have no address I can use, and www.kubeplaytime.example does not appear to work.
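
A hedged first debugging step at this point (standard kubectl, not something from the original post) is to describe the Ingress - the Address field and the Events list show whether any ingress controller has actually picked the resource up:

kubectl describe ingress frontend
# An empty ADDRESS, as in the get output above, usually means
# no ingress controller is watching the resource yet.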

It appears that, in order to route traffic to the Ingress resource I just created, I have to deploy a dedicated service and deployment for it (an ingress controller) just to get an IP address - but this starts to look incredibly complicated very quickly.

For example, take a look at this medium article: https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e.

It would appear that the code needed just to give the Ingress a service entry point (i.e. what he calls the Ingress Controller) is this:

---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
          - name: http
            containerPort: 80
            protocol: TCP
          - name: https
            containerPort: 443
            protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-default-backend
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nginx-default-backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-default-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP

This would seemingly need to be appended to my other yaml code above in order to get a service entry point for my ingress routing, and it does appear to give an IP:

patientplatypus:~/Documents/kubePlay:09:54:12$kubectl get --watch services
NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
backend                 LoadBalancer   10.0.31.209   <pending>     80:32428/TCP                 4m
frontend                LoadBalancer   10.0.222.47   <pending>     80:32482/TCP                 4m
ingress-nginx           LoadBalancer   10.0.28.157   <pending>     80:30573/TCP,443:30802/TCP   4m
kubernetes              ClusterIP      10.0.0.1      <none>        443/TCP                      10d
nginx-default-backend   ClusterIP      10.0.71.121   <none>        80/TCP                       4m
frontend   LoadBalancer   10.0.222.47   40.121.7.66   80:32482/TCP   5m
ingress-nginx   LoadBalancer   10.0.28.157   40.121.6.179   80:30573/TCP,443:30802/TCP   6m
backend   LoadBalancer   10.0.31.209   40.117.248.73   80:32428/TCP   7m

So ingress-nginx appears to be the site I want to get to. Navigating to 40.121.6.179 returns a default 404 message (default backend - 404) - it does not route to the frontend as the / rule ought to. /api returns the same. And navigating to my configured host, www.kubeplaytime.example, returns a 404 from the browser - no error handling.
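
One likely explanation for those 404s, worth checking: the Ingress rule is keyed to the Host header (www.kubeplaytime.example), and a browser pointed at the bare IP sends the IP itself as the Host, so nginx falls through to the default backend. A hedged way to test the rules without setting up DNS is to fake the header from the command line:

curl -H 'Host: www.kubeplaytime.example' http://40.121.6.179/
curl -H 'Host: www.kubeplaytime.example' http://40.121.6.179/api

(Alternatively, point www.kubeplaytime.example at 40.121.6.179 in /etc/hosts.)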

QUESTIONS

  • Is the Ingress Controller strictly necessary, and if so, is there a less complicated version of this?

  • I feel I am close - what am I doing wrong?

FULL YAML

Available here: https://gist.github.com/patientplatypus/fa07648339ee6538616cb69282a84938

Thanks for the help!

EDIT EDIT EDIT

I've attempted to use Helm. On the surface it appears to be a simple interface, so I tried spinning it up:

patientplatypus:~/Documents/kubePlay:12:13:00$helm install stable/nginx-ingress
NAME:   erstwhile-beetle
LAST DEPLOYED: Sun May  6 12:13:30 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                       DATA  AGE
erstwhile-beetle-nginx-ingress-controller  1     1s

==> v1/Service
NAME                                            TYPE          CLUSTER-IP   EXTERNAL-IP  PORT(S)                     AGE
erstwhile-beetle-nginx-ingress-controller       LoadBalancer  10.0.216.38  <pending>    80:31494/TCP,443:32118/TCP  1s
erstwhile-beetle-nginx-ingress-default-backend  ClusterIP     10.0.55.224  <none>       80/TCP                      1s

==> v1beta1/Deployment
NAME                                            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
erstwhile-beetle-nginx-ingress-controller       1        1        1           0          1s
erstwhile-beetle-nginx-ingress-default-backend  1        1        1           0          1s

==> v1beta1/PodDisruptionBudget
NAME                                            MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
erstwhile-beetle-nginx-ingress-controller       1              N/A              0                    1s
erstwhile-beetle-nginx-ingress-default-backend  1              N/A              0                    1s

==> v1/Pod(related)
NAME                                                             READY  STATUS             RESTARTS  AGE
erstwhile-beetle-nginx-ingress-controller-7df9b78b64-24hwz       0/1    ContainerCreating  0         1s
erstwhile-beetle-nginx-ingress-default-backend-849b8df477-gzv8w  0/1    ContainerCreating  0         1s


NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w erstwhile-beetle-nginx-ingress-controller'

An example Ingress that makes use of the controller:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

Seemingly this is really nice - it spins everything up and gives an example of how to add an Ingress. Since I spun Helm up on a blank cluster, I used the following yaml file to add in what I thought would be required.

The file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /api
            backend:
              serviceName: backend
              servicePort: 80
          - path: /
            frontend:
              serviceName: frontend
              servicePort: 80

Deploying this to the cluster however runs into this error:

patientplatypus:~/Documents/kubePlay:11:44:20$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
error: error validating "kube-deploy.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[1]): unknown field "frontend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath, ValidationError(Ingress.spec.rules[0].http.paths[1]): missing required field "backend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath]; if you choose to ignore these errors, turn validation off with --validate=false
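
For what it's worth, the validation error pinpoints the problem exactly: paths[1] in the Ingress uses the key frontend:, but an HTTPIngressPath only has path and backend fields. The second path entry presumably should read:

          - path: /
            backend:
              serviceName: frontend
              servicePort: 80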

Setting that one typo aside, the question then becomes: well, crap, how do I debug this sort of thing in general? If you spit out the code that Helm produces, it's basically unreadable by a person - there's no way to go in there and figure out what's going on.

Check it out: https://gist.github.com/patientplatypus/0e281bf61307f02e16e0091397a1d863 - over 1000 lines!

If anyone has a better way to debug a Helm deploy, add it to the list of open questions.
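
One hedged suggestion (these are standard Helm 2 commands, not something from the thread): render a chart without installing it, or dump what an installed release actually applied, and you at least get the concrete manifests that Kubernetes is validating:

helm install stable/nginx-ingress --dry-run --debug   # renders the templates locally, installs nothing
helm get manifest erstwhile-beetle                     # prints exactly what a named release created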

EDIT EDIT EDIT EDIT

To simplify things to the extreme, I attempt to make a call from one pod to another using only the service name (both pods live in the same namespace).

So here is my React code where I make the http request:

import axios from 'axios';

axios.get('http://backend/test')
.then(response=>{
  console.log('return from backend and response: ', response);
})
.catch(error=>{
  console.log('return from backend and error: ', error);
})

I've also attempted to use http://backend.exampledeploy.svc.cluster.local/test without luck.
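
For reference, backend.exampledeploy.svc.cluster.local is the standard in-cluster DNS form (<service>.<namespace>.svc.cluster.local), and it only resolves from inside the cluster - which hints at the problem below. A hedged way to verify resolution from a throwaway pod (standard kubectl; the pod name is illustrative):

kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup backend.exampledeploy.svc.cluster.local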

Here is my node code handling the get:

// Express route on the backend ('router' is an express.Router instance)
router.get('/test', function(req, res, next) {
  res.json({"test":"test"})
});

Here is the yaml file that I'm uploading to the cluster:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  namespace: exampledeploy
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: exampledeploy
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  namespace: exampledeploy
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: exampledeploy
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000

The upload to the cluster appears to work, as I can see in my terminal:

patientplatypus:~/Documents/kubePlay:14:33:20$kubectl get all --namespace=exampledeploy 
NAME                            READY     STATUS    RESTARTS   AGE
pod/backend-584c5c59bc-5wkb4    1/1       Running   0          15m
pod/backend-584c5c59bc-jsr4m    1/1       Running   0          15m
pod/backend-584c5c59bc-txgw5    1/1       Running   0          15m
pod/frontend-647c99cdcf-2mmvn   1/1       Running   0          15m
pod/frontend-647c99cdcf-79sq5   1/1       Running   0          15m
pod/frontend-647c99cdcf-r5bvg   1/1       Running   0          15m

NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
service/backend    LoadBalancer   10.0.112.160   168.62.175.155   80:31498/TCP   15m
service/frontend   LoadBalancer   10.0.246.212   168.62.37.100    80:31139/TCP   15m

NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/backend    3         3         3            3           15m
deployment.extensions/frontend   3         3         3            3           15m

NAME                                        DESIRED   CURRENT   READY     AGE
replicaset.extensions/backend-584c5c59bc    3         3         3         15m
replicaset.extensions/frontend-647c99cdcf   3         3         3         15m

NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/backend    3         3         3            3           15m
deployment.apps/frontend   3         3         3            3           15m

NAME                                  DESIRED   CURRENT   READY     AGE
replicaset.apps/backend-584c5c59bc    3         3         3         15m
replicaset.apps/frontend-647c99cdcf   3         3         3         15m

However, when I attempt to make the request I get the following error:

return from backend and error:  
Error: Network Error
Stack trace:
createError@http://168.62.37.100/static/js/bundle.js:1555:15
handleError@http://168.62.37.100/static/js/bundle.js:1091:14
App.js:14

Since the axios call is being made from the browser, I'm wondering if it is simply not possible to use this method to call the backend, even though the backend and the frontend are in different pods. I'm a little lost, as I thought this was the simplest possible way to network pods together.

EDIT X5

I've determined that it is possible to curl the backend from the command line by exec'ing into the pod like this:

patientplatypus:~/Documents/kubePlay:15:25:25$kubectl exec -ti frontend-647c99cdcf-5mfz4 --namespace=exampledeploy -- curl -v http://backend/test
* Hostname was NOT found in DNS cache
*   Trying 10.0.249.147...
* Connected to backend (10.0.249.147) port 80 (#0)
> GET /test HTTP/1.1
> User-Agent: curl/7.38.0
> Host: backend
> Accept: */*
> 
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 15
< ETag: W/"f-SzkCEKs7NV6rxiz4/VbpzPnLKEM"
< Date: Sun, 06 May 2018 20:25:49 GMT
< Connection: keep-alive
< 
* Connection #0 to host backend left intact
{"test":"test"}

What this means, without a doubt, is that because the frontend code is executed in the browser, it cannot reach the backend over plain in-cluster pod networking - the browser has no access to cluster DNS or cluster IPs. The frontend's HTTP requests need an externally reachable entry point into the pods. I was unsure of this before, but it means something like an Ingress (or an externally exposed Service) is necessary.

SOLUTION

As it turns out, I was over-complicating things. Here is the Kubernetes file that does what I want. You can do this using two deployments (frontend and backend) and one Service entry point. As far as I can tell, a Service can load balance to many (not just 2) different deployments, which means for practical development this should be a good start for microservice development. One of the benefits of an Ingress approach is that it allows the use of path names rather than port numbers, but given the difficulty it doesn't seem practical in development.

Here is the yaml file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: exampleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exampleapp
  template:
    metadata:
      labels:
        app: exampleapp
    spec:
      containers:
      - name: nginx
        image: patientplatypus/kubeplayfrontend
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: exampleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exampleapp
  template:
    metadata:
      labels:
        app: exampleapp
    spec:
      containers:
      - name: nginx
        image: patientplatypus/kubeplaybackend
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: entrypt
spec:
  type: LoadBalancer
  ports:
  - name: backend
    port: 8080
    targetPort: 5000
  - name: frontend
    port: 81
    targetPort: 3000
  selector:
    app: exampleapp

Here are the bash commands I use to get it to spin up (you may have to add a login command - docker login - to push to dockerhub):

#!/bin/bash

# stop all containers
echo stopping all containers
docker stop $(docker ps -aq)
# remove all containers
echo removing all containers
docker rm $(docker ps -aq)
# remove all images
echo removing all images
docker rmi $(docker images -q)

echo building backend
cd ./backend
docker build -t patientplatypus/kubeplaybackend .
echo push backend to dockerhub
docker push patientplatypus/kubeplaybackend:latest

echo building frontend
cd ../frontend
docker build -t patientplatypus/kubeplayfrontend .
echo push frontend to dockerhub
docker push patientplatypus/kubeplayfrontend:latest

echo now working on kubectl
cd ..
echo deleting previous variables
kubectl delete pods,deployments,services entrypt backend frontend
echo creating deployment
kubectl create -f kube-deploy.yaml
echo watching services spin up
kubectl get services --watch

The actual code is just a frontend React app making an axios HTTP call to a backend Node route in componentDidMount of the starting App page.

You can also see a working example here: https://github.com/patientplatypus/KubernetesMultiPodCommunication

Thanks again everyone for your help.
