Does Alpine have a known DNS issue within Kubernetes?


Question

Lately, we've faced some DNS issues with micro-services based on the Alpine image (node:12.18.1-alpine) on EKS when trying to resolve "big" DNS queries (when the answer is larger than 512 bytes, the classic limit for a DNS answer over UDP).

So I've tried running this script to test the DNS resolution:

var dns = require('dns');

// dns.lookup() resolves through the system resolver (getaddrinfo),
// i.e. through musl on Alpine, not by querying DNS servers directly.
dns.lookup('hugedns.test.dziemba.net', function (err, address, family) {
  if (err) return console.error(err);
  console.log(address);
});

with 2 different scenarios for each image:

  1. node:12.18.1-alpine

  • Running the image on my laptop: resolved successfully
  • Running the image on EKS 1.16: failed to resolve

  2. node:12.18.1-slim

  • Running the image on my laptop: resolved successfully
  • Running the image on EKS 1.16: resolved successfully

From what I saw, Alpine uses the musl library (which doesn't support DNS over TCP?) instead of glibc; the DNS protocol uses UDP and only falls back to TCP when the answer is larger than 512 bytes. So my theory was that this is the root cause, but since it works on my laptop and fails on EKS, I wonder where the issue could lie...
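One way to narrow this down is to compare dns.lookup() with Node's dns.resolve4(), which queries the configured DNS servers directly through c-ares instead of going through the libc resolver, so it should not be affected by musl at all. A minimal sketch of that cross-check:

var dns = require('dns');

// dns.resolve4() bypasses getaddrinfo() (and therefore musl) and talks
// to the DNS servers directly via c-ares, which retries over TCP when
// an answer comes back truncated.
dns.resolve4('hugedns.test.dziemba.net', function (err, addresses) {
  if (err) return console.error('resolve4 failed:', err.code);
  console.log('resolve4 returned ' + addresses.length + ' addresses');
});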

Any ideas?

EKS v1.16, coredns v1.6.6

BTW, this is my first post; let me know if any additional information is needed.

Answer

Yes, the Alpine images are known to be problematic in Kubernetes clusters concerning DNS queries.

It is not clear whether the bug has been effectively fixed in any current version of Alpine.

I encountered this problem in my own Kubernetes clusters as of January 2021 with up-to-date Alpine 3.12 images, so I would assume it is not fixed.

The core problem seems to be that the musl library stops searching among the possible domains specified in the search directive of /etc/resolv.conf for a given name as soon as any response is unexpected (basically anything that does not clearly indicate that the FQDN was found, or could not be found).

This does not play well with the Kubernetes strategy for name resolution in pods.

Indeed, the typical /etc/resolv.conf of a pod in the example namespace looks like this:

      nameserver 10.3.0.10
      search example.svc.cluster.local svc.cluster.local cluster.local
      options ndots:5
      

The strategy is that the resolution of a name, for instance my-service or www.google.com, will be tested against each of the domains specified in the search directive. For the example above, that gives the FQDN chains my-service.example.svc.cluster.local, my-service.svc.cluster.local, my-service.cluster.local, my-service and www.google.com.example.svc.cluster.local, www.google.com.svc.cluster.local, www.google.com.cluster.local, www.google.com. Here, obviously, it is the first FQDN of the first chain (my-service.example.svc.cluster.local) and the last FQDN of the second chain (www.google.com) that resolve correctly.

One can see that this strategy is designed to optimize the resolution of names internal to the cluster, so that names like my-service, my-service.my-namespace or my-service.my-namespace.svc resolve nicely out of the box.

The ndots parameter in the options directive defines the minimum number of dots a name must contain to be considered an actual FQDN, in which case the search chain is skipped in favor of a direct DNS resolution attempt. With ndots:2, www.google.com would be considered a FQDN while my-service.my-namespace would go through the search chain.
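To make the mechanics concrete, here is a small illustrative sketch (not the actual musl or glibc code; candidateNames is a made-up helper) of how a resolver turns a name, the search list and ndots into an ordered list of candidate FQDNs:

// Illustrative only: mimics how a libc resolver orders candidate names
// based on the ndots option and the search directive of /etc/resolv.conf.
function candidateNames(name, searchDomains, ndots) {
  var dots = (name.match(/\./g) || []).length;
  var viaSearch = searchDomains.map(function (d) { return name + '.' + d; });
  // Enough dots: the name is tried as-is first, the search list after.
  // Too few dots: the search list is walked first, the bare name last.
  return dots >= ndots ? [name].concat(viaSearch) : viaSearch.concat([name]);
}

// With ndots:5, even www.google.com walks the whole search chain first.
console.log(candidateNames('www.google.com',
  ['example.svc.cluster.local', 'svc.cluster.local', 'cluster.local'], 5));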

Given the search directive listing 3 possible domains, the fact that even an obvious URL like www.google.com is not considered a FQDN because of ndots:5, and the break of the search loop in Alpine's musl library, all of this dramatically increases the probability of a host resolution failure for an Alpine image running in Kubernetes. If your host resolution is part of some kind of loop running regularly, you will encounter a lot of failures that need to be handled.

What can be done about it?

• you can use a dnsConfig/dnsPolicy to reduce ndots, so that shorter names are considered FQDNs and skip the search loop (see https://pracucci.com/kubernetes-dns-resolution-ndots-options-and-why-it-may-affect-application-performances.html, and the sketch after this list)
• you can add an entrypoint script to your image that modifies /etc/resolv.conf according to your needs; for instance sed -r "s/^(search.*|options.*)/#\1/" /etc/resolv.conf > /tmp/resolv && cat /tmp/resolv > /etc/resolv.conf will comment out the search and options directives, if your process does not rely on any internal name of the cluster
• direct editing of /etc/hosts to hardcode some FQDNs to their known IPs
• move away from Alpine images and go for Ubi or Debian/Ubuntu based images
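For the first option, a minimal sketch of what the dnsConfig part of a pod spec could look like (the pod and container names here are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: my-app                # placeholder
spec:
  containers:
    - name: my-app            # placeholder
      image: node:12.18.1-slim
  dnsConfig:
    options:
      - name: ndots
        value: "1"            # any name containing a dot is tried as a FQDN first

With the default ClusterFirst dnsPolicy, these options are merged into the pod's /etc/resolv.conf, so bare service names still resolve while dotted external names skip the search chain.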

Personally, I started with Alpine, like a lot of us in the early days of Docker industrialization, because the other full OS images were insanely big. This is mostly no longer the case, with well-tested slim images for Ubuntu and Debian, or even Kubernetes-centric initiatives like Ubi. That is why I usually go for the last alternative (moving away from Alpine images).

