Prometheus DNS service discovery in Docker Swarm


Question

I'm searching for monitoring & alerting solutions for my services, and I found the following nice related projects.

Both projects use DNS service discovery to monitor multiple replicas of a service.

I've tried to reproduce these setups, but I found I can only get a single backend container IP:

# dig A node-exporter

; <<>> DiG 9.10.4-P8 <<>> A node-exporter
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18749
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;node-exporter.         IN  A

;; ANSWER SECTION:
node-exporter.      600 IN  A   10.0.0.42

;; Query time: 0 msec
;; SERVER: 127.0.0.11#53(127.0.0.11)
;; WHEN: Mon Jan 29 02:57:51 UTC 2018
;; MSG SIZE  rcvd: 60

When I inspect the service, I find that the endpoint mode of node-exporter is vip:

> docker inspect 242pn4obqsly
...
"Endpoint": {
"Spec": {
    "Mode": "vip"
},
"VirtualIPs": [
    {
        "NetworkID": "61fn8hmgwg0n7rhg49ju2fdld",
        "Addr": "10.0.0.3/24"
    }
]
...

This means that when Prometheus queries DNS, it only gets a single delegate (virtual) IP for the service; the swarm's internal load-balancing strategy then routes incoming requests to the different backend instances.

So how do the related projects succeed?

Thanks!

Answer

For Prometheus DNS service discovery, you don't want Docker Swarm's internal load balancing via a Virtual IP (VIP).

What you're looking for is per-task service DNS. To get the IP address of every task of a service in your swarm, just prefix the Docker Swarm service name with tasks. in the DNS query.
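As a sketch of how this is typically wired up (the service and network names here are illustrative assumptions, not taken from the original post), a stack file that makes tasks.node-exporter resolvable from Prometheus might look like:

```yaml
# Hypothetical stack file sketch -- names are illustrative.
version: "3.7"

services:
  node-exporter:
    image: prom/node-exporter
    deploy:
      mode: global            # one task per swarm node
    networks:
      - monitoring

  prometheus:
    image: prom/prometheus
    networks:
      - monitoring            # must share an overlay network with
                              # node-exporter for tasks.node-exporter
                              # to resolve inside the container

networks:
  monitoring:
    driver: overlay
```

Note that tasks.&lt;service&gt; only resolves from containers attached to the same overlay network as the service, which is why both services join the monitoring network above.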

For instance, in a swarm with 3 nodes, I get:

$ nslookup tasks.node-exporter
Server:    127.0.0.11
Address 1: 127.0.0.11

Name:      tasks.node-exporter
Address 1: 10.210.0.x node-exporter.xxx.mynet
Address 2: 10.210.0.y node-exporter.yyy.mynet
Address 3: 10.210.0.z node-exporter.zzz.mynet

But when I query the service name without the prefix, I get a single IP (the VIP that load-balances requests across the containers):

$ nslookup node-exporter
Server:    127.0.0.11
Address 1: 127.0.0.11

Name:      node-exporter
Address 1: 10.210.0.w ip-x-x-x-x

You can have a look at this Q/A on SO showing 3 different ways of getting DNS resolution in Docker Swarm. Basically, for a service named myservice in the swarm:

  • myservice resolves to the Virtual IP (VIP) of that service, which is internally load balanced across the individual task IP addresses.

  • tasks.myservice resolves to the private IP of each container (task) deployed in the swarm.

  • docker.com does not exist as a service name, so the request is forwarded to the configured default DNS server (which you can customize).

Note: container names resolve as well, albeit directly to their IP addresses.
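The per-task lookup that Prometheus performs for an A-record dns_sd_configs entry is essentially a plain DNS query that may return several addresses. A minimal Python sketch of that behavior (the helper name dns_sd_targets is hypothetical, not part of Prometheus):

```python
import socket

def dns_sd_targets(name: str, port: int) -> list[str]:
    """Resolve every IPv4 A record for `name` and build host:port scrape
    targets, roughly what Prometheus dns_sd_configs with type 'A' does."""
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    # getaddrinfo may repeat addresses; dedupe while preserving order.
    seen, targets = set(), []
    for *_, sockaddr in infos:
        ip = sockaddr[0]
        if ip not in seen:
            seen.add(ip)
            targets.append(f"{ip}:{port}")
    return targets

# Inside the swarm, dns_sd_targets("tasks.node-exporter", 9100) would return
# one target per task; outside, any resolvable name works:
print(dns_sd_targets("localhost", 9100))
```

Querying the bare service name instead of tasks.&lt;service&gt; would make this return only the single VIP, which is exactly the problem described in the question.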

Looking at the links you provided, the node-exporter configuration uses the tasks way of reaching services.

Using the exporter's service name, you can configure DNS discovery:

scrape_configs:
  - job_name: 'node-exporter'
    dns_sd_configs:
      - names:
          - 'tasks.node-exporter'
        type: 'A'
        port: 9100

Hope this helps!
