Prometheus DNS service discovery in docker swarm


Problem description



I'm searching for a monitoring & alerting solution for my services. I found the following nice related works.

Both works use DNS service discovery to monitor multiple replicas of services.

I've tried to reproduce these works, but I found I can only get a single backend container IP.

# dig A node-exporter

; <<>> DiG 9.10.4-P8 <<>> A node-exporter
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18749
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;node-exporter.         IN  A

;; ANSWER SECTION:
node-exporter.      600 IN  A   10.0.0.42

;; Query time: 0 msec
;; SERVER: 127.0.0.11#53(127.0.0.11)
;; WHEN: Mon Jan 29 02:57:51 UTC 2018
;; MSG SIZE  rcvd: 60

When I inspect the service, I find that the endpoint mode of node-exporter is vip.

> docker inspect 242pn4obqsly
...
"Endpoint": {
    "Spec": {
        "Mode": "vip"
    },
    "VirtualIPs": [
        {
            "NetworkID": "61fn8hmgwg0n7rhg49ju2fdld",
            "Addr": "10.0.0.3/24"
        }
    ]
...

This means that when it queries DNS, Prometheus can only get a single delegate service IP. The internal load-balancing strategy then routes incoming requests to the different backend instances.

Then how do the related works succeed?

Thx!

Solution

For Prometheus DNS service discovery, you don't want to use docker swarm's internal load balancing via the Virtual IP (VIP).

What you're looking for is per-task service DNS. To get the IP addresses of every task of a service in your swarm, just prefix your docker swarm service name with "tasks.".

For instance, in a swarm with 3 nodes, I get:

$ nslookup tasks.node-exporter
Server:    127.0.0.11
Address 1: 127.0.0.11

Name:      tasks.node-exporter
Address 1: 10.210.0.x node-exporter.xxx.mynet
Address 2: 10.210.0.y node-exporter.yyy.mynet
Address 3: 10.210.0.z node-exporter.zzz.mynet

But when I query the service name with no prefix, I get one IP (the VIP one that load balances requests to every container):

$ nslookup node-exporter
Server:    127.0.0.11
Address 1: 127.0.0.11

Name:      node-exporter
Address 1: 10.210.0.w ip-x-x-x-x

You can have a look at this Q/A on SO showing 3 different ways of getting a DNS resolution in docker swarm. Basically, for a service named myservice in docker swarm:

  • myservice resolves to the Virtual IP (VIP) of that service which is internally load balanced to the individual task IP addresses.

  • tasks.myservice resolves to the private IP of each container deployed in the swarm.

  • docker.com does not exist as a service name and so the request is forwarded to the configured default DNS server (that you can customize).

Note: Container names resolve as well, albeit directly to their IP addresses.
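
The target-building step that Prometheus performs for A-record DNS discovery can be sketched as: resolve the name to all its A records, then pair each IP with the configured port. This is an illustrative sketch, not Prometheus's actual implementation; the function name, the stub resolver, and the IP addresses are made up for the example.

```python
import socket

def dns_sd_targets(name, port, resolve=None):
    """Return 'ip:port' scrape targets for every A record of `name`."""
    if resolve is None:
        # gethostbyname_ex returns (hostname, aliases, ip_list);
        # inside a swarm container this would hit the 127.0.0.11 resolver.
        resolve = lambda n: socket.gethostbyname_ex(n)[2]
    return [f"{ip}:{port}" for ip in resolve(name)]

# Stub resolver standing in for the swarm DNS, mirroring the nslookup
# output above: the tasks. name yields one IP per task, the bare name
# yields the single VIP.
fake_swarm_dns = {
    "tasks.node-exporter": ["10.210.0.1", "10.210.0.2", "10.210.0.3"],
    "node-exporter": ["10.210.0.9"],  # the VIP
}

print(dns_sd_targets("tasks.node-exporter", 9100, fake_swarm_dns.get))
# → ['10.210.0.1:9100', '10.210.0.2:9100', '10.210.0.3:9100']
```

With the tasks. prefix every replica becomes its own scrape target; with the bare service name only the VIP would be scraped, and each scrape would be load-balanced to an arbitrary replica.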

Looking at the links you provided, the node-exporter configuration uses the tasks. way of reaching services:

Using the exporter's service name, you can configure DNS discovery:

scrape_configs:
  - job_name: 'node-exporter'
    dns_sd_configs:
      - names:
          - 'tasks.node-exporter'
        type: 'A'
        port: 9100
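
For tasks.node-exporter to resolve from the Prometheus container, both services must be attached to the same overlay network. A minimal stack-file sketch (service, network, and image tags are illustrative, not from the question):

```yaml
version: "3.7"
services:
  node-exporter:
    image: prom/node-exporter
    deploy:
      mode: global            # one task per swarm node
    networks: [monitoring]
  prometheus:
    image: prom/prometheus
    networks: [monitoring]    # same network, so tasks.node-exporter resolves
networks:
  monitoring:
    driver: overlay
```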

Hope this helps!
