Docker swarm cluster and elasticsearch, using constraints to bind a service to a specific node


Question

I was hoping someone here might be able to give me some input on a problem I'm having.

I have a Docker swarm cluster with 3 nodes and want to run the ELK stack on it, but I am not sure how to store my data.

version: '3'
services:
  master01:
    image: elasticsearch:5.2.2
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - es
    volumes:
      - /es/data:/usr/share/elasticsearch/data
    command: >
      elasticsearch
      -E network.host=_eth0_
      -E node.master=true
      -E discovery.zen.ping.unicast.hosts=es_master01,es_master02,es_master03
      -E discovery.zen.minimum_master_nodes=3
      -E cluster.name=ElasticCluster
      -E node.name=es_master01
      -E transport.tcp.port=9300
      -E http.port=9200
      -E node.max_local_storage_nodes=3
    deploy:
      replicas: 1

  master02:
    image: elasticsearch:5.2.2
    ports:
      - 9201:9200
      - 9301:9300
    networks:
      - es
    volumes:
      - /es/data:/usr/share/elasticsearch/data
    command: >
      elasticsearch
      -E network.host=_eth0_
      -E node.master=true
      -E discovery.zen.ping.unicast.hosts=es_master01,es_master02,es_master03
      -E discovery.zen.minimum_master_nodes=3
      -E cluster.name=ElasticCluster
      -E node.name=es_master02
      -E transport.tcp.port=9300
      -E http.port=9200
      -E node.max_local_storage_nodes=3
    deploy:
      replicas: 1

  master03:
    image: elasticsearch:5.2.2
    ports:
      - 9202:9200
      - 9302:9300
    networks:
      - es
    volumes:
      - /es/data:/usr/share/elasticsearch/data
    command: >
      elasticsearch
      -E network.host=_eth0_
      -E node.master=true
      -E discovery.zen.ping.unicast.hosts=es_master01,es_master02,es_master03
      -E discovery.zen.minimum_master_nodes=3
      -E cluster.name=ElasticCluster
      -E node.name=es_master03
      -E transport.tcp.port=9300
      -E http.port=9200
      -E node.max_local_storage_nodes=3
    deploy:
      replicas: 1

  logstash:
    image: logstash:5.2.2
    ports:
      - 5000:5000
    networks:
      - es
    command: >
      logstash -e 'input { tcp { port => 5000 } } output { elasticsearch { hosts => "master01:9200" } }'
    deploy:
      replicas: 1

  kibana:
    image: kibana:5.2.2
    ports:
      - 5601:5601
    environment:
      SERVER_NAME: "kibana"
      SERVER_HOST: "0"
      ELASTICSEARCH_URL: "http://elastic:changeme@master01:9200"
      ELASTICSEARCH_USERNAME: "elastic"
      ELASTICSEARCH_PASSWORD: "changeme"
      XPACK_SECURITY_ENABLED: "true"
      XPACK_MONITORING_ENABLED: "true"
    networks:
      - es
    depends_on:
      - master01
    deploy:
      replicas: 1

networks:
  es:
    driver: overlay

It actually works, apart from the fact that my master01, 02 and 03 services are placed randomly and can be moved around between the 3 nodes. When a service is recreated on a different node it can't find its old data there, so the Elasticsearch cluster replicates the data onto that node again. Over time this means my data ends up existing three times.

I haven't been able to use constraints properly to bind the 3 Elasticsearch services to one node each, and I can't really seem to find anything that works when searching.

I've tried using environment: "constraint:node==node1", but it doesn't seem to have any effect at all when deploying from my compose file.
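
From what I can gather, that environment: "constraint:node==..." form is the old standalone-Swarm syntax and is ignored in swarm mode; with docker stack deploy the constraint apparently has to go under deploy: placement: in a version 3 compose file. A minimal sketch of what I would expect to work, assuming the swarm hosts are literally named node1, node2 and node3 (the hostnames are placeholders):

  master01:
    image: elasticsearch:5.2.2
    deploy:
      replicas: 1
      placement:
        constraints:
          # pin this service to one specific swarm node by hostname
          - node.hostname == node1

master02 and master03 would get the same block with node2 and node3 respectively; node labels (added with docker node update --label-add) could be used in the constraint instead of hostnames.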

I've searched around and found some examples of how to do it with docker service create, but I can't seem to find a functioning syntax.
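
For completeness, the docker service create equivalent would, as far as I understand it, look something like this (the service and network names are just placeholders mirroring my compose file):

  docker service create \
    --name es_master01 \
    --network es \
    --constraint 'node.hostname == node1' \
    --mount type=bind,source=/es/data,target=/usr/share/elasticsearch/data \
    --publish 9200:9200 --publish 9300:9300 \
    elasticsearch:5.2.2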

First time posting here, so if I did something wrong, please be gentle.

Solution

I may have found a solution myself.

http://embaby.com/blog/using-glusterfs-docker-swarm-cluster/

GlusterFS might be the perfect solution; I will post the results tomorrow if it solves my problem.
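
The idea, if I'm reading that post correctly, is to mount a replicated GlusterFS volume at the same path on all three hosts and point the bind mounts there, so the data is still present when a service is rescheduled onto another node. Roughly (the volume name gv0 and the mount point /mnt/gluster are just assumptions on my part):

  # on every swarm node, mount the replicated Gluster volume
  mount -t glusterfs localhost:/gv0 /mnt/gluster

  # then, in the compose file, give each master its own sub-directory (master01 shown)
  volumes:
    - /mnt/gluster/es/master01:/usr/share/elasticsearch/data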
