How to set up two machines for a cluster with two nodes on each machine


Problem Description



I have two dedicated machines for ES (2.2.0). The two machines have the same specs. Each runs on a Windows Server 2012 R2 and has 128GB memory. Regarding ES, I plan to have TWO nodes on each machine for the cluster.

I am looking at elasticsearch.yml to see how to configure each node to form a cluster.

The two machines are on the same network with the following server names and IP addresses:

SRC01, 172.21.0.21
SRC02, 172.21.0.22

I am looking at elasticsearch.yml and I am not sure how to set things up. I guess I need to set proper values in the Network and Discovery sections of elasticsearch.yml:

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# --------------------------------- Discovery ----------------------------------
#
# Elasticsearch nodes will find each other via unicast, by default.
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#

I searched the web and SO hoping to find a complete config example to start from, but failed to find one.

Any input or pointer is really appreciated.

UPDATE

With Val's help, here is the minimal elasticsearch.yml that I have on the four nodes (2 on each machine) after testing:

#----------SRC01, node 1---------
cluster.name: elastic
node.name: elastic_src01_1
network.host: 172.21.0.21
discovery.zen.ping.unicast.hosts: ["172.21.0.21","172.21.0.22"]


#----------SRC01, node 2---------
cluster.name: elastic
node.name: elastic_src01_2
network.host: 172.21.0.21
discovery.zen.ping.unicast.hosts: ["172.21.0.21","172.21.0.22"]


#----------SRC02, node 1---------
cluster.name: elastic
node.name: elastic_src02_1
network.host: 172.21.0.22
discovery.zen.ping.unicast.hosts: ["172.21.0.21","172.21.0.22"]


#----------SRC02, node 2---------
cluster.name: elastic
node.name: elastic_src02_2
network.host: 172.21.0.22
discovery.zen.ping.unicast.hosts: ["172.21.0.21","172.21.0.22"]

Here are the questions I have:

  1. I started Node elastic_src01_1 and then Node elastic_src01_2; they are on the same machine. When starting elastic_src01_2, I can see the following ES-generated messages (detected_master).

Logs excerpt:

[2016-02-28 12:38:33,155][INFO ][node                     ] [elastic_src01_2] version[2.2.0], pid[4620], build[8ff36d1/2016-01-27T13:32:39Z]
[2016-02-28 12:38:33,155][INFO ][node                     ] [elastic_src01_2] initializing ...
[2016-02-28 12:38:33,546][INFO ][plugins                  ] [elastic_src01_2] modules [lang-expression, lang-groovy], plugins [], sites []
[2016-02-28 12:38:33,562][INFO ][env                      ] [elastic_src01_2] using [1] data paths, mounts [[Data (E:)]], net usable_space [241.7gb],
net total_space [249.9gb], spins? [unknown], types [NTFS]
[2016-02-28 12:38:33,562][INFO ][env                      ] [elastic_src01_2] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-02-28 12:38:35,077][INFO ][node                     ] [elastic_src01_2] initialized
[2016-02-28 12:38:35,077][INFO ][node                     ] [elastic_src01_2] starting ...
[2016-02-28 12:38:35,218][INFO ][transport                ] [elastic_src01_2] publish_address {172.21.0.21:9302}, bound_addresses {172.21.0.21:9302}
[2016-02-28 12:38:35,218][INFO ][discovery                ] [elastic_src01_2] elastic/N8r-gD9WQSSvAYMOlJzmIg
[2016-02-28 12:38:39,796][INFO ][cluster.service          ] [elastic_src01_2] detected_master {elastic_src01_1}{UWGAo0BKTQm2f650nyDKYg}{172.21.0.21}{1
72.21.0.21:9300}, added {{elastic_src01_1}{UWGAo0BKTQm2f650nyDKYg}{172.21.0.21}{172.21.0.21:9300},{elastic_src01_1}{qNDQjkmsRjiIVjZ88JsX4g}{172.21.0.2
1}{172.21.0.21:9301},}, reason: zen-disco-receive(from master [{elastic_src01_1}{UWGAo0BKTQm2f650nyDKYg}{172.21.0.21}{172.21.0.21:9300}])
[2016-02-28 12:38:39,843][INFO ][http                     ] [elastic_src01_2] publish_address {172.21.0.21:9202}, bound_addresses {172.21.0.21:9202}
[2016-02-28 12:38:39,843][INFO ][node                     ] [elastic_src01_2] started

However, when I started Node 1 on the SRC02 machine, I did not see a detected_master message. Here is what ES generated:

[2016-02-28 12:22:52,256][INFO ][node                     ] [elastic_src02_1] version[2.2.0], pid[6432], build[8ff36d1/2016-01-27T13:32:39Z]
[2016-02-28 12:22:52,256][INFO ][node                     ] [elastic_src02_1] initializing ...
[2016-02-28 12:22:52,662][INFO ][plugins                  ] [elastic_src02_1] modules [lang-expression, lang-groovy], plugins [], sites []
[2016-02-28 12:22:52,693][INFO ][env                      ] [elastic_src02_1] using [1] data paths, mounts [[Data (E:)]], net usable_space [241.6gb], net total_
space [249.8gb], spins? [unknown], types [NTFS]
[2016-02-28 12:22:52,693][INFO ][env                      ] [elastic_src02_1] heap size [910.5mb], compressed ordinary object pointers [true]
[2016-02-28 12:22:54,193][INFO ][node                     ] [elastic_src02_1] initialized
[2016-02-28 12:22:54,193][INFO ][node                     ] [elastic_src02_1] starting ...
[2016-02-28 12:22:54,334][INFO ][transport                ] [elastic_src02_1] publish_address {172.21.0.22:9300}, bound_addresses {172.21.0.22:9300}
[2016-02-28 12:22:54,334][INFO ][discovery                ] [elastic_src02_1] elastic/SNvuAfnxQV-RW430zLF6Vg
[2016-02-28 12:22:58,912][INFO ][cluster.service          ] [elastic_src02_1] new_master {elastic_src02_1}{SNvuAfnxQV-RW430zLF6Vg}{172.21.0.22}{172.21.0.22:9300
}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-02-28 12:22:58,943][INFO ][gateway                  ] [elastic_src02_1] recovered [0] indices into cluster_state
[2016-02-28 12:22:58,959][INFO ][http                     ] [elastic_src02_1] publish_address {172.21.0.22:9200}, bound_addresses {172.21.0.22:9200}
[2016-02-28 12:22:58,959][INFO ][node                     ] [elastic_src02_1] started

Did the node on the SRC02 machine really form a cluster with the nodes on the SRC01 machine?
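The two log excerpts above already hint at the answer: elastic_src01_2 logged `detected_master` (it joined an existing cluster), while elastic_src02_1 logged `new_master ... elected_as_master, [0] joins received` (it elected itself and started its own cluster). A small sketch of that heuristic, using abbreviated versions of the log lines above:

```python
# Heuristic check of ES 2.x startup logs: a node that joined an existing
# cluster logs "detected_master"; a node that started its own cluster
# elects itself and logs "new_master ... [0] joins received".
def joined_existing_cluster(log_lines):
    for line in log_lines:
        if "detected_master" in line:
            return True
        if "new_master" in line and "[0] joins received" in line:
            return False
    return False  # no master event seen; treat as "did not join"

# Abbreviated lines from the excerpts above:
src01_log = ["[cluster.service] [elastic_src01_2] detected_master {elastic_src01_1}..."]
src02_log = ["[cluster.service] [elastic_src02_1] new_master {elastic_src02_1}..., "
             "reason: zen-disco-join(elected_as_master, [0] joins received)"]

print(joined_existing_cluster(src01_log))  # True  -> joined elastic_src01_1's cluster
print(joined_existing_cluster(src02_log))  # False -> formed its own, separate cluster
```

So no: at this point SRC02's node had formed a second, independent cluster (UPDATE 3 below reveals why).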

  2. On the same machine (SRC01), if I add

discovery.zen.minimum_master_nodes: 3

to the elasticsearch.yml files of nodes elastic_src01_1 and elastic_src01_2, then when starting the second node elastic_src01_2 on machine SRC01, I am unable to see detected_master in the ES-generated messages.

Does this mean elastic_src01_1 and elastic_src01_2 cannot form a cluster?

Thanks for the help!

UPDATE 2

The SRC01 and SRC02 machines can see each other. Here are ping results from SRC02 to SRC01:

C:\Users\Administrator>ping 172.21.0.21

Pinging 172.21.0.21 with 32 bytes of data:
Reply from 172.21.0.21: bytes=32 time<1ms TTL=128
Reply from 172.21.0.21: bytes=32 time<1ms TTL=128
Reply from 172.21.0.21: bytes=32 time<1ms TTL=128
Reply from 172.21.0.21: bytes=32 time<1ms TTL=128

UPDATE 3

The problem is resolved. The reason my setup was not working is that the server's firewall was blocking communication on ports 9200 and 9300.
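Firewall issues like this can be caught before touching Elasticsearch by checking TCP reachability of the HTTP port (9200) and the transport port (9300) from the other machine. A minimal cross-platform sketch (on Windows Server, `Test-NetConnection` in PowerShell does the same job):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From SRC02, verify SRC01 accepts ES traffic (9200 = HTTP, 9300 = transport):
# for p in (9200, 9300):
#     print(p, port_open("172.21.0.21", p))
```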

Solution

Basically, you simply need to configure the network settings to make sure that all nodes can see each other on the network. Additionally, since you're running two nodes on the same machine and still want high availability, you want to prevent a primary shard and its replica from landing on the same physical machine.

Finally, since you'll have a total of four nodes in your cluster, you'll want to prevent split brain situations, so you need to set discovery.zen.minimum_master_nodes as well.
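The quorum rule behind that setting is floor(N / 2) + 1, where N is the number of master-eligible nodes. A quick sketch of the arithmetic, which also explains the earlier observation that two nodes on SRC01 alone would not elect a master once the value was forced to 3:

```python
# Quorum of master-eligible nodes: floor(N / 2) + 1.
def minimum_master_nodes(master_eligible_nodes):
    return master_eligible_nodes // 2 + 1

print(minimum_master_nodes(4))  # 3 -> the value used in the configs below
print(minimum_master_nodes(2))  # 2 -> two nodes can never reach a quorum of 3,
                                #      hence no detected_master in that test
```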

Node 1/2 on SRC01:

# cluster name
cluster.name: Name_of_your_cluster

# Give each node a different name (optional but good practice if you don't know Marvel characters)
node.name: SRC01_Node1/2

# The IP that this node will bind to and publish
network.host: 172.21.0.21

# The IP of the other nodes
discovery.zen.ping.unicast.hosts: ["172.21.0.22"]

# prevent split brain
discovery.zen.minimum_master_nodes: 3    

# prevent primary/replica shards from being allocated on the same physical host
# see why at http://stackoverflow.com/questions/35677741/proper-value-of-es-heap-size-for-a-dedicated-machine-with-two-nodes-in-a-cluster
cluster.routing.allocation.same_shard.host: true

# prevent memory swapping
bootstrap.mlockall: true

Node 1/2 on SRC02:

# cluster name
cluster.name: Name_of_your_cluster

# Give each node a different name (optional but good practice if you don't know Marvel characters)
node.name: SRC02_Node1/2

# The IP that this node will bind to and publish
network.host: 172.21.0.22

# The IP of the other nodes
discovery.zen.ping.unicast.hosts: ["172.21.0.21"]

# prevent split brain
discovery.zen.minimum_master_nodes: 3    

# prevent primary/replica shards from being allocated on the same physical host
# see why at http://stackoverflow.com/questions/35677741/proper-value-of-es-heap-size-for-a-dedicated-machine-with-two-nodes-in-a-cluster
cluster.routing.allocation.same_shard.host: true

# prevent memory swapping
bootstrap.mlockall: true
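Once all four nodes are started with these settings, a `GET /_cluster/health` request against any node should report four nodes. A sketch of that verification, parsing a hypothetical response body (the JSON shown is an illustrative sample, not real output from this cluster):

```python
import json

# Hypothetical body returned by: curl http://172.21.0.21:9200/_cluster/health
sample = ('{"cluster_name":"Name_of_your_cluster","status":"green",'
          '"number_of_nodes":4,"number_of_data_nodes":4}')

def cluster_formed(body, expected_nodes=4):
    """Check that the health response reports the full node count and a usable status."""
    health = json.loads(body)
    return (health["number_of_nodes"] == expected_nodes
            and health["status"] in ("green", "yellow"))

print(cluster_formed(sample))  # True once all four nodes have joined
```

If the count comes back as less than four, some node is still off on its own cluster (typically the firewall problem from UPDATE 3).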
