How to fix cluster health yellow with Elastic Search


Problem description



I have set up a server with MongoDB and ElasticSearch. Using https://github.com/richardwilly98/elasticsearch-river-mongodb I have connected ElasticSearch and MongoDB together.

I create a new index using:

curl -XPUT 'http://127.0.0.1:9200/_river/mongodb/_meta' -d '{ 
        "type": "mongodb", 
        "mongodb": { 
        "db": "my_database", 
        "collection": "my_collection"
    }, 
        "index": {
        "name": "mainindex", 
        "type": "string",
        "bulk": {
            "concurrent_requests": 1
        }
    }
}'

Once the command is executed and I go to http://x.x.x.x:9200/_plugin/head/ I see the message: cluster health: yellow (1, 6).
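The same health information shown by the head plugin can be queried directly over the REST API (the host and port here are the defaults assumed from the question; adjust if yours differ). A "yellow" status means all primary shards are allocated but some replica shards are not, which is the normal state for a single-node cluster, since a replica can never be allocated on the same node as its primary.

```shell
# Query cluster health; falls back to a message if the node is unreachable.
curl -s 'http://127.0.0.1:9200/_cluster/health?pretty' \
  || echo 'Elasticsearch is not reachable on 127.0.0.1:9200'
```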

Solution

A cluster is configured by giving all of its nodes the same cluster name in the Elasticsearch config.

The default elasticsearch.yml you are probably using starts with these settings:

################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
# cluster.name: elasticsearch


#################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
# node.name: "Franz Kafka"

Here you need to configure a unique

cluster.name: "MainCluster"

and for each machine and/or instance a different unique

node.name: "LocalMachine1"

You now need to copy this elasticsearch.yml to another machine (in the same network), or to the same place as e.g. elasticsearch_2.yml, and edit it to:

node.name: "LocalMachine2"

and your cluster is ready to go.
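The copy-and-edit step above can be sketched like this (the file contents are a minimal illustration, not the full default config, and the names match the examples above):

```shell
# Create a minimal first-node config (illustrative contents only).
cat > elasticsearch.yml <<'EOF'
cluster.name: "MainCluster"
node.name: "LocalMachine1"
EOF

# Copy and adjust for the second node -- only node.name changes;
# cluster.name must stay identical so the nodes discover each other.
cp elasticsearch.yml elasticsearch_2.yml
sed -i.bak 's/^node.name: .*/node.name: "LocalMachine2"/' elasticsearch_2.yml

grep 'node.name' elasticsearch_2.yml   # -> node.name: "LocalMachine2"
```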

If node.name is not configured, Elasticsearch will pick a random Marvel character name (one of roughly 3000, according to the documentation), so leaving node.name unset should also be fine.

To have two nodes running on the same machine, you must make a copy of the configuration, e.g. elasticsearch_2.yml, with the above changes. You must also have copies of the data and log paths, e.g. (homebrew-specific paths):

cp -r /usr/local/var/elasticsearch /usr/local/var/elasticsearch_2
cp -r /usr/local/var/log/elasticsearch /usr/local/var/log/elasticsearch_2

The paths section of the second config might then look like:

#################################### Paths ####################################

# Path to directory containing configuration (this file and logging.yml):
#
# path.conf: /path/to/conf

# Path to directory where to store index data allocated for this node.
#
path.data: /usr/local/var/elasticsearch_2/
#
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#
# path.data: /path/to/data1,/path/to/data2

# Path to temporary files:
#
# path.work: /path/to/work

# Path to log files:
#
path.logs: /usr/local/var/log/elasticsearch_2/

Make sure you are not running Elasticsearch on the localhost loopback device

127.0.0.1

Just comment it out in case it is not (homebrew does patch it this way):

############################## Network And HTTP ###############################

# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).

# Set the bind address specifically (IPv4 or IPv6):
#
# network.bind_host: 192.168.0.1

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
# network.publish_host: 192.168.0.1

# Set both 'bind_host' and 'publish_host':
#
# network.host: 127.0.0.1

Now you can start Elasticsearch like this:

bin/elasticsearch -D es.config=/usr/local/Cellar/elasticsearch/1.0.0.RC1/config/elasticsearch.yml

for the first node, which becomes the master (because it is started first),

and then

bin/elasticsearch -D es.config=/usr/local/Cellar/elasticsearch/1.0.0.RC1/config/elasticsearch_2.yml

Now you should have two nodes running.
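You can check that both instances actually joined the same cluster over the REST API (the _cat API exists from Elasticsearch 1.0 onward; the endpoint assumes the default local setup). With both nodes up, the node list should show LocalMachine1 and LocalMachine2, and cluster health should turn green once the replicas are allocated on the second node.

```shell
# List the nodes that have joined the cluster, then re-check health.
curl -s 'http://127.0.0.1:9200/_cat/nodes?v' \
  || echo 'cluster is not reachable on 127.0.0.1:9200'
curl -s 'http://127.0.0.1:9200/_cluster/health?pretty' \
  || echo 'cluster is not reachable on 127.0.0.1:9200'
```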
