How to move Elasticsearch data from one server to another
Problem description
How do I move Elasticsearch data from one server to another?
I have server A running Elasticsearch 1.1.1 on one local node with multiple indices. I would like to copy that data to server B running Elasticsearch 1.3.4.
Current procedure
- Shut down ES on both servers, and
- scp all the data to the correct data directory on the new server (the data appears to live in /var/lib/elasticsearch/ on my Debian boxes)
- Change permissions and ownership to elasticsearch:elasticsearch
- Start up the new ES server (see the sketch below)
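In shell terms, the procedure amounts to something like this (a sketch; the data path assumes a default Debian package install, and serverA is a placeholder for the old machine):

# on both servers: stop Elasticsearch
sudo service elasticsearch stop

# on server B: copy the data directory over from server A
scp -r serverA:/var/lib/elasticsearch /var/lib/

# fix ownership so the elasticsearch user can read the files
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch

# start the new ES server
sudo service elasticsearch start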
When I look at the cluster with the ES head plugin, no indices appear.
It seems that the data is not loaded. Am I missing something?
Recommended answer
The selected answer makes it sound slightly more complex than it is; the following is all you need (install npm first on your system).
npm install -g elasticdump
elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=mapping
elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=data
You can skip the first elasticdump command for subsequent copies if the mappings remain constant.
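If the two servers cannot reach each other directly, elasticdump can also route through an intermediate file, mirroring the file-backup examples in the help page below (the hosts are the same placeholders as above):

elasticdump --input=http://mysrc.com:9200/my_index --output=/tmp/my_index.json --type=data
elasticdump --input=/tmp/my_index.json --output=http://mydest.com:9200/my_index --type=data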
I have just done a migration from AWS to Qbox.io with the above without any problems.
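A quick way to sanity-check a copy is to compare document counts on both sides using the standard _count API (hosts are the placeholders from the commands above):

curl http://mysrc.com:9200/my_index/_count
curl http://mydest.com:9200/my_index/_count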
More details at:
https://www.npmjs.com/package/elasticdump
Help page (as of Feb 2016) included for completeness:
elasticdump: Import and export tools for elasticsearch
Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]
--input
Source location (required)
--input-index
Source index and type
(default: all, example: index/type)
--output
Destination location (required)
--output-index
Destination index and type
(default: all, example: index/type)
--limit
How many objects to move in bulk per operation
limit is approximate for file streams
(default: 100)
--debug
Display the elasticsearch commands being used
(default: false)
--type
What are we exporting?
(default: data, options: [data, mapping])
--delete
Delete documents one-by-one from the input as they are
moved. Will not delete the source index
(default: false)
--searchBody
Perform a partial extract based on search results
(when ES is the input,
(default: '{"query": { "match_all": {} } }'))
--sourceOnly
Output only the json contained within the document _source
Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
sourceOnly: {SOURCE}
(default: false)
--all
Load/store documents from ALL indexes
(default: false)
--bulk
Leverage elasticsearch Bulk API when writing documents
(default: false)
--ignore-errors
Will continue the read/write loop on write error
(default: false)
--scrollTime
Time the nodes will hold the requested search in order.
(default: 10m)
--maxSockets
How many simultaneous HTTP requests can we make?
(default:
5 [node <= v0.10.x] /
Infinity [node >= v0.11.x] )
--bulk-mode
The mode can be index, delete or update.
'index': Add or replace documents on the destination index.
'delete': Delete documents on destination index.
'update': Use 'doc_as_upsert' option with bulk update API to do partial update.
(default: index)
--bulk-use-output-index-name
Force use of destination index name (the actual output URL)
as destination while bulk writing to ES. Allows
leveraging Bulk API copying data inside the same
elasticsearch instance.
(default: false)
--timeout
Integer containing the number of milliseconds to wait for
a request to respond before aborting the request. Passed
directly to the request library. If used in bulk writing,
it will result in the entire batch not being written.
Mostly used when you don't care too much if you lose some
data when importing but rather have speed.
--skip
Integer containing the number of rows you wish to skip
ahead from the input transport. When importing a large
index, things can go wrong, be it connectivity, crashes,
someone forgetting to `screen`, etc. This allows you
to start the dump again from the last known line written
(as logged by the `offset` in the output). Please be
advised that since no sorting is specified when the
dump is initially created, there's no real way to
guarantee that the skipped rows have already been
written/parsed. This is more of an option for when
you want to get most data as possible in the index
without concern for losing some rows in the process,
similar to the `timeout` option.
--inputTransport
Provide a custom js file to use as the input transport
--outputTransport
Provide a custom js file to use as the output transport
--toLog
When using a custom outputTransport, should log lines
be appended to the output stream?
(default: true, except for `$`)
--help
This page
Examples:
# Copy an index from production to staging with mappings:
elasticdump
--input=http://production.es.com:9200/my_index
--output=http://staging.es.com:9200/my_index
--type=mapping
elasticdump
--input=http://production.es.com:9200/my_index
--output=http://staging.es.com:9200/my_index
--type=data
# Backup index data to a file:
elasticdump
--input=http://production.es.com:9200/my_index
--output=/data/my_index_mapping.json
--type=mapping
elasticdump
--input=http://production.es.com:9200/my_index
--output=/data/my_index.json
--type=data
# Backup an index to a gzip using stdout:
elasticdump
--input=http://production.es.com:9200/my_index
--output=$
| gzip > /data/my_index.json.gz
# Backup ALL indices, then use Bulk API to populate another ES cluster:
elasticdump
--all=true
--input=http://production-a.es.com:9200/
--output=/data/production.json
elasticdump
--bulk=true
--input=/data/production.json
--output=http://production-b.es.com:9200/
# Backup the results of a query to a file
elasticdump
--input=http://production.es.com:9200/my_index
--output=query.json
--searchBody '{"query":{"term":{"username": "admin"}}}'
------------------------------------------------------------------------------
Learn more @ https://github.com/taskrabbit/elasticsearch-dump
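As a worked example of the --skip option described above, resuming an interrupted import from a file dump might look like the following (the offset of 250000 is hypothetical and would come from the `offset` logged by the failed run; as the help notes, there is no hard guarantee that all skipped rows were already written):

elasticdump --input=/data/production.json --output=http://production-b.es.com:9200/ --skip=250000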