Elasticsearch bulk upload error with PHP - Limit of total fields [1000] in index has been exceeded


Problem Description

We are planning to use Elasticsearch in one of our projects. Currently, we are testing Elasticsearch 5.0.1 with our data. One issue we are facing: when we do a bulk upload from our MySQL tables to Elasticsearch, we get the following error:

java.lang.IllegalArgumentException: Limit of total fields [1000] in index [shopfront] has been exceeded
at org.elasticsearch.index.mapper.MapperService.checkTotalFieldsLimit(MapperService.java:482) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:343) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:277) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:323) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:241) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.cluster.service.ClusterService.runTasksForExecutor(ClusterService.java:555) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:896) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:451) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) ~[elasticsearch-5.0.1.jar:5.0.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]

We are using PHP as the Elasticsearch client to do the bulk upload from MySQL to Elastic. After some googling I found this piece of info - https://discuss.elastic.co/t/es-2-3-5-x-metricbeat-index-field-limit/66821

I also read somewhere that using "index.mapping.total_fields.limit" would fix the issue, but I can't figure out how to use it in my PHP code. Here is my PHP code:

$params = ['body' => []];

$i = 1;
foreach ($productsList as $key => $value) {

    $params['body'][] = [
        'index' => [
            '_index' => 'shopfront',
            '_type' => 'products'
        ],
        // My attempt at raising the field limit - this doesn't seem to work here
        'settings' => ['index.mapping.total_fields.limit' => 3000]
    ];

    $params['body'][] = [
        'product_displayname' => $value['product_displayname'],
        'product_price' => $value['product_price'],
        'popularity' => $value['popularity'],
        'lowestcomp_price' => $value['lowestcomp_price']
    ];

    // Every 1000 documents stop and send the bulk request
    if ($i % 1000 == 0) {
        $responses = $client->bulk($params);

        // erase the old bulk request
        $params = ['body' => []];

        // unset the bulk response when you are done to save memory
        unset($responses);
    }

    $i++;
}

// Send the last batch if it exists
if (!empty($params['body'])) {
    $responses = $client->bulk($params);
}

NOTE - I've used the same code with Elasticsearch 2.4.1 and it works fine there.

Recommended Answer

In ES 5, the ES folks decided to limit the number of fields that a mapping type can contain, in order to prevent mapping explosions. As you've noticed, that limit is set to 1000 fields per mapping, but you can raise it to suit your needs by specifying the index.mapping.total_fields.limit setting, either at index creation time or by updating the index settings, like this:

curl -XPUT 'localhost:9200/shopfront/_settings' -d '
{
    "index.mapping.total_fields.limit": 3000
}'
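Since the question uses the PHP Elasticsearch client, the same setting can also be applied through it. This is a minimal sketch, assuming a `$client` built with `ClientBuilder` and the `shopfront` index from the question; it shows both raising the limit on an existing index via `indices()->putSettings()` and setting it at index creation time via `indices()->create()`:

```php
<?php
require 'vendor/autoload.php';

use Elasticsearch\ClientBuilder;

$client = ClientBuilder::create()->build();

// Raise the limit on an existing index (equivalent to the curl call above)
$client->indices()->putSettings([
    'index' => 'shopfront',
    'body'  => ['index.mapping.total_fields.limit' => 3000]
]);

// Or set it up front when the index is first created
$client->indices()->create([
    'index' => 'shopfront',
    'body'  => [
        'settings' => ['index.mapping.total_fields.limit' => 3000]
    ]
]);
```

Either way, the setting belongs in the index settings, not inside the per-document action metadata of the bulk request, which is why the 'settings' key in the bulk body in the question has no effect.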

Note that you should also ask yourself whether having that many fields is a good thing. Do you need them all? Can you combine some? Etc.

