Elasticsearch not giving data with big number for page size


Problem Description

Size of data to get: approx. 20,000

Issue: searching Elasticsearch indexed data using the command below in Python, but not getting any results.

from pyelasticsearch import ElasticSearch

es_repo = ElasticSearch(settings.ES_INDEX_URL)
# es_from/size pagination: stops returning results once es_from + size goes past 10,000
search_results = es_repo.search(
    query, index=advertiser_name, es_from=_from, size=_size)

If I give a size less than or equal to 10,000 it works fine, but not with 20,000. Please help me find an optimal solution to this.

PS: Digging deeper into ES turned up this error message:

Result window is too large, from + size must be less than or equal to: [10000] but was [19999]. See the scrolling API for a more efficient way to request large data sets.
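For reference, the 10,000 cap is the index.max_result_window setting, which can be raised per index if from + size pagination really is required. A minimal sketch with the official elasticsearch Python client follows (the index name is a placeholder); for result sets this large, though, the search_after or scroll approaches in the answer below are the better fix:

from elasticsearch import Elasticsearch

es = Elasticsearch()
# Raise the from + size window for one index; deep paging still gets more
# expensive the deeper you go, so prefer search_after or scroll.
es.indices.put_settings(
    index="your_index_name",
    body={"index": {"max_result_window": 20000}})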

Recommended Answer

For real-time use, the best solution is the search_after query. You only need a date field plus another field that uniquely identifies a doc - the _id or _uid field is enough. Try something like this; in my example I want to extract all the documents that belong to a single user, and the user field has the keyword datatype:

from elasticsearch import Elasticsearch

es = Elasticsearch()
es_index = "your_index_name"
documento = "your_doc_type"

user = "Francesco Totti"

# Count how many documents belong to the user so we know when to stop paging
body2 = {
    "query": {
        "term": {"user": user}
    }
}

res = es.count(index=es_index, doc_type=documento, body=body2)
size = res['count']


# First page: sort deterministically so search_after can resume from the last hit
body = {
    "size": 10,
    "query": {
        "term": {"user": user}
    },
    "sort": [
        {"date": "asc"},
        {"_uid": "desc"}
    ]
}

result = es.search(index=es_index, doc_type=documento, body=body)
# The "sort" values of the last hit become the bookmark for the next page
bookmark = [result['hits']['hits'][-1]['sort'][0], str(result['hits']['hits'][-1]['sort'][1])]

# Subsequent pages: same query and sort, plus search_after pointing at the bookmark
body1 = {
    "size": 10,
    "query": {
        "term": {"user": user}
    },
    "search_after": bookmark,
    "sort": [
        {"date": "asc"},
        {"_uid": "desc"}
    ]
}




# Keep paging until every matching document has been collected
while len(result['hits']['hits']) < size:
    res = es.search(index=es_index, doc_type=documento, body=body1)
    if not res['hits']['hits']:
        break
    for el in res['hits']['hits']:
        result['hits']['hits'].append(el)
    # Move the bookmark to the last hit of the page we just fetched
    bookmark = [res['hits']['hits'][-1]['sort'][0], str(res['hits']['hits'][-1]['sort'][1])]
    body1["search_after"] = bookmark

Then you will find all the docs appended to the result variable.
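If only the document bodies are needed, here is a minimal sketch of pulling the _source out of the accumulated hits (assuming the loop above has finished):

# Keep only the stored document bodies from the accumulated hits
docs = [hit['_source'] for hit in result['hits']['hits']]
print(len(docs), "documents retrieved")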

If you would like to use a scroll query instead - docs here:

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()
es_index = "your_index_name"
documento = "your_doc_type"

user = "Francesco Totti"

body = {
    "query": {
        "term": {"user": user}
    }
}

# helpers.scan wraps the scroll API and yields every matching hit
res = helpers.scan(
    client=es,
    scroll='2m',
    query=body,
    index=es_index)

for i in res:
    print(i)
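Note that helpers.scan returns a generator and issues the scroll requests for you, so you can iterate over the full result set without ever running into the 10,000-result window.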

