ElasticSearch get offsets of highlighted snippets


Question

Is it possible to get character positions of each highlighted fragment? I need to match the highlighted text back to the source document and having character positions would make it possible.

For example:

curl "localhost:9200/twitter/tweet/_search?pretty=true" -d '{
    "query": {
        "query_string": {
            "query": "foo"
        }
    },
    "highlight": {
        "fields": {
            "message": {"number_of_fragments": 20}
        }
    }    
}'

returns this highlight:

"highlight" : {
    "message" : [ "some <em>foo</em> text" ]
 }

If the field message in the matched document were:

"Here is some foo text"

is there a way to know that the snippet begins at char 8 and ends at char 21 of the matched field?

Knowing the start/end offset of the matched token would be good for me as well - perhaps there is a way to access that information using script_fields? (This question shows how to obtain the tokens, but not the offsets).

消息"字段具有:

"term_vector" : "with_positions_offsets",
"index_options" : "positions" 

Answer

The client-side approach is actually standard practice.

We have discussed adding the offsets, but are afraid it would lead to more confusion. The offsets provided are specific to Java's UTF-16 String encoding; while they could technically be used to calculate the fragments in $LANG, it is far more straightforward to parse the response text for the delimiters you specified.
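
As an illustration of that client-side approach: strip the highlight tags from each fragment, locate the plain fragment text in the source field, and derive the character offsets from there. A minimal Python sketch, assuming the default <em>/</em> tags and that the fragment text occurs verbatim in the source (i.e. the highlighter output is not HTML-encoded):

import re

def fragment_offsets(source, fragment, pre="<em>", post="</em>"):
    # Remove the highlight tags to recover the plain fragment text,
    # then locate that text in the original field value.
    plain = fragment.replace(pre, "").replace(post, "")
    frag_start = source.find(plain)
    if frag_start == -1:
        return None  # not found verbatim (e.g. the output was HTML-encoded)
    frag_end = frag_start + len(plain)

    # Character offsets of each highlighted term, relative to the source field.
    term_offsets = []
    removed = 0  # tag characters seen so far in the fragment
    for m in re.finditer(re.escape(pre) + "(.*?)" + re.escape(post), fragment):
        start_in_plain = m.start() - removed
        term_offsets.append((frag_start + start_in_plain,
                             frag_start + start_in_plain + len(m.group(1))))
        removed += len(pre) + len(post)
    return frag_start, frag_end, term_offsets

print(fragment_offsets("Here is some foo text", "some <em>foo</em> text"))
# (8, 21, [(13, 16)])

Choosing distinctive pre_tags/post_tags in the highlight request (strings that cannot occur in your documents) keeps this parsing unambiguous even if the field text itself contains <em>.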
