Automatic OR queries using SQLite FTS4

Problem description

I have an SQLite FTS4 database, and I would like to be able to perform OR queries on it from user input; e.g. if the user enters "hello there" I would like to do

SELECT * FROM fts_table WHERE text MATCH 'hello OR there';

However, if I simply pass in the user string I get an implicit AND query instead. I could of course tokenize the string myself and insert ORs, but then I'm using my own tokenizer, which could differ from the tokenizer used internally by SQLite's FTS.

Is there some way to either access the SQLite tokenizer from my C wrapper, or else to construct a SQL query that does this, along the lines of:

SELECT * FROM fts_table WHERE text MATCH interpolate('hello there', ' OR ')

Thanks, any pointers would be appreciated.

Edit:

Any future readers of this question should note this other question: "How to escape string for SQLite FTS query". It's not safe to just interpolate OR into the string, because you will often end up with invalid queries; for instance, if the user enters "hello AND there", interpolating OR will create "hello OR AND OR there", which is not valid.

Answer

So to answer part of my question: it turns out that you can access a pointer to the SQLite tokenizer function via SELECT fts3_tokenizer(<tokenizer-name>). Refer to http://www.sqlite.org/fts3.html#section_8_1 for more details.

This returns an sqlite3_tokenizer_module struct containing function pointers such as xCreate, xOpen, and xNext.

You'll need to add the header file fts3_tokenizer.h to your project, as described here: http://www.mail-archive.com/sqlite-users%40sqlite.org/msg62644.html
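
For what it's worth, here is a rough, untested sketch of reading that module pointer back from C. Note that this is only an illustration (the function name get_tokenizer_module is mine), it assumes fts3_tokenizer.h is on the include path, and newer SQLite builds disable the fts3_tokenizer() SQL function unless it is explicitly enabled (see SQLITE_DBCONFIG_ENABLE_FTS3_TOKENIZER).

/*
** Rough sketch: fetch the sqlite3_tokenizer_module pointer for the
** built-in "simple" tokenizer. fts3_tokenizer('simple') returns the
** raw pointer packed into a blob of sizeof(void*) bytes, so it can
** simply be copied out of the result column.
*/
#include <string.h>
#include <sqlite3.h>
#include "fts3_tokenizer.h"

static int get_tokenizer_module(
  sqlite3 *db,
  const sqlite3_tokenizer_module **ppMod   /* OUT: tokenizer module */
){
  sqlite3_stmt *pStmt = 0;
  int rc = sqlite3_prepare_v2(db, "SELECT fts3_tokenizer('simple')", -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  rc = SQLITE_ERROR;
  if( sqlite3_step(pStmt)==SQLITE_ROW
   && sqlite3_column_bytes(pStmt, 0)==sizeof(*ppMod)
  ){
    memcpy(ppMod, sqlite3_column_blob(pStmt, 0), sizeof(*ppMod));
    rc = SQLITE_OK;
  }
  sqlite3_finalize(pStmt);
  return rc;
}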

I'm putting the definition of the sqlite3_tokenizer_module below (copied verbatim, because it includes useful comments):

/*
** Structures used by the tokenizer interface. When a new tokenizer
** implementation is registered, the caller provides a pointer to
** an sqlite3_tokenizer_module containing pointers to the callback
** functions that make up an implementation.
**
** When an fts3 table is created, it passes any arguments passed to
** the tokenizer clause of the CREATE VIRTUAL TABLE statement to the
** sqlite3_tokenizer_module.xCreate() function of the requested tokenizer
** implementation. The xCreate() function in turn returns an 
** sqlite3_tokenizer structure representing the specific tokenizer to
** be used for the fts3 table (customized by the tokenizer clause arguments).
**
** To tokenize an input buffer, the sqlite3_tokenizer_module.xOpen()
** method is called. It returns an sqlite3_tokenizer_cursor object
** that may be used to tokenize a specific input buffer based on
** the tokenization rules supplied by a specific sqlite3_tokenizer
** object.
*/
typedef struct sqlite3_tokenizer_module sqlite3_tokenizer_module;
typedef struct sqlite3_tokenizer sqlite3_tokenizer;
typedef struct sqlite3_tokenizer_cursor sqlite3_tokenizer_cursor;

struct sqlite3_tokenizer_module {

  /*
  ** Structure version. Should always be set to 0 or 1.
  */
  int iVersion;

  /*
  ** Create a new tokenizer. The values in the argv[] array are the
  ** arguments passed to the "tokenizer" clause of the CREATE VIRTUAL
  ** TABLE statement that created the fts3 table. For example, if
  ** the following SQL is executed:
  **
  **   CREATE .. USING fts3( ... , tokenizer <tokenizer-name> arg1 arg2)
  **
  ** then argc is set to 2, and the argv[] array contains pointers
  ** to the strings "arg1" and "arg2".
  **
  ** This method should return either SQLITE_OK (0), or an SQLite error 
  ** code. If SQLITE_OK is returned, then *ppTokenizer should be set
  ** to point at the newly created tokenizer structure. The generic
  ** sqlite3_tokenizer.pModule variable should not be initialized by
  ** this callback. The caller will do so.
  */
  int (*xCreate)(
    int argc,                           /* Size of argv array */
    const char *const*argv,             /* Tokenizer argument strings */
    sqlite3_tokenizer **ppTokenizer     /* OUT: Created tokenizer */
  );

  /*
  ** Destroy an existing tokenizer. The fts3 module calls this method
  ** exactly once for each successful call to xCreate().
  */
  int (*xDestroy)(sqlite3_tokenizer *pTokenizer);

  /*
  ** Create a tokenizer cursor to tokenize an input buffer. The caller
  ** is responsible for ensuring that the input buffer remains valid
  ** until the cursor is closed (using the xClose() method). 
  */
  int (*xOpen)(
    sqlite3_tokenizer *pTokenizer,       /* Tokenizer object */
    const char *pInput, int nBytes,      /* Input buffer */
    sqlite3_tokenizer_cursor **ppCursor  /* OUT: Created tokenizer cursor */
  );

  /*
  ** Destroy an existing tokenizer cursor. The fts3 module calls this 
  ** method exactly once for each successful call to xOpen().
  */
  int (*xClose)(sqlite3_tokenizer_cursor *pCursor);

  /*
  ** Retrieve the next token from the tokenizer cursor pCursor. This
  ** method should either return SQLITE_OK and set the values of the
  ** "OUT" variables identified below, or SQLITE_DONE to indicate that
  ** the end of the buffer has been reached, or an SQLite error code.
  **
  ** *ppToken should be set to point at a buffer containing the 
  ** normalized version of the token (i.e. after any case-folding and/or
  ** stemming has been performed). *pnBytes should be set to the length
  ** of this buffer in bytes. The input text that generated the token is
  ** identified by the byte offsets returned in *piStartOffset and
  ** *piEndOffset. *piStartOffset should be set to the index of the first
  ** byte of the token in the input buffer. *piEndOffset should be set
  ** to the index of the first byte just past the end of the token in
  ** the input buffer.
  **
  ** The buffer *ppToken is set to point at is managed by the tokenizer
  ** implementation. It is only required to be valid until the next call
  ** to xNext() or xClose(). 
  */
  /* TODO(shess) current implementation requires pInput to be
  ** nul-terminated.  This should either be fixed, or pInput/nBytes
  ** should be converted to zInput.
  */
  int (*xNext)(
    sqlite3_tokenizer_cursor *pCursor,   /* Tokenizer cursor */
    const char **ppToken, int *pnBytes,  /* OUT: Normalized text for token */
    int *piStartOffset,  /* OUT: Byte offset of token in input buffer */
    int *piEndOffset,    /* OUT: Byte offset of end of token in input buffer */
    int *piPosition      /* OUT: Number of tokens returned before this one */
  );

  /***********************************************************************
  ** Methods below this point are only available if iVersion>=1.
  */

  /* 
  ** Configure the language id of a tokenizer cursor.
  */
  int (*xLanguageid)(sqlite3_tokenizer_cursor *pCsr, int iLangid);
};
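
And to close the loop on the original question, here is another rough, untested sketch (again, the name build_or_query is mine, not part of the SQLite API) of using those callbacks to split user input with SQLite's own tokenizer and join the tokens into an OR query. Each token is wrapped in double quotes so that words that happen to look like operators (AND, OR, NOT, NEAR) are treated as plain terms, which also covers the escaping caveat from the edit above.

/*
** Rough sketch (same includes as the earlier sketch): tokenize zInput
** with the given tokenizer module and build a query string of the form
** "tok1" OR "tok2" OR ... suitable for binding to a MATCH expression.
** Returns a string to be freed with sqlite3_free(), or NULL on error.
*/
static char *build_or_query(
  const sqlite3_tokenizer_module *pMod,
  const char *zInput                     /* NUL-terminated user input */
){
  sqlite3_tokenizer *pTok = 0;
  sqlite3_tokenizer_cursor *pCsr = 0;
  char *zQuery = 0;
  const char *zToken;
  int nToken, iStart, iEnd, iPos;

  if( pMod->xCreate(0, 0, &pTok)!=SQLITE_OK ) return 0;
  pTok->pModule = pMod;    /* the caller sets pModule, per the comments above */

  if( pMod->xOpen(pTok, zInput, (int)strlen(zInput), &pCsr)==SQLITE_OK ){
    pCsr->pTokenizer = pTok;
    while( pMod->xNext(pCsr, &zToken, &nToken, &iStart, &iEnd, &iPos)==SQLITE_OK ){
      /* Quote each token so operator-like words stay literal terms. */
      zQuery = sqlite3_mprintf("%z%s\"%.*s\"", zQuery, zQuery ? " OR " : "", nToken, zToken);
    }
    pMod->xClose(pCsr);
  }
  pMod->xDestroy(pTok);
  return zQuery;
}

The resulting string can then be bound as a parameter, e.g. SELECT * FROM fts_table WHERE text MATCH ?.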
