In boost::spirit::lex, the first parse takes the longest time; subsequent parses are much shorter

Problem description

I feed a series of text messages into my SIP parser. The first one always takes the longest time, no matter which message comes first. I wonder whether spirit::lex does any initialization work during the first parse?

The lexer:

#include <boost/spirit/include/lex_lexertl.hpp>

namespace lex = boost::spirit::lex;

// Token IDs such as T_REQ_LINE, T_STAT_LINE, T_CRLF, T_VIA, T_TO, T_FROM and
// T_OTHER are assumed to be defined elsewhere (e.g. an enum starting at
// lex::min_token_id + 1).
template <typename Lexer>
struct sip_token : lex::lexer<Lexer>
{
    sip_token()
    {
        this->self.add_pattern
            ("KSIP", "sip:")
            ("KSIPS", "sips:")
            ("USERINFO", "[0-9a-zA-Z-_.!~*'()]+(:[0-9a-zA-Z-_.!~*'()&=+$,]*)?@")
            ("DOMAINLBL", "([0-9a-zA-Z]|([0-9a-zA-Z][0-9a-zA-Z-]*[0-9a-zA-Z]))")
            ("TOPLBL", "[a-zA-Z]|([a-zA-Z][0-9a-zA-Z-]*[0-9a-zA-Z-])")
            ("INVITE", "INVITE")
            ("ACK", "ACK")
            ("OPTIONS", "OPTIONS")
            ("BYE", "BYE")
            ("CANCEL", "CANCEL")
            ("REGISTER", "REGISTER")
            ("METHOD", "({INVITE}|{ACK}|{OPTIONS}|{BYE}|{CANCEL}|{REGISTER})")
            ("SIPVERSION", "SIP\\/[0-9]\\.[0-9]")
            ("PROTOCOAL", "SIP\\/[^/]+\\/UDP")
            ("IPV4ADDR", "(\\d{1,3}\\.){3}\\d{1,3}")                
            ("HOSTNAME", "[^ \t\r\n]+")            
            ("SIPURL", "{KSIP}{USERINFO}?{HOSTNAME}(:[0-9]+)?")
            ("SIPSURL", "{KSIPS}{USERINFO}?{HOSTNAME}(:[0-9]+)?")
            ("SENTBY", "({HOSTNAME}|{IPV4ADDR})(:[0-9]+)?")
            ("GENPARM", "[^ ;\\n]+=[^ ;\r\\n]+")
            ("TOKEN", "[0-9a-zA-Z-.!%*_+~`']+")
            ("NAMEADDR", "({TOKEN} )?<({SIPURL}|{SIPSURL})>")
            ("STATUSCODE", "\\d{3}")
            ("REASONPHRASE", "[0-9a-zA-Z-_.!~*'()&=+$,]*")
            ("CR", "\\r")
            ("LF", "\\n")
        ;

        this->self.add
            ("{METHOD} {SIPURL} {SIPVERSION}", T_REQ_LINE)
            ("{SIPVERSION} {STATUSCODE} {REASONPHRASE}", T_STAT_LINE)
            ("{CR}?{LF}", T_CRLF)
            ("Via: {PROTOCOAL} {SENTBY}(;{GENPARM})*", T_VIA)
            ("To: {NAMEADDR}(;{GENPARM})*", T_TO)
            ("From: {NAMEADDR}(;{GENPARM})*", T_FROM)
            ("[0-9a-zA-Z -_.!~*'()&=+$,;/?:@]+", T_OTHER)

        ;
    }
};

The grammar:

#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/include/lex_lexertl.hpp>
#include <boost/spirit/include/phoenix.hpp>

namespace qi = boost::spirit::qi;

template <typename Iterator>
struct sip_grammar : qi::grammar<Iterator>
{
  template <typename TokenDef>
  sip_grammar(TokenDef const& tok)
    : sip_grammar::base_type(start)     
  {
    using boost::phoenix::ref;
    using boost::phoenix::size;
    using boost::spirit::qi::eol;

    start = request  | response;
    response = stat_line >> *(msg_header) >> qi::token(T_CRLF);
    request = req_line >> *(msg_header) >> qi::token(T_CRLF);
    stat_line = qi::token(T_STAT_LINE) >> qi::token(T_CRLF);
    req_line = qi::token(T_REQ_LINE) >> qi::token(T_CRLF);
    msg_header = (qi::token(T_VIA) | qi::token(T_TO) | qi::token(T_FROM) | qi::token(T_OTHER))
      >> qi::token(T_CRLF);    
  }

  std::size_t c, w, l;
  qi::rule<Iterator> start, response, request, stat_line, req_line, msg_header; 
};

Timing:

#include <sys/time.h>
struct timeval t1, t2;

gettimeofday(&t1, NULL);
bool r = lex::tokenize_and_parse(first, last, siplexer, g);
gettimeofday(&t2, NULL);

Results:

pkt1 time=40945(us)
pkt2 time=140
pkt3 time=60
pkt4 time=74
pkt5 time=58
pkt6 time=51

Answer

Apparently, it does :)

Lex will likely generate a DFA (one for each lexer state, maybe). This is most likely the thing that takes the most time. Use a profiler to be certain :/

Now, you can either

  • make sure the tables are initialized before first use (a minimal warm-up sketch follows this list), or
  • use the static lexer model to avoid the startup cost
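
For the first bullet, here is a minimal warm-up sketch; it is my own assumption about how to force the one-time table construction, not code from the original answer. It reuses the question's sip_token and sip_grammar types, and the dummy SIP message is only illustrative:

typedef lex::lexertl::lexer<>                lexer_type;
typedef sip_token<lexer_type>::iterator_type iterator_type;

sip_token<lexer_type>      siplexer;
sip_grammar<iterator_type> g(siplexer);

// One throwaway run forces the lexer tables (the DFA) to be built here,
// so the cost no longer shows up inside the gettimeofday-timed section.
std::string dummy = "OPTIONS sip:warmup@example.com SIP/2.0\r\n\r\n";
char const* first = dummy.c_str();
char const* last  = first + dummy.size();
lex::tokenize_and_parse(first, last, siplexer, g);  // result deliberately ignored

// ... the real, timed tokenize_and_parse calls go here ...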

The static lexer option means you'll write an 'extra' main to generate the DFA as C++ code:

#include <boost/spirit/include/lex_lexertl.hpp>
#include <boost/spirit/include/lex_generate_static_lexertl.hpp>

#include <fstream>

#include "sip_token.hpp"

using namespace boost::spirit;

int main(int argc, char* argv[])
{
    // create the lexer object instance needed to invoke the generator
    sip_token<lex::lexertl::lexer<> > my_lexer; // the token definition

    std::ofstream out(argc < 2 ? "sip_token_static.hpp" : argv[1]);

    // invoke the generator, passing the token definition, the output stream 
    // and the name suffix of the tables and functions to be generated
    //
    // The suffix "sip" used below results in a type lexertl::static_::lexer_sip
    // to be generated, which needs to be passed as a template parameter to the 
    // lexertl::static_lexer template (see word_count_static.cpp).
    return lex::lexertl::generate_static_dfa(my_lexer, out, "sip") ? 0 : -1;
}
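
The generated header is then plugged in through lex::lexertl::static_lexer, following the word_count_static example linked below. The wiring here is only a sketch of how that should look for the "sip" suffix; header names such as sip_grammar.hpp and the sample message are assumptions, not code from the original answer:

#include <boost/spirit/include/lex_static_lexertl.hpp>
#include <string>

#include "sip_token.hpp"          // token definitions from the question
#include "sip_grammar.hpp"        // hypothetical header holding the question's sip_grammar
#include "sip_token_static.hpp"   // tables generated by the program above

using namespace boost::spirit;

int main()
{
    // The DFA tables are compiled into the program, so nothing has to be
    // generated at runtime and the first parse is no slower than the rest.
    typedef lex::lexertl::token<char const*> token_type;
    typedef lex::lexertl::static_lexer<
        token_type, lex::lexertl::static_::lexer_sip> lexer_type;

    sip_token<lexer_type> siplexer;                      // token definitions
    sip_grammar<lexer_type::iterator_type> g(siplexer);  // grammar from the question

    std::string pkt = "OPTIONS sip:test@example.com SIP/2.0\r\n\r\n";
    char const* first = pkt.c_str();
    char const* last  = first + pkt.size();

    return lex::tokenize_and_parse(first, last, siplexer, g) ? 0 : 1;
}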

An example of the generated code (from the word-count example in the tutorial) is here: http://www.boost.org/doc/libs/1_54_0/libs/spirit/example/lex/static_lexer/word_count_static.hpp
