SLR(1) and LALR(1) and Reduce


Question



I am really confused!

I read the following example in one of my professor's notes.

1) We have an SLR(1) grammar G as follows. We use an SLR(1) parser generator to generate a parse table S for G, and an LALR(1) parser generator to generate a parse table L for G.

S -> AB
A -> dAa
A -> lambda   (lambda is the empty string)
B -> aAb
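To see which lookaheads the SLR(1) parser uses for its reduce actions, here is a small sketch of my own (not from the professor's notes) that computes the nullable, FIRST and FOLLOW sets for this grammar by fixed-point iteration:

```python
# Sketch: nullable, FIRST and FOLLOW sets for the grammar
# S -> AB, A -> dAa, A -> lambda, B -> aAb.
END = "$"  # end-of-input marker

productions = [
    ("S", ("A", "B")),
    ("A", ("d", "A", "a")),
    ("A", ()),             # A -> lambda (the empty string)
    ("B", ("a", "A", "b")),
]
nonterminals = {"S", "A", "B"}

def first_of(sym, first):
    # FIRST of a single symbol: the symbol itself if it is a terminal
    return first[sym] if sym in nonterminals else {sym}

# Nullable nonterminals
nullable = set()
changed = True
while changed:
    changed = False
    for head, body in productions:
        if head not in nullable and all(s in nullable for s in body):
            nullable.add(head)
            changed = True

# FIRST sets
first = {nt: set() for nt in nonterminals}
changed = True
while changed:
    changed = False
    for head, body in productions:
        for sym in body:
            f = first_of(sym, first)
            if not f <= first[head]:
                first[head] |= f
                changed = True
            if sym not in nullable:
                break

# FOLLOW sets
follow = {nt: set() for nt in nonterminals}
follow["S"].add(END)
changed = True
while changed:
    changed = False
    for head, body in productions:
        for i, sym in enumerate(body):
            if sym not in nonterminals:
                continue
            trailer = set()        # what can follow sym in this body
            rest_nullable = True
            for nxt in body[i + 1:]:
                trailer |= first_of(nxt, first)
                if nxt not in nullable:
                    rest_nullable = False
                    break
            if rest_nullable:
                trailer |= follow[head]
            if not trailer <= follow[sym]:
                follow[sym] |= trailer
                changed = True

print(sorted(follow["A"]))  # ['a', 'b']
print(sorted(follow["B"]))  # ['$']
```

Since FOLLOW(A) = {a, b}, the SLR(1) table puts a reduce by A -> lambda under both a and b in every state that contains that completed item; the LALR(1) LA sets, discussed in the answer below, are per-state subsets of this.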

Solution: the number of R (reduce) entries in S is greater than in L.

But on one site I read:

2) Suppose T1 and T2 are the tables created with SLR(1) and LALR(1) for a grammar G. If G is an SLR(1) grammar, which of the following is TRUE?

a) T1 and T2 have no differences.

b) the total number of non-error entries in T1 is lower than in T2

c) the total number of error entries in T1 is lower than in T2

Solution:

The LALR(1) algorithm generates exactly the same states as the SLR(1) algorithm, but it can generate different actions; it is capable of resolving more conflicts than the SLR(1) algorithm. However, if the grammar is SLR(1), both algorithms will produce exactly the same machine (a is right).

Could anyone explain to me which of them is true?

EDIT: In fact my question is: why, for a given SLR(1) grammar, are the parse tables of LALR(1) and SLR(1) exactly the same (the error and non-error entries are equal and the number of reduce entries is equal), while for the above grammar the number of reduce entries in S is greater than in L?

I saw in another book that, in general, we have the following (figure omitted):

Summary:

1) For the above grammar that I wrote in question 1, why is the number of reduce entries different?

2) If we have an SLR(1) grammar, why are the tables exactly the same? (The number of reduce and error entries becomes the same.)

Solution

Both of these statements are true!

One of your questions was why SLR(1) and LALR(1) parsers have the same states as one another. SLR(1) parsers are formed by starting with an LR(0) automaton, then augmenting each production with lookahead information from FOLLOW sets. In an LALR(1) parser, we begin with an LR(1) parser (where each production has very precise lookahead information), then combine any two states that have the same underlying LR(0) state. This results in an LR(0) automaton with additional information because each LR(0) state corresponds to at least one LR(1) state and each LR(1) state corresponds to some underlying LR(0) state.

SLR(1) and LALR(1) parsers both have the same set of states, which are the same states as in an LR(0) parser. The parsers differ only in what actions they perform in each state.
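As an illustration (my own sketch, not part of the original answer), the shared LR(0) states for this grammar can be built directly, assuming the usual augmented start symbol S':

```python
# Sketch: canonical LR(0) collection for the augmented grammar
# S' -> S, S -> AB, A -> dAa | lambda, B -> aAb.
# An item is (head, body, dot position).
productions = {
    "S'": [("S",)],
    "S": [("A", "B")],
    "A": [("d", "A", "a"), ()],   # () is A -> lambda
    "B": [("a", "A", "b")],
}
nonterminals = set(productions)

def closure(items):
    # Add A -> .gamma for every nonterminal A right after a dot.
    items = set(items)
    work = list(items)
    while work:
        head, body, dot = work.pop()
        if dot < len(body) and body[dot] in nonterminals:
            for prod in productions[body[dot]]:
                item = (body[dot], prod, 0)
                if item not in items:
                    items.add(item)
                    work.append(item)
    return frozenset(items)

def goto(state, sym):
    # Advance the dot over sym in every item that allows it.
    kernel = {(h, b, d + 1) for h, b, d in state
              if d < len(b) and b[d] == sym}
    return closure(kernel) if kernel else None

start = closure({("S'", ("S",), 0)})
states = {start}
work = [start]
while work:
    st = work.pop()
    for sym in {b[d] for _, b, d in st if d < len(b)}:
        nxt = goto(st, sym)
        if nxt is not None and nxt not in states:
            states.add(nxt)
            work.append(nxt)

# States containing the completed item A -> lambda (a reduce item)
reduce_lambda = [st for st in states if ("A", (), 0) in st]
print(len(states))         # 10
print(len(reduce_lambda))  # 3
```

Working this out by hand: the A -> lambda item occurs in three of the ten states. The SLR(1) table S reduces by A -> lambda on every token in FOLLOW(A) = {a, b} in each of those states (six reduce entries), while in the LALR(1) table L each of those states carries only the single lookahead its context actually allows (three entries in total), which is exactly why S has more reduce entries than L.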

In both SLR(1) and LALR(1) parsers, each item has an associated set of lookahead tokens. Whenever the parser enters a state with a reduce item in it, the parser will perform that reduction if the next token of input is in the lookahead set. In an SLR(1) parser, the lookahead set is the FOLLOW set for the nonterminal on the left-hand side of the production. In an LALR(1) parser, the lookahead set is, appropriately, called the LA set for the combination of the nonterminal in the production and the automaton state.

You can prove that the LA sets used in an LALR(1) parser are subsets of the FOLLOW sets used in SLR(1) parsers. This means that LALR(1) parsers will never have more reduce actions than SLR(1) parsers, and in some cases the LALR(1) parsers will choose to shift when an SLR(1) parser would have a shift/reduce conflict.

Hope this helps!
