Parsing IV: Bottom-up Parsing
Parsing Techniques
Top-down parsers (LL(1), recursive descent)
• Start at the root of the parse tree and grow toward leaves
• Pick a production & try to match the input
• Bad “pick” may need to backtrack
• Some grammars are backtrack-free (predictive parsing)

Bottom-up parsers (LR(1), operator precedence)
• Start at the leaves and grow toward root
• As input is consumed, encode possibilities in an internal state
• Start in a state valid for legal first tokens
• Bottom-up parsers handle a large class of grammars
Bottom-up Parsing (definitions)
The point of parsing is to construct a derivation

A derivation consists of a series of rewrite steps
	S = γ0 ⇒ γ1 ⇒ γ2 ⇒ … ⇒ γn–1 ⇒ γn = sentence

• Each γi is a sentential form
	If γ contains only terminal symbols, γ is a sentence in L(G)
	If γ contains ≥ 1 non-terminals, γ is a sentential form
• To get γi from γi–1, expand some NT A ∈ γi–1 by using A → β
	Replace the occurrence of A ∈ γi–1 with β to get γi
	In a leftmost derivation, it would be the first NT A ∈ γi–1

A left-sentential form occurs in a leftmost derivation
A right-sentential form occurs in a rightmost derivation
Bottom-up Parsing
A bottom-up parser builds a derivation by working from
the input sentence back toward the start symbol S

	S = γ0 ⇒ γ1 ⇒ γ2 ⇒ … ⇒ γn–1 ⇒ γn = sentence

To reduce γi to γi–1, match some rhs β against γi, then
replace β with its corresponding lhs, A
(assuming the production A → β)

In terms of the parse tree, this is working from leaves to root
• Nodes with no parent in a partial tree form its upper fringe
• Since each replacement of β with A shrinks the upper fringe, we call it a reduction.

The parse tree need not be built, it can be simulated
	|parse tree nodes| = |words| + |reductions|
Finding Reductions
Consider the simple grammar
And the input string abbcde
The trick is scanning the input and finding the next reduction
The mechanism for doing this must be efficient
Finding Reductions (Handles)
The parser must find a substring β of the tree’s frontier that
matches some production A → β that occurs as one step in the
rightmost derivation (i.e., A → β is in the RRD)

Informally, we call this substring β a handle

Formally,
	A handle of a right-sentential form γ is a pair <A→β, k> where
	A→β ∈ P and k is the position in γ of β’s rightmost symbol.

	If <A→β, k> is a handle, then replacing β at k with A produces the
	right-sentential form from which γ is derived in the rightmost derivation.

Because γ is a right-sentential form, the substring to the right of a
handle contains only terminal symbols

⇒ the parser doesn’t need to scan past the handle (very far)
Finding Reductions (Handles)
Critical Insight (Theorem?)

If G is unambiguous, then every right-sentential form has a unique handle.

If we can find those handles, we can build a derivation!

Sketch of Proof:
1. G is unambiguous ⇒ rightmost derivation is unique
2. ⇒ a unique production A → β applied to derive γi from γi–1
3. ⇒ a unique position k at which A → β is applied
4. ⇒ a unique handle <A→β, k>

This all follows from the definitions
Example (a very busy slide)
The expression grammar: handles for the rightmost derivation of x – 2 * y
This is the inverse of Figure 3.9 in EaC
Handle-pruning, Bottom-up Parsers

The process of discovering a handle & reducing it to the
appropriate left-hand side is called handle pruning

Handle pruning forms the basis for a bottom-up parsing method

To construct a rightmost derivation
	S = γ0 ⇒ γ1 ⇒ γ2 ⇒ … ⇒ γn–1 ⇒ γn = w

Apply the following simple algorithm
	for i ← n to 1 by –1
		Find the handle <Ai → βi, ki> in γi
		Replace βi with Ai to generate γi–1

This takes 2n steps
Handle-pruning, Bottom-up Parsers
One implementation technique is the shift-reduce parser

	push INVALID
	token ← next_token( )
	repeat until (top of stack = Goal and token = EOF)
		if the top of the stack is a handle A → β
			then	// reduce β to A
				pop |β| symbols off the stack
				push A onto the stack
		else if (token ≠ EOF)
			then	// shift
				push token
				token ← next_token( )
		else	// need to shift, but out of input
			report an error

Figure 3.7 in EaC

How do errors show up?
• failure to find a handle
• hitting EOF & needing to shift (final else clause)
Either generates an error
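The shift-reduce loop above can be exercised with a small Python sketch that replays an explicit script of shift/reduce actions for x – 2 * y, checking at each reduce that the handle’s right-hand side really sits on top of the stack. The production numbering (1–9) is an assumption: the notes refer to rules by number, but the grammar figure did not survive conversion, so the classic EaC expression grammar is used here; the function names are hypothetical.

```python
# Classic expression grammar, numbered 1-9 (an assumption; the grammar
# figure from the slide is not reproduced in these notes).
PRODUCTIONS = {
    1: ("Goal",   ["Expr"]),
    2: ("Expr",   ["Expr", "+", "Term"]),
    3: ("Expr",   ["Expr", "-", "Term"]),
    4: ("Expr",   ["Term"]),
    5: ("Term",   ["Term", "*", "Factor"]),
    6: ("Term",   ["Term", "/", "Factor"]),
    7: ("Term",   ["Factor"]),
    8: ("Factor", ["num"]),
    9: ("Factor", ["id"]),
}

def replay(tokens, script):
    """Run a scripted shift-reduce parse: 'shift' pushes the next token;
    a production number pops that rule's rhs (the handle) and pushes its lhs."""
    stack, pos = [], 0
    for act in script:
        if act == "shift":
            stack.append(tokens[pos])
            pos += 1
        else:
            lhs, rhs = PRODUCTIONS[act]
            assert stack[-len(rhs):] == rhs, "handle must be on top of stack"
            del stack[-len(rhs):]
            stack.append(lhs)
    return stack

# 5 shifts + 9 reductions, the action column of the trace for x - 2 * y
script = ["shift", 9, 7, 4, "shift", "shift", 8, 7,
          "shift", "shift", 9, 5, 3, 1]
print(replay(["id", "-", "num", "*", "id"], script))  # ['Goal']
```

If any reduce in the script were attempted when its rhs was not the top of the stack, the assertion would fire; the script succeeds exactly because each step names the unique handle.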
Back to x – 2 * y

	Stack                   Input           Handle   Action
	$                       id – num * id   none     shift
	$ id                    – num * id      9,1      red. 9
	$ Factor                – num * id      7,1      red. 7
	$ Term                  – num * id      4,1      red. 4
	$ Expr                  – num * id      none     shift
	$ Expr –                num * id        none     shift
	$ Expr – num            * id            8,3      red. 8
	$ Expr – Factor         * id            7,3      red. 7
	$ Expr – Term           * id            none     shift
	$ Expr – Term *         id              none     shift
	$ Expr – Term * id                      9,5      red. 9
	$ Expr – Term * Factor                  5,5      red. 5
	$ Expr – Term                           3,3      red. 3
	$ Expr                                  1,1      red. 1
	$ Goal                                  none     accept

1. Shift until the top of the stack is the right end of a handle
2. Find the left end of the handle & reduce
5 shifts + 9 reduces + 1 accept
Example

(Parse tree for x – 2 * y, built bottom-up: Goal at the root over
Expr – Term; <id,x> reduces through Fact. and Term to the left Expr;
the right Term covers Term * Fact., with <num,2> and <id,y> at the leaves.)
Shift-reduce Parsing

Shift-reduce parsers are easily built and easily understood

A shift-reduce parser has just four actions
• Shift — next word is shifted onto the stack
• Reduce — right end of handle is at top of stack
	Locate left end of handle within the stack
	Pop handle off stack & push appropriate lhs
• Accept — stop parsing & report success
• Error — call an error reporting/recovery routine

Accept & Error are simple
Shift is just a push and a call to the scanner
Reduce takes |rhs| pops & 1 push
If handle-finding requires state, put it in the stack ⇒ 2x work

Handle finding is key
• handle is on stack
• finite set of handles
⇒ use a DFA!
An Important Lesson about Handles

• To be a handle, a substring of a sentential form γ must have two properties:
	It must match the right-hand side β of some rule A → β
	There must be some rightmost derivation from the goal symbol that
	produces the sentential form γ with A → β as the last production applied

• We have seen that simply looking for right-hand sides that match
  strings is not good enough

• Critical Question: How can we know when we have found a handle
  without generating lots of different derivations?
	Answer: we use lookahead in the grammar along with tables
	produced as the result of analyzing the grammar.
	o There are a number of different ways to do this.
	o We will look at two: operator precedence and LR parsing
LR(1) Parsers

• LR(1) parsers are table-driven, shift-reduce parsers that use a
  limited right context (1 token) for handle recognition
• LR(1) parsers recognize languages that have an LR(1) grammar

Informal definition:
A grammar is LR(1) if, given a rightmost derivation
	S = γ0 ⇒ γ1 ⇒ γ2 ⇒ … ⇒ γn–1 ⇒ γn = sentence
we can
1. isolate the handle of each right-sentential form γi, and
2. determine the production by which to reduce,
by scanning γi from left to right, going at most 1 symbol beyond
the right end of the handle of γi
LR(1) Parsers
A table-driven LR(1) parser looks like

	source code → Scanner → Table-driven Parser → IR
	grammar → Parser Generator → ACTION & GOTO Tables
	(the parser consults the ACTION & GOTO tables)

Tables can be built by hand
However, this is a perfect task to automate
LR(1) Skeleton Parser
	stack.push(INVALID); stack.push(s0);
	not_found = true;
	token = scanner.next_token();
	do while (not_found) {
		s = stack.top();
		if ( ACTION[s,token] == “reduce A → β” ) then {
			stack.popnum(2*|β|);	// pop 2*|β| symbols
			s = stack.top();
			stack.push(A);
			stack.push(GOTO[s,A]);
		}
		else if ( ACTION[s,token] == “shift si” ) then {
			stack.push(token); stack.push(si);
			token ← scanner.next_token();
		}
		else if ( ACTION[s,token] == “accept” & token == EOF )
			then not_found = false;
		else report a syntax error and recover;
	}
	report success;
The skeleton parser
• uses ACTION & GOTO tables
• does |words| shifts
• does |derivation| reductions
• does 1 accept
• detects errors by failure of the 3 other cases
To make a parser for L(G), need a set of tables
The grammar
The tables
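As a concrete illustration, here is a hypothetical Python rendering of the skeleton parser, driven by hand-built ACTION/GOTO tables for the left-recursive SheepNoise grammar (Goal → SN; SN → SN baa | baa). The tables themselves are an assumption, reconstructed from the canonical collection worked out later in these notes (states s0–s3); for simplicity this version stacks only states, not symbol/state pairs.

```python
# Hand-built LR(1) tables for SheepNoise (an assumption reconstructed
# from the canonical collection: s0 --baa--> s2, s0 --SN--> s1,
# s1 --baa--> s3; s2 and s3 are reduce states).
ACTION = {
    (0, "baa"): ("shift", 2),
    (1, "baa"): ("shift", 3),
    (1, "EOF"): ("accept",),
    (2, "baa"): ("reduce", "SN", 1),   # SN -> baa
    (2, "EOF"): ("reduce", "SN", 1),
    (3, "baa"): ("reduce", "SN", 2),   # SN -> SN baa
    (3, "EOF"): ("reduce", "SN", 2),
}
GOTO = {(0, "SN"): 1}

def parse(words):
    stack = [0]                        # state stack; s0 on top
    toks = list(words) + ["EOF"]
    i = 0
    while True:
        entry = ACTION.get((stack[-1], toks[i]))
        if entry is None:
            return False               # syntax error
        if entry[0] == "shift":
            stack.append(entry[1])
            i += 1
        elif entry[0] == "reduce":
            _, lhs, rhslen = entry
            del stack[-rhslen:]        # pop |rhs| states
            stack.append(GOTO[(stack[-1], lhs)])
        else:
            return True                # accept

print(parse(["baa", "baa"]))   # True
print(parse([]))               # False
```

The driver mirrors the skeleton above: shift pushes a state, reduce pops |rhs| states and pushes GOTO[top, lhs], and any missing table entry is a syntax error.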
LR(1) Parsers (parse tables)
Example Parse 1: the string “baa”

Example Parse 2: the string “baa baa”
LR(1) Parsers

How does this LR(1) stuff work?
• Unambiguous grammar ⇒ unique rightmost derivation
• Keep upper fringe on a stack
	All active handles include top of stack (TOS)
	Shift inputs until TOS is right end of a handle
• Language of handles is regular (finite)
	Build a handle-recognizing DFA
	ACTION & GOTO tables encode the DFA
• To match subterm, invoke subterm DFA & leave old DFA’s state on stack
• Final state in DFA ⇒ a reduce action
	New state is GOTO[state at TOS (after pop), lhs]
	For SN, this takes the DFA to s1
(Control DFA for SN: s0 goes to s2 on baa and to s1 on SN;
s1 goes to s3 on baa; s2 and s3 trigger reduce actions.)
Building LR(1) Parsers

How do we generate the ACTION and GOTO tables?
• Use the grammar to build a model of the DFA
• Use the model to build ACTION & GOTO tables
• If construction succeeds, the grammar is LR(1)

The Big Picture
• Model the state of the parser
• Use two functions goto( s, X ) and closure( s )
	X is a terminal or non-terminal
	goto() is analogous to move() in the subset construction
	closure() adds information to round out a state
• Build up the states and transition functions of the DFA
• Use this information to fill in the ACTION and GOTO tables
LR(k) items

The LR(1) table construction algorithm uses LR(1) items to
represent valid configurations of an LR(1) parser

An LR(k) item is a pair [P, δ], where
	P is a production A → β with a • at some position in the rhs
	δ is a lookahead string of length ≤ k (words or EOF)

The • in an item indicates the position of the top of the stack

[A → •βγ, a] means that the input seen so far is consistent with the use
of A → βγ immediately after the symbol on top of the stack

[A → β•γ, a] means that the input seen so far is consistent with the use
of A → βγ at this point in the parse, and that the parser has already
recognized β

[A → βγ•, a] means that the parser has seen βγ, and that a lookahead
symbol of a is consistent with reducing to A
LR(1) Items

The production A → β, where β = B1B2B3, with lookahead a,
can give rise to 4 items

	[A → •B1B2B3, a], [A → B1•B2B3, a], [A → B1B2•B3, a], & [A → B1B2B3•, a]

The set of LR(1) items for a grammar is finite

What’s the point of all these lookahead symbols?
• Carry them along to choose the correct reduction, if there is a choice
• Lookaheads are bookkeeping, unless item has • at right end
	Has no direct use in [A → β•γ, a]
	In [A → β•, a], a lookahead of a implies a reduction by A → β
	For { [A → β•, a], [B → γ•δ, b] }, a ⇒ reduce to A; FIRST(δ) ⇒ shift
⇒ Limited right context is enough to pick the actions
LR(1) Table Construction

High-level overview
1. Build the canonical collection of sets of LR(1) items, I
	a. Begin in an appropriate state, s0
		[S’ → •S, EOF], along with any equivalent items
		Derive equivalent items as closure( s0 )
	b. Repeatedly compute, for each sk, and each X, goto(sk, X)
		If the set is not already in the collection, add it
		Record all the transitions created by goto( )
	This eventually reaches a fixed point
2. Fill in the table from the collection of sets of LR(1) items

The canonical collection completely encodes the
transition diagram for the handle-finding DFA
Back to Finding Handles
Revisiting an issue from last class

Parser in a state where the stack (the fringe) was
	Expr – Term
with lookahead of *

How did it choose to expand Term rather than reduce to Expr?
• Lookahead symbol is the key
• With lookahead of + or –, parser should reduce to Expr
• With lookahead of * or /, parser should shift
• Parser uses lookahead to decide
• All this context from the grammar is encoded in the
  handle-recognizing mechanism
Back to x – 2 * y
	Stack                   Input           Handle   Action
	$                       id – num * id   none     shift
	$ id                    – num * id      9,1      red. 9
	$ Factor                – num * id      7,1      red. 7
	$ Term                  – num * id      4,1      red. 4
	$ Expr                  – num * id      none     shift
	$ Expr –                num * id        none     shift
	$ Expr – num            * id            8,3      red. 8
	$ Expr – Factor         * id            7,3      red. 7
	$ Expr – Term           * id            none     shift    ← shift here
	$ Expr – Term *         id              none     shift
	$ Expr – Term * id                      9,5      red. 9
	$ Expr – Term * Factor                  5,5      red. 5   ← reduce here
	$ Expr – Term                           3,3      red. 3
	$ Expr                                  1,1      red. 1
	$ Goal                                  none     accept

1. Shift until TOS is the right end of a handle
2. Find the left end of the handle & reduce
Computing FIRST Sets

Define FIRST as
• If α ⇒* aβ, a ∈ T, β ∈ (T ∪ NT)*, then a ∈ FIRST(α)
• If α ⇒* ε, then ε ∈ FIRST(α)

Note: if α = Xβ, FIRST(α) = FIRST(X) (when ε ∉ FIRST(X))

To compute FIRST
• Use a fixed-point method
• FIRST(A) ∈ 2^(T ∪ {ε})
• Loop is monotonic
⇒ Algorithm halts

Computation of FIRST uses FOLLOW, so build FOLLOW sets before FIRST sets
Computing FOLLOW Sets

	FOLLOW(S) ← { EOF }
	for each A ∈ NT, FOLLOW(A) ← Ø

	while (FOLLOW sets are still changing)
		for each p ∈ P, of the form A → β1β2 … βk
			FOLLOW(βk) ← FOLLOW(βk) ∪ FOLLOW(A)
			TRAILER ← FOLLOW(A)
			for i ← k down to 2
				if ε ∈ FIRST(βi) then
					FOLLOW(βi–1) ← FOLLOW(βi–1) ∪ ( FIRST(βi) – { ε } ) ∪ TRAILER
				else
					FOLLOW(βi–1) ← FOLLOW(βi–1) ∪ FIRST(βi)
					TRAILER ← Ø

For SheepNoise:
	FOLLOW(Goal) = { EOF }
	FOLLOW(SN) = { baa, EOF }
Computing FIRST Sets

	for each x ∈ T, FIRST(x) ← { x }
	for each A ∈ NT, FIRST(A) ← Ø

	while (FIRST sets are still changing)
		for each p ∈ P, of the form A → β,
			if β is ε then
				FIRST(A) ← FIRST(A) ∪ { ε }
			else if β is B1B2…Bk then begin
				FIRST(A) ← FIRST(A) ∪ ( FIRST(B1) – { ε } )
				for i ← 1 to k–1 by 1 while ε ∈ FIRST(Bi)
					FIRST(A) ← FIRST(A) ∪ ( FIRST(Bi+1) – { ε } )
				if i = k–1 and ε ∈ FIRST(Bk)
					then FIRST(A) ← FIRST(A) ∪ { ε }
			end

	for each A ∈ NT
		if ε ∈ FIRST(A) then
			FIRST(A) ← FIRST(A) ∪ FOLLOW(A)

For SheepNoise:
	FIRST(Goal) = { baa }	(on Goal → SN)
	FIRST(SN) = { baa }	(on SN → baa)
	FIRST(baa) = { baa }
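Both fixed-point computations can be sketched for SheepNoise in a few lines of Python. The grammar encoding and helper names are my own; this version computes the plain FIRST and FOLLOW sets, without the final FOLLOW-augmentation step shown above (which changes nothing here, since no SheepNoise symbol derives ε).

```python
# SheepNoise: Goal -> SN ; SN -> SN baa | baa (encoding is mine).
GRAMMAR = [("Goal", ["SN"]), ("SN", ["SN", "baa"]), ("SN", ["baa"])]
NT = {"Goal", "SN"}
EPS = "eps"

def first_sets():
    first = {s: {s} for _, rhs in GRAMMAR for s in rhs if s not in NT}
    for A in NT:
        first[A] = set()
    changed = True
    while changed:                      # monotonic loop => halts
        changed = False
        for A, rhs in GRAMMAR:
            add = set()
            for B in rhs:
                add |= first[B] - {EPS}
                if EPS not in first[B]:
                    break
            else:                       # every rhs symbol can vanish
                add.add(EPS)
            if not add <= first[A]:
                first[A] |= add
                changed = True
    return first

def follow_sets(first, start="Goal"):
    follow = {A: set() for A in NT}
    follow[start].add("EOF")
    changed = True
    while changed:
        changed = False
        for A, rhs in GRAMMAR:
            trailer = set(follow[A])    # what can follow the rhs tail
            for B in reversed(rhs):
                if B in NT and not trailer <= follow[B]:
                    follow[B] |= trailer
                    changed = True
                fb = first[B] if B in NT else {B}
                trailer = (trailer | (fb - {EPS})) if EPS in fb else fb - {EPS}
    return follow

first = first_sets()
follow = follow_sets(first)
print(sorted(first["SN"]), sorted(first["Goal"]))    # ['baa'] ['baa']
print(sorted(follow["SN"]), sorted(follow["Goal"]))  # ['EOF', 'baa'] ['EOF']
```

The results match the SheepNoise sets listed on the slides.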
Computing Closures

Closure(s) adds all the items implied by items already in s
• Any item [A → β•Bδ, a] implies [B → •τ, x] for each production
  with B on the lhs, and each x ∈ FIRST(δa)
• Since B is valid, any way to derive B is valid, too

The algorithm

	Closure( s )
		while ( s is still changing )
			∀ items [A → β•Bδ, a] ∈ s
				∀ productions B → τ ∈ P
					∀ b ∈ FIRST(δa)	// δ might be ε
						if [B → •τ, b] ∉ s
							then add [B → •τ, b] to s

Classic fixed-point method
Halts because s ⊆ ITEMS
Worklist version is faster
Closure “fills out” a state
Example From SheepNoise

Initial step builds the item [Goal → •SheepNoise, EOF]
and takes its closure( )

Closure( [Goal → •SheepNoise, EOF] )

	Item                                     From
	[Goal → •SheepNoise, EOF]                Original item
	[SheepNoise → •SheepNoise baa, EOF]      1, a is EOF
	[SheepNoise → •baa, EOF]                 1, a is EOF
	[SheepNoise → •SheepNoise baa, baa]      2, δa is baa EOF
	[SheepNoise → •baa, baa]                 2, δa is baa EOF

So, S0 is
	{ [Goal → •SheepNoise, EOF], [SheepNoise → •SheepNoise baa, EOF],
	  [SheepNoise → •baa, EOF], [SheepNoise → •SheepNoise baa, baa],
	  [SheepNoise → •baa, baa] }

Remember, this is the left-recursive SheepNoise; EaC shows the right-recursive version.
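The closure computation can be sketched directly in Python over the left-recursive SheepNoise grammar. The item encoding (lhs, rhs, dot position, lookahead) is my own; FIRST is hard-coded as a one-liner, which is adequate here because no SheepNoise symbol derives the empty string.

```python
# SheepNoise, left-recursive form; items are (lhs, rhs, dot, lookahead).
GRAMMAR = [("Goal", ("SN",)), ("SN", ("SN", "baa")), ("SN", ("baa",))]
NT = {"Goal", "SN"}

def first_of(symbols):
    # FIRST of a non-empty symbol string; no eps in this grammar, so
    # only the leading symbol matters, and FIRST(SN) = {baa}.
    s = symbols[0]
    return {"baa"} if s in NT else {s}

def closure(items):
    items = set(items)
    changed = True
    while changed:                      # classic fixed-point loop
        changed = False
        for (lhs, rhs, dot, la) in list(items):
            if dot < len(rhs) and rhs[dot] in NT:
                # [A -> beta . B delta, a] implies [B -> . tau, b]
                # for each b in FIRST(delta a)
                for b in first_of(rhs[dot + 1:] + (la,)):
                    for (A, beta) in GRAMMAR:
                        new = (A, beta, 0, b)
                        if A == rhs[dot] and new not in items:
                            items.add(new)
                            changed = True
    return frozenset(items)

s0 = closure({("Goal", ("SN",), 0, "EOF")})
print(len(s0))  # 5, matching the five items of S0 listed above
```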
Computing Gotos

Goto(s, x) computes the state that the parser would reach
if it recognized an x while in state s
• Goto( { [A → β•Xδ, a] }, X ) produces [A → βX•δ, a]	(obviously)
• It also includes closure( [A → βX•δ, a] ) to fill out the state

The algorithm

	Goto( s, X )
		new ← Ø
		∀ items [A → β•Xδ, a] ∈ s
			new ← new ∪ { [A → βX•δ, a] }
		return closure(new)

Not a fixed-point method!
Straightforward computation
Uses closure( )
Goto() moves forward
Example from SheepNoise

S0 is { [Goal → •SheepNoise, EOF], [SheepNoise → •SheepNoise baa, EOF],
	[SheepNoise → •baa, EOF], [SheepNoise → •SheepNoise baa, baa],
	[SheepNoise → •baa, baa] }

Goto( S0, baa )
• Loop produces
	Item                            From
	[SheepNoise → baa•, EOF]        Item 3 in S0
	[SheepNoise → baa•, baa]        Item 5 in S0
• Closure adds nothing since • is at the end of the rhs in each item

In the construction, this produces s2
	{ [SheepNoise → baa•, {EOF, baa}] }

New, but obvious, notation for the two distinct items
	[SheepNoise → baa•, EOF] & [SheepNoise → baa•, baa]
Example from SheepNoise

S0: { [Goal → •SheepNoise, EOF], [SheepNoise → •SheepNoise baa, EOF],
	[SheepNoise → •baa, EOF], [SheepNoise → •SheepNoise baa, baa],
	[SheepNoise → •baa, baa] }

S1 = Goto(S0, SheepNoise) =
	{ [Goal → SheepNoise•, EOF], [SheepNoise → SheepNoise •baa, EOF],
	  [SheepNoise → SheepNoise •baa, baa] }

S2 = Goto(S0, baa) = { [SheepNoise → baa•, EOF], [SheepNoise → baa•, baa] }

S3 = Goto(S1, baa) = { [SheepNoise → SheepNoise baa•, EOF],
	[SheepNoise → SheepNoise baa•, baa] }
Building the Canonical Collection

Start from s0 = closure( [S’ → •S, EOF] )
Repeatedly construct new states, until all are found

The algorithm

	s0 ← closure( [S’ → •S, EOF] )
	S ← { s0 }
	k ← 1
	while ( S is still changing )
		∀ sj ∈ S and ∀ x ∈ ( T ∪ NT )
			sk ← goto(sj, x)
			record sj → sk on x
			if sk ∉ S then
				S ← S ∪ { sk }
				k ← k + 1

Fixed-point computation
Loop adds to S
S ⊆ 2^ITEMS, so S is finite
Worklist version is faster
Example from SheepNoise

Starts with S0
S0: { [Goal → •SheepNoise, EOF], [SheepNoise → •SheepNoise baa, EOF],
	[SheepNoise → •baa, EOF], [SheepNoise → •SheepNoise baa, baa],
	[SheepNoise → •baa, baa] }

Iteration 1 computes
S1 = Goto(S0, SheepNoise) =
	{ [Goal → SheepNoise•, EOF], [SheepNoise → SheepNoise •baa, EOF],
	  [SheepNoise → SheepNoise •baa, baa] }
S2 = Goto(S0, baa) = { [SheepNoise → baa•, EOF], [SheepNoise → baa•, baa] }

Iteration 2 computes
S3 = Goto(S1, baa) = { [SheepNoise → SheepNoise baa•, EOF],
	[SheepNoise → SheepNoise baa•, baa] }

Nothing more to compute, since • is at the end of every item in S3.
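The whole construction can be sketched end to end in Python for SheepNoise: closure(), goto(), and a worklist loop over the collection. The item encoding (lhs, rhs, dot, lookahead) and all names are my own; the hard-coded FIRST is valid only because no SheepNoise symbol derives ε.

```python
# SheepNoise, left-recursive form; items are (lhs, rhs, dot, lookahead).
GRAMMAR = [("Goal", ("SN",)), ("SN", ("SN", "baa")), ("SN", ("baa",))]
NT = {"Goal", "SN"}

def closure(items):
    items = set(items)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot, la) in list(items):
            if dot < len(rhs) and rhs[dot] in NT:
                nxt = (rhs[dot + 1:] + (la,))[0]
                first = {"baa"} if nxt in NT else {nxt}  # FIRST(delta a)
                for b in first:
                    for (A, beta) in GRAMMAR:
                        if A == rhs[dot] and (A, beta, 0, b) not in items:
                            items.add((A, beta, 0, b))
                            changed = True
    return frozenset(items)

def goto(s, X):
    # Advance the dot over X in every item that expects X, then close.
    moved = {(A, rhs, d + 1, la) for (A, rhs, d, la) in s
             if d < len(rhs) and rhs[d] == X}
    return closure(moved) if moved else None

# Worklist loop: apply goto until no new states appear.
states = [closure({("Goal", ("SN",), 0, "EOF")})]
work = list(states)
while work:
    s = work.pop()
    for X in ("baa", "SN", "Goal"):
        t = goto(s, X)
        if t is not None and t not in states:
            states.append(t)
            work.append(t)

print(len(states))  # 4: the states S0-S3 above
```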
Example (grammar & sets)

Simplified, right-recursive expression grammar

	Goal → Expr
	Expr → Term – Expr
	Expr → Term
	Term → Factor * Term
	Term → Factor
	Factor → ident
Example (building the collection)

Initialization Step
s0 ← closure( { [Goal → •Expr, EOF] } )
	{ [Goal → •Expr, EOF], [Expr → •Term – Expr, EOF], [Expr → •Term, EOF],
	  [Term → •Factor * Term, EOF], [Term → •Factor * Term, –],
	  [Term → •Factor, EOF], [Term → •Factor, –],
	  [Factor → •ident, EOF], [Factor → •ident, –], [Factor → •ident, *] }
S ← { s0 }

Iteration 1
	s1 ← goto(s0, Expr)
	s2 ← goto(s0, Term)
	s3 ← goto(s0, Factor)
	s4 ← goto(s0, ident)

Iteration 2
	s5 ← goto(s2, – )
	s6 ← goto(s3, * )

Iteration 3
	s7 ← goto(s5, Expr)
	s8 ← goto(s6, Term)
Example (Summary)

S0: { [Goal → •Expr, EOF], [Expr → •Term – Expr, EOF], [Expr → •Term, EOF],
	[Term → •Factor * Term, EOF], [Term → •Factor * Term, –],
	[Term → •Factor, EOF], [Term → •Factor, –],
	[Factor → •ident, EOF], [Factor → •ident, –], [Factor → •ident, *] }

S1: { [Goal → Expr•, EOF] }

S2: { [Expr → Term •– Expr, EOF], [Expr → Term•, EOF] }

S3: { [Term → Factor •* Term, EOF], [Term → Factor •* Term, –],
	[Term → Factor•, EOF], [Term → Factor•, –] }

S4: { [Factor → ident•, EOF], [Factor → ident•, –], [Factor → ident•, *] }

S5: { [Expr → Term – •Expr, EOF], [Expr → •Term – Expr, EOF], [Expr → •Term, EOF],
	[Term → •Factor * Term, –], [Term → •Factor, –],
	[Term → •Factor * Term, EOF], [Term → •Factor, EOF],
	[Factor → •ident, *], [Factor → •ident, –], [Factor → •ident, EOF] }
Example (Summary)

S6: { [Term → Factor * •Term, EOF], [Term → Factor * •Term, –],
	[Term → •Factor * Term, EOF], [Term → •Factor * Term, –],
	[Term → •Factor, EOF], [Term → •Factor, –],
	[Factor → •ident, EOF], [Factor → •ident, –], [Factor → •ident, *] }

S7: { [Expr → Term – Expr•, EOF] }

S8: { [Term → Factor * Term•, EOF], [Term → Factor * Term•, –] }
Example (Summary)
The Goto Relationship (from the construction)
	State   Expr   Term   Factor    –     *    ident
	  0       1      2      3                    4
	  1
	  2                               5
	  3                                     6
	  4
	  5       7      2      3                    4
	  6              8      3                    4
	  7
	  8
Filling in the ACTION and GOTO Tables

The algorithm

	∀ set sx ∈ S
		∀ item i ∈ sx
			if i is [A → β•aγ, b] and goto(sx, a) = sk, a ∈ T
				then ACTION[x, a] ← “shift k”
			else if i is [S’ → S•, EOF]
				then ACTION[x, EOF] ← “accept”
			else if i is [A → β•, a]
				then ACTION[x, a] ← “reduce A → β”
		∀ n ∈ NT
			if goto(sx, n) = sk
				then GOTO[x, n] ← k

	x is the state number

Many items generate no table entry
	Closure( ) instantiates FIRST(X) directly for [A → β•Xδ, a]
Example (Filling in the tables)
The algorithm produces the following table
	                ACTION                          GOTO
	State   ident    –      *      EOF      Expr   Term   Factor
	  0      s 4                              1      2      3
	  1                            acc
	  2             s 5            r 3
	  3             r 5    s 6     r 5
	  4             r 6    r 6     r 6
	  5      s 4                              7      2      3
	  6      s 4                                     8      3
	  7                            r 2
	  8             r 4            r 4
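The table above can be checked mechanically. This hypothetical Python driver transcribes it directly (production lengths follow the six-rule right-recursive grammar given earlier, numbered in order of appearance) and runs the skeleton parser's shift/reduce/accept loop on ident – ident * ident.

```python
# Productions of the right-recursive grammar, (lhs, |rhs|), numbered
# in order: 1 Goal->Expr, 2 Expr->Term-Expr, 3 Expr->Term,
# 4 Term->Factor*Term, 5 Term->Factor, 6 Factor->ident.
RULES = {1: ("Goal", 1), 2: ("Expr", 3), 3: ("Expr", 1),
         4: ("Term", 3), 5: ("Term", 1), 6: ("Factor", 1)}

# ACTION/GOTO transcribed from the table above.
ACTION = {
    (0, "ident"): "s4",
    (1, "EOF"): "acc",
    (2, "-"): "s5", (2, "EOF"): "r3",
    (3, "-"): "r5", (3, "*"): "s6", (3, "EOF"): "r5",
    (4, "-"): "r6", (4, "*"): "r6", (4, "EOF"): "r6",
    (5, "ident"): "s4",
    (6, "ident"): "s4",
    (7, "EOF"): "r2",
    (8, "-"): "r4", (8, "EOF"): "r4",
}
GOTO = {(0, "Expr"): 1, (0, "Term"): 2, (0, "Factor"): 3,
        (5, "Expr"): 7, (5, "Term"): 2, (5, "Factor"): 3,
        (6, "Term"): 8, (6, "Factor"): 3}

def parse(words):
    stack, toks, i = [0], list(words) + ["EOF"], 0
    while True:
        act = ACTION.get((stack[-1], toks[i]))
        if act is None:
            return False                    # syntax error
        if act == "acc":
            return True
        if act[0] == "s":                   # shift to state act[1:]
            stack.append(int(act[1:]))
            i += 1
        else:                               # reduce by rule act[1:]
            lhs, rhslen = RULES[int(act[1:])]
            del stack[-rhslen:]             # pop |rhs| states
            stack.append(GOTO[(stack[-1], lhs)])

print(parse(["ident", "-", "ident", "*", "ident"]))  # True
print(parse(["ident", "ident"]))                     # False
```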
What can go wrong?

What if set s contains [A → β•aγ, b] and [B → β•, a] ?
• First item generates “shift”, second generates “reduce”
• Both define ACTION[s,a] — cannot do both actions
• This is a fundamental ambiguity, called a shift/reduce conflict
• Modify the grammar to eliminate it	(if-then-else)
• Shifting will often resolve it correctly

What if set s contains [A → γ•, a] and [B → δ•, a] ?
• Each generates “reduce”, but with a different production
• Both define ACTION[s,a] — cannot do both reductions
• This is a fundamental ambiguity, called a reduce/reduce conflict
• Modify the grammar to eliminate it	(PL/I’s overloading of (...))

In either case, the grammar is not LR(1)
Shrinking the Tables

Three options:
• Combine terminals such as number & identifier, + & –, * & /
	Directly removes a column, may remove a row
	For the expression grammar, 198 (vs. 384) table entries
• Combine rows or columns
	Implement identical rows once & remap states
	Requires extra indirection on each lookup
	Use separate mapping for ACTION & for GOTO
• Use another construction algorithm
	Both LALR(1) and SLR(1) produce smaller tables
	Implementations are readily available
LR(k) versus LL(k)	(Top-down Recursive Descent)

Finding Reductions
LR(k)	Each reduction in the parse is detectable with
	 the complete left context,
	 the reducible phrase, itself, and
	 the k terminal symbols to its right
LL(k)	Parser must select the expansion based on
	 the complete left context, and
	 the next k terminals
Thus, LR(k) examines more context

“… in practice, programming languages do not actually seem to fall
in the gap between LL(1) languages and deterministic languages”
	J.J. Horning, “LR Grammars and Analysers”, in Compiler
	Construction: An Advanced Course, Springer-Verlag, 1976
Summary

	                Advantages                 Disadvantages
	Top-down        Fast                       Hand-coded
	recursive       Good locality              High maintenance
	descent         Simplicity                 Right associativity
	                Good error detection

	LR(1)           Fast                       Large working sets
	                Deterministic langs.       Poor error messages
	                Automatable                Large table sizes
	                Left associativity