CS8602 Notes Compiler Design
UNIT -1
1.2 Preprocessor
A preprocessor produces input to compilers. It may perform the following functions:
1. Macro processing: A preprocessor may allow a user to define macros that are short
hands for longer constructs.
2. File inclusion: A preprocessor may include header files into the program text.
3. Rational preprocessor: these preprocessors augment older languages with more
modern flow-of-control and data structuring facilities.
4. Language extensions: These preprocessors attempt to add capabilities to the language
by means of built-in macros.
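For instance, the first two functions above (macro processing and file inclusion) are exactly what the C preprocessor does. A minimal illustration (the MAX macro and the program are illustrative, not from the notes):

/* file inclusion: the preprocessor textually inserts the header */
#include <stdio.h>

/* macro processing: a short-hand for a longer construct */
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void) {
    printf("%d\n", MAX(3, 7));   /* expands to ((3) > (7) ? (3) : (7)) */
    return 0;
}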
1.3 COMPILER
A compiler is a translator program that takes a program written in a high-level language
(HLL), the source program, and translates it into an equivalent program in a machine-level
language (MLL), the target program. An important part of a compiler is reporting errors in
the source program to the programmer.
(Figure: source program → compiler → target program, with error messages reported to the programmer.)
Executing a program written in an HLL programming language basically consists of two parts:
the source program is first compiled (translated) into an object program, and the resulting
object program is then loaded into memory and executed.
Languages such as BASIC, SNOBOL, LISP can be translated using interpreters. JAVA also
uses an interpreter. The process of interpretation can be carried out in the following phases.
1. Lexical analysis
2. Syntax analysis
3. Semantic analysis
4. Direct Execution
Disadvantages:
The execution of the program is slower.
Memory consumption is more.
"A loader is a program that places programs into memory and prepares them for
execution." It would be more efficient if subroutines could be translated into object form that
the loader could "relocate" directly behind the user's program. The task of adjusting programs so
they may be placed in arbitrary core locations is called relocation. Relocating loaders
perform four functions.
1.6 TRANSLATOR
A translator is a program that takes as input a program written in one language and
produces as output a program in another language. Besides program translation, the translator
performs another very important role: error detection. Any violation of the HLL
specification is detected and reported to the programmer. Important types of translators
are:
INTERPRETER
COMPILER
PREPROCESSOR
1.9 STRUCTURE OF THE COMPILER DESIGN
PHASES OF A COMPILER
Symbol Table Management:-
This is the portion that keeps the names used by the program and records
essential information about each. The data structure used to record this information is called
a 'Symbol Table'.
Error Handlers:-
It is invoked when a flaw (error) in the source program is detected.
The output of the LA is a stream of tokens, which is passed to the next phase, the
syntax analyzer or parser. The SA groups the tokens together into syntactic structures called
expressions. Expressions may further be combined to form statements. The syntactic
structure can be regarded as a tree whose leaves are the tokens; such trees are called parse trees.
The parser has two functions. It checks whether the tokens from the lexical analyzer
occur in patterns that are permitted by the specification for the source language. It also
imposes on the tokens a tree-like structure that is used by the subsequent phases of the compiler.
For example, if a program contains the expression A+/B, then after lexical analysis this
expression might appear to the syntax analyzer as the token sequence id+/id. On seeing the /,
the syntax analyzer should detect an error situation, because the presence of these two
adjacent binary operators violates the formation rules of an expression.
The purpose of syntax analysis is to make explicit the hierarchical structure of the incoming
token stream by identifying which parts of the token stream should be grouped.
Code Optimization :-
This is an optional phase designed to improve the intermediate code so that the
output runs faster and takes less space. Its output is another intermediate-code program that
does the same job as the original, but in a way that saves time and/or space.
1, Local Optimization:-
There are local transformations that can be applied to a program to
make an improvement. For example,
If A > B goto L2
Goto L3
L2 :
This can be replaced by the single statement
If A <= B goto L3
Another important local optimization is the elimination of common
sub-expressions
A := B + C + D
E := B + C + F
Might be evaluated as
T1 := B + C
A := T1 + D
E := T1 + F
This takes advantage of the common sub-expression B + C.
2, Loop Optimization:-
Another important source of optimization concerns increasing
the speed of loops. A typical loop improvement is to move a
computation that produces the same result each time around the loop
to a point in the program just before the loop is entered, as illustrated below.
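A small sketch of this transformation in C (the functions and loop body are illustrative assumptions, not from the notes):

/* Before: limit - 2 is recomputed on every iteration */
int sum_upto(int limit) {
    int i = 0, s = 0;
    while (i <= limit - 2) {
        s += i;
        i++;
    }
    return s;
}

/* After code motion: the loop-invariant limit - 2 is computed once */
int sum_upto_opt(int limit) {
    int i = 0, s = 0;
    int t = limit - 2;        /* hoisted out of the loop */
    while (i <= t) {
        s += i;
        i++;
    }
    return s;
}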
Code generator :-
The code generator produces the object code by deciding on the memory locations for data,
selecting code to access each datum, and selecting the registers in which each computation is
to be done. Many computers have only a few high-speed registers in which computations can
be performed quickly. A good code generator would attempt to utilize registers as efficiently
as possible.
Table Management OR Book-keeping :-
A compiler needs to collect information about all the data objects that appear
in the source program. The information about data objects is collected by the early phases of
the compiler - the lexical and syntactic analyzers. The data structure used to record this
information is called a Symbol Table.
Error Handling :-
One of the most important functions of a compiler is the detection and
reporting of errors in the source program. The error messages should allow the programmer to
determine exactly where the errors have occurred. Errors may occur in any of the phases of a
compiler.
Example:
position := initial + rate * 60

Lexical Analyzer:
id1 := id2 + id3 * 60

Syntax Analyzer (syntax tree):

        :=
       /  \
     id1    +
          /   \
       id2     *
             /   \
          id3     60

Semantic Analyzer (inserts the integer-to-real conversion for 60):

        :=
       /  \
     id1    +
          /   \
       id2     *
             /   \
          id3   inttoreal
                   |
                   60

Code Optimizer:
temp1 := id3 * 60.0
id1 := id2 + temp1

Code Generator:
MOVF id3, R2
MULF #60.0, R2
MOVF id2, R1
ADDF R2, R1
MOVF R1, id1
1.10 TOKEN
The LA reads the source program one character at a time, carving the source program into
a sequence of atomic units called 'Tokens'. A token has two parts:
1, Type of the token.
2, Value of the token.
Type : variable, operator, keyword, constant
Value : name of the variable, value of the constant, (or) pointer to the symbol table entry.
If the symbols are given in the standard format, the LA accepts them and produces
tokens as output. Each token is a sub-string of the program that is to be treated as a single
unit. Tokens are of two types:
1, Specific strings such as IF (or) semicolon.
2, Classes of strings such as identifiers, labels, constants.
UNIT II- LEXICAL ANALYSIS
LEXICAL ANALYSIS
Lexical analysis is the process of converting a sequence of characters into a sequence of tokens. A
program or function which performs lexical analysis is called a lexical analyzer or scanner. A lexer often
exists as a single function which is called by a parser or another function.
(Figure: the lexical analyzer returns tokens to the parser on demand; both consult the symbol table.)
Upon receiving a “get next token” command from the parser, the lexical analyzer reads input characters
until it can identify the next token.
TOKENS
A token is a string of characters, categorized according to the rules as a symbol (e.g., IDENTIFIER,
NUMBER, COMMA). The process of forming tokens from an input stream of characters is called
tokenization.
A token can look like anything that is useful for processing an input text stream or text file. Consider this
expression in the C programming language: sum=3+2;
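A lexical analyzer would break this expression into the following lexeme/token pairs (the token-class names are typical choices, not fixed by these notes):

Lexeme   Token
------   ----------
sum      IDENTIFIER
=        ASSIGN_OP
3        NUMBER
+        ADD_OP
2        NUMBER
;        SEMICOLON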
PATTERN:
A pattern is a description of the form that the lexemes of a token may take.
In the case of a keyword as a token, the pattern is just the sequence of characters that
form the keyword. For identifiers and some other tokens, the pattern is a more complex structure
that is matched by many strings.
Some tokens have attributes that can be passed back to the parser. The lexical analyzer collects information
about tokens into their associated attributes. The attributes influence the translation of tokens.
i) Constant : value of the constant
ii) Identifiers: pointer to the corresponding symbol table entry.
INPUT BUFFERING
We often have to look one or more characters beyond the next lexeme before we can be sure we have the right lexeme. As
characters are read from left to right, each character is stored in the buffer to form a meaningful token as shown below:
(Figure: an input buffer holding the characters A = B + C, with a forward pointer scanning ahead of the lexeme being formed.)
We introduce a two-buffer scheme that handles large look aheads safely. We then consider an improvement involving "sentinels"
that saves time checking for the ends of buffers.
BUFFER PAIRS
(Figure: a pair of input buffers with the lexeme_beginning and forward pointers.)
Each buffer is of the same size N, and N is usually the number of characters on one disk block. E.g.,
1024 or 4096 bytes.
Using one system read command we can read N characters into a buffer.
If fewer than N characters remain in the input file, then a special character, represented
by eof, marks the end of the source file.
Two pointers to the input are maintained:
1. Pointer lexeme_beginning, marks the beginning of the current lexeme,
whose extent we are attempting to determine.
2. Pointer forward scans ahead until a pattern match is found.
Once the next lexeme is determined, forward is set to the character at its right
end.
The string of characters between the two pointers is the current lexeme.
After the lexeme is recorded as an attribute value of a token returned to the parser,
lexeme_beginning is set to the character immediately after the lexeme just found.
Advancing forward pointer requires that we first test whether we have reached the end of one of the buffers, and if so, we
must reload the other buffer from the input, and move forward to the beginning of the newly loaded buffer. If the end of
second buffer is reached, we must again reload the first buffer with input and the pointer wraps to the beginning of the
buffer.
SENTINELS
For each character read, we make two tests: one for the end of the buffer, and one to determine what character
is read. We can combine the buffer-end test with the test for the current character if we extend each buffer to
hold a sentinel character at the end.
The sentinel is a special character that cannot be part of the source program, and a natural choice is the
character eof.
The sentinel arrangement is as shown below:
(Figure: the buffer pair with a sentinel eof at the end of each half; lexeme_beginning and forward pointers as before.)
Note that eof retains its use as a marker for the end of the entire input. Any eof that appears other than
at the end of a buffer means that the input is at an end.
forward := forward + 1;
if forward↑ = eof then begin
    if forward at end of first half then begin
        reload second half;
        forward := forward + 1
    end
    else if forward at end of second half then begin
        reload first half;
        move forward to beginning of first half
    end
    else /* eof within a buffer signifying end of input */
        terminate lexical analysis
end
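A compact C sketch of this buffer-pair-with-sentinels scheme (the buffer layout, EOF_CHAR, and the fill_buffer() helper are assumptions for illustration; the initial fill and error handling after a reload are omitted):

#include <stdio.h>

#define N 4096               /* characters per buffer half */
#define EOF_CHAR '\0'        /* assumed sentinel; may not occur in source text */

/* buf[0..N-1] and buf[N+1..2N] hold the two halves;
   buf[N] and buf[2N+1] always hold the sentinel */
static char buf[2 * N + 2];
static char *forward = buf;  /* assumes the first half was filled before scanning */

extern void fill_buffer(char *half);  /* assumed: reads up to N chars + sentinel */

int next_char(void) {
    char c = *forward++;
    if (c == EOF_CHAR) {
        if (forward == buf + N + 1) {            /* hit end of first half */
            fill_buffer(buf + N + 1);            /* reload second half */
        } else if (forward == buf + 2 * N + 2) { /* hit end of second half */
            fill_buffer(buf);                    /* reload first half */
            forward = buf;                       /* wrap to the beginning */
        } else {
            return EOF;                          /* real end of input */
        }
        c = *forward++;                          /* first char of reloaded half */
    }
    return c;
}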
SPECIFICATION OF TOKENS
Operations on strings
1. A prefix of string s is any string obtained by removing zero or more symbols from the end of s.
For example, ban is a prefix of banana.
2. A suffix of string s is any string obtained by removing zero or more symbols from the beginning of s.
For example, nana is a suffix of banana.
3. A substring of s is any string obtained by deleting a prefix and a suffix from s.
For example, nan is a substring of banana.
4. The proper prefixes, suffixes, and substrings of a string s are those prefixes, suffixes, and substrings,
respectively, of s that are not ε and not equal to s itself.
5. A subsequence of s is any string formed by deleting zero or more not necessarily consecutive positions of s.
For example, baan is a subsequence of banana.
Operations on languages:
1. Union
2. Concatenation
3. Kleene closure
4. Positive closure
For example, let L = {0,1} and S = {a,b,c}. Then:
1. Union : L U S = {0,1,a,b,c}
2. Concatenation : L.S = {0a,1a,0b,1b,0c,1c}
3. Kleene closure : L* = { ε,0,1,00,…. }
4. Positive closure : L+ = {0,1,00,….}
Regular Expressions
1. ε is a regular expression, and L(ε) is { ε }, that is, the language whose sole member is the empty string.
2. If 'a' is a symbol in Σ, then 'a' is a regular expression, and L(a) = {a}, that is, the language with one string, of
length one, with 'a' in its one position.
3. Suppose r and s are regular expressions denoting the languages L(r) and L(s). Then,
a) (r)|(s) is a regular expression denoting L(r) ∪ L(s).
b) (r)(s) is a regular expression denoting L(r)L(s).
c) (r)* is a regular expression denoting (L(r))*.
d) (r) is a regular expression denoting L(r).
4. The unary operator * has highest precedence and is left associative.
5. Concatenation has second highest precedence and is left associative.
6. | has lowest precedence and is left associative.
Regular set
A language denoted by a regular expression is said to be a regular set.
There are a number of algebraic laws for regular expressions that can be used to manipulate them into
equivalent forms.
For instance, r|s = s|r is commutative; r|(s|t)=(r|s)|t is associative.
Regular Definitions
Giving names to regular expressions is referred to as a regular definition. A regular definition is a sequence of definitions of the form:
d1 → r1
d2 → r2
………
dn → rn
where each di is a distinct name and each ri is a regular expression over the symbols in Σ ∪ {d1, d2, …, di-1}.
Example: Identifiers are the set of strings of letters and digits beginning with a letter. The regular definition for this
set is:
letter → A | B | …. | Z | a | b | …. | z
digit → 0 | 1 | …. | 9
id → letter ( letter | digit ) *
Shorthands
1. One or more instances (+):
- If r is a regular expression that denotes the language L(r), then ( r )+ is a regular expression that denotes the
language (L(r))+
- Thus the regular expression a+ denotes the set of all strings of one or more a's.
- The operator + has the same precedence and associativity as the operator *.
2. Zero or one instance ( ?):
- If 'r' is a regular expression, then ( r )? is a regular expression that denotes the language
L( r ) U { ε }.
3. Character Classes:
- The notation [abc] where a, b and c are alphabet symbols denotes the regular expression a | b | c.
- We can describe identifiers as being strings generated by the regular expression, [A–Za–z][A–Za–z0–9]*
Non-regular Set
A language which cannot be described by any regular expression is a non-regular set. Example: The set of all strings of
balanced parentheses and repeating strings cannot be described by a regular expression. This set can be specified by a
context-free grammar.
RECOGNITION OF TOKENS
Consider the following grammar fragment:
stmt → if expr then stmt
| if expr then stmt else stmt
| ε
expr → term relop term
| term
term → id
| num
where the terminals if , then, else, relop, id and num generate sets of strings given by the following regular
definitions:
if → if
then → then
else → else
relop → <|<=|=|<>|>|>=
id → letter(letter|digit)*
num → digit+ (.digit+)?(E(+|-)?digit+)?
For this language fragment the lexical analyzer will recognize the keywords if, then, else, as well as the
lexemes denoted by relop, id, and num. To simplify matters, we assume keywords are reserved; that is, they cannot be
used as identifiers.
Transition diagrams
It is a diagrammatic representation to depict the action that will take place when a lexical analyzer is called
by the parser to get the next token. It is used to keep track of information about the characters that are seen as the forward
pointer scans the input.
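As a sketch of how such a transition diagram is turned into code, here is a C recognizer for relop (< <= <> = > >=); getch(), retract(), and the token codes are assumed helpers, not taken from the notes:

enum relop { LT, LE, NE, EQ, GT, GE, NOT_RELOP };

extern int  getch(void);    /* assumed: returns the next input character */
extern void retract(void);  /* assumed: pushes one character back */

enum relop relop_token(void) {
    int c = getch();                /* start state */
    switch (c) {
    case '<':                       /* states for <, <=, <> */
        c = getch();
        if (c == '=') return LE;
        if (c == '>') return NE;
        retract(); return LT;       /* the '*' retract state in the diagram */
    case '=':
        return EQ;
    case '>':                       /* states for >, >= */
        c = getch();
        if (c == '=') return GE;
        retract(); return GT;
    default:
        retract(); return NOT_RELOP;
    }
}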
LEX
Lex is a computer program that generates lexical analyzers. Lex is commonly used with
the yacc parser generator.
First, a specification of a lexical analyzer is prepared by creating a program lex.l in the Lex language.
Then, lex.l is run through the Lex compiler to produce a C program lex.yy.c.
Finally, lex.yy.c is run through the C compiler to produce an object program a.out, which is the lexical
analyzer that transforms an input stream into a sequence of tokens.
Lex Specification
{ definitions }
%%
{ rules }
%%
{ user subroutines }
User subroutines are auxiliary procedures needed by the actions. These can be compiled separately
and loaded with the lexical analyzer.
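For concreteness, a minimal lex.l along these lines might look as follows (the token codes ID and NUM are illustrative assumptions):

%{
/* illustrative token codes; a real compiler would share these with the parser */
#define ID  1
#define NUM 2
%}
delim   [ \t\n]
ws      {delim}+
letter  [A-Za-z]
digit   [0-9]
id      {letter}({letter}|{digit})*
number  {digit}+(\.{digit}+)?(E[+-]?{digit}+)?
%%
{ws}     { /* skip blanks, tabs and newlines; no token returned */ }
{id}     { return ID; }
{number} { return NUM; }
%%
int yywrap(void) { return 1; }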
YACC
Yacc provides a general tool for describing the input to a computer program. The Yacc user specifies the structures of
his input, together with code to be invoked as each such structure is recognized. Yacc turns such a specification into a
subroutine that handles the input process; frequently, it is convenient and appropriate to have most of the flow of control
in the user's application handled by this subroutine.
FINITE AUTOMATA
A finite automaton is a mathematical model that consists of a number of states and edges. It is a transition diagram
that recognizes a regular expression or grammar. It is defined as
M = (Q, Σ, δ, q0, F)
where:
Q – finite set of states
Σ – input alphabet
δ – transition function
q0 – starting state
F – set of final states
A lexical-analyzer generator such as Lex first converts a RE into a non-deterministic finite
automaton (NFA) and then converts the NFA into a DFA.
An NFA is similar to a DFA but it also permits multiple transitions over the same character
and transitions over ε. The first type indicates that, when reading the common character
associated with these transitions, we have more than one choice; the NFA succeeds if at
least one of these choices succeeds. An ε-transition doesn't consume any input characters,
so you may jump to another state for free.
Clearly DFAs are a subset of NFAs. But it turns out that DFAs and NFAs have the same
expressive power. The problem is that when converting a NFA to a DFA we may get an
exponential blowup in the number of states.
We will first learn how to convert a RE into a NFA. This is the easy part. There are only 5
rules, one for each type of RE:
The algorithm constructs NFAs with only one final state. For example, the third rule
indicates that, to construct the NFA for the RE AB, we construct the NFAs for A and B
which are represented as two boxes with one start and one final state for each box. Then
the NFA for AB is constructed by connecting the final state of A to the start state of B using
an empty transition.
For example, the RE (a|b)c is mapped to the following NFA:
The next step is to convert a NFA to a DFA (called subset construction). Suppose that you
assign a number to each NFA state. The DFA states generated by subset construction have
sets of numbers, instead of just one number. For example, a DFA state may have been
assigned the set {5,6,8}. This indicates that arriving to the state labeled {5,6,8} in the DFA
is the same as arriving to the state 5, the state 6, or the state 8 in the NFA when parsing the
same input. (Recall that a particular input sequence when parsed by a DFA, leads to a
unique state, while when parsed by a NFA it may lead to multiple states.)
First we need to handle transitions that lead to other states for free (without consuming any
input). These are the ε-transitions. We define the closure of an NFA node as the set of all the
nodes reachable from this node using zero, one, or more ε-transitions. For example, the
closure of node 1 in the left figure below
is the set {1,2}. The start state of the constructed DFA is labeled by the closure of the NFA
start state. For every DFA state labeled by some set {s1, s2, …, sn} and for every character c in
the language alphabet, you find all the states reachable by s1, s2, ..., or sn using c arrows
and you union together the closures of these nodes. If this set is not the label of any other
node in the DFA constructed so far, you create a new DFA node with this label. For
example, node {1,2} in the DFA above has an arrow to a {3,4,5} for the character a since
the NFA node 3 can be reached by 1 on a and nodes 4 and 5 can be reached by 2. The b
arrow for node {1,2} goes to the error node which is associated with an empty set of NFA
nodes. The following NFA recognizes , even though it wasn't constructed
with the 5 RE-to-NFA rules. It has the following DFA:
To convert an NFA to a DFA, we must find a way to remove all ε-transitions and to ensure
that there is one transition per symbol in each state. We do this by constructing a DFA in
which each state corresponds to a set of states from the NFA. In the DFA, a transition
from a state S on some symbol goes to the state S' that consists of all the possible NFA states
that could be reached on that symbol from some NFA state q contained in the present DFA state S. The
resulting DFA "simulates" the given NFA in the sense that a single DFA transition
represents many simultaneous NFA transitions. The first concept we need is the ε-closure
(pronounced "epsilon closure"). The ε-closure of an NFA state q is the set containing q
along with all states in the automaton that are reachable by any number of ε-transitions
from q.
Likewise, we define the ε-closure of a set of states to be the set of states reachable by ε-
transitions from its members; in other words, it is the union of the ε-closures of its
elements. To convert our NFA to its DFA counterpart, we begin by taking the ε-closure
of the start state q0 of our NFA and constructing a new start state S0 in our DFA
corresponding to that ε-closure. Next, for each symbol in our alphabet, we record the set
of NFA states that we can reach from S0 on that symbol. For each such set, we make a
DFA state corresponding to its ε-closure, taking care to do this only once for each set. In
case two sets are equal, we simply reuse the existing DFA state that we already
constructed. This process is then repeated for each of the new DFA states (that is, sets of
NFA states) until we run out of DFA states to process. Finally, every DFA state whose
corresponding set of NFA states contains an accepting state is itself marked as an
accepting state.
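A small C sketch of the ε-closure computation, representing state sets as bitmasks for an NFA with at most 32 states (the eps[] table is an assumed input; __builtin_ctz is a GCC/Clang builtin):

#include <stdint.h>

#define NSTATES 8
/* eps[q] is assumed to hold the bitmask of states reachable from q
   by a single epsilon-transition */
static uint32_t eps[NSTATES];

uint32_t eps_closure(uint32_t set) {
    uint32_t closure = set;        /* every state is in its own closure */
    uint32_t frontier = set;       /* states whose successors are unprocessed */
    while (frontier) {
        int q = __builtin_ctz(frontier);     /* pick one state from the frontier */
        frontier &= frontier - 1;            /* and remove it */
        uint32_t fresh = eps[q] & ~closure;  /* eps-successors not yet seen */
        closure  |= fresh;
        frontier |= fresh;
    }
    return closure;
}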
A Lex program consists of three parts:
declarations
%%
translation rules
%%
auxiliary procedures
1. The declarations section includes declarations of variables, manifest constants, and
regular definitions.
2. The translation rules are statements of the form
p1 { action1 }
p2 { action2 }
…
pn { actionn }
where each pi is a regular expression and each actioni is a program fragment describing
what action the lexical analyzer should take when pattern pi matches a lexeme.
3. The third section holds whatever auxiliary procedures are needed by the actions.
Alternatively these procedures can be compiled separately and loaded with the
lexical analyzer.
UNIT -3
SYNTAX ANALYSIS
The parser obtains a string of tokens from the lexical analyzer and verifies that the string can
be generated by the grammar for the source language. The parser should report any syntax
errors in an intelligible fashion. The two types of parsers employed are:
1. Top-down parsers, which build parse trees from the top (root) to the bottom (leaves).
2. Bottom-up parsers, which build parse trees from the leaves and work up to the root.
Therefore there are two types of parsing methods: top-down parsing and bottom-up parsing.
An LL(k) parser scans the input from Left to right, constructs a Leftmost derivation, and uses
k symbols of lookahead. A parser using the single-symbol lookahead method and top-down
parsing without backtracking is therefore called an LL(1) parser. In the following sections, we
will also use an extended BNF notation in which some regular-expression operators are
incorporated:
A syntax expression of the form α | β defines sentences of the form α or β. A syntax of the
form αβ defines sentences that consist of a sentence of the form α followed by a sentence of
the form β. A syntax of the form [α] defines zero or one occurrence of the form α. A syntax
of the form {α} defines zero or more occurrences of the form α.
A usual implementation of an LL(1) parser is:
initialize its data structures,
get the lookahead token by calling scanner routines, and
call the routine that implements the start symbol.
Here is an example.
proc syntaxAnalysis()
begin
initialize(); // initialize global data and structures
nextToken(); // get the lookahead token
program(); // parser routine that implements the start symbol
end;
3.4 FIRST AND FOLLOW
To compute FIRST(X) for all grammar symbols X, apply the following rules until
no more terminals or e can be added to any FIRST set.
1. If X is terminal, then FIRST(X) is {X}.
2. If X->e is a production, then add e to FIRST(X).
3. If X is a nonterminal and X->Y1Y2...Yk is a production, then place a in FIRST(X) if for
some i, a is in FIRST(Yi) and e is in all of FIRST(Y1),...,FIRST(Yi-1); that is,
Y1.......Yi-1=*>e. If e is in FIRST(Yj) for all j=1,2,...,k, then add e to FIRST(X). For
example, everything in FIRST(Y1) is surely in FIRST(X). If Y1 does not derive e, then we
add nothing more to FIRST(X), but if Y1=*>e, then we add FIRST(Y2), and so on.
To compute FOLLOW(A) for all nonterminals A, apply the following rules until nothing
can be added to any FOLLOW set.
1. Place $ in FOLLOW(S), where S is the start symbol and $ is the input right endmarker.
2. If there is a production A->aBs, then everything in FIRST(s) except e is placed in FOLLOW(B).
3. If there is a production A->aB, or a production A->aBs where FIRST(s) contains e, then
everything in FOLLOW(A) is in FOLLOW(B).
Consider the following example to understand the concept of FIRST and FOLLOW. Find the
FIRST and FOLLOW of all nonterminals in the grammar:
E -> TE'
E'-> +TE'|e
T -> FT'
T'-> *FT'|e
F -> (E)|id
Then:
FIRST(E)=FIRST(T)=FIRST(F)={(,id}
FIRST(E')={+,e}
FIRST(T')={*,e}
FOLLOW(E)=FOLLOW(E')={),$}
FOLLOW(T)=FOLLOW(T')={+,),$}
FOLLOW(F)={+,*,),$}
For example, id and left parenthesis are added to FIRST(F) by rule 3 in the definition of FIRST
with i=1 in each case, since FIRST(id)={id} and FIRST('(')={(} by rule 1. Then by rule 3
with i=1, the production T -> FT' implies that id and left parenthesis belong to FIRST(T)
also.
To compute FOLLOW, we put $ in FOLLOW(E) by rule 1 for FOLLOW. By rule 2 applied
to production F-> (E), right parenthesis is also in FOLLOW(E). By rule 3 applied to
production E-> TE', $ and right parenthesis are in FOLLOW(E').
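A hedged recursive-descent sketch of this grammar in C (the single-character "lexer", with 'i' standing in for id, and the main() driver are illustrative assumptions):

#include <stdio.h>
#include <stdlib.h>

static const char *input;   /* 'i' plays the role of id */
static char lookahead;

static void error(void) { fprintf(stderr, "syntax error\n"); exit(1); }
static void next(void)  { lookahead = *input++; }
static void match(char t) { if (lookahead == t) next(); else error(); }

static void E(void);
static void Eprime(void);
static void T(void);
static void Tprime(void);
static void F(void);

static void E(void)      { T(); Eprime(); }                                       /* E  -> TE'      */
static void Eprime(void) { if (lookahead == '+') { match('+'); T(); Eprime(); } } /* E' -> +TE' | e */
static void T(void)      { F(); Tprime(); }                                       /* T  -> FT'      */
static void Tprime(void) { if (lookahead == '*') { match('*'); F(); Tprime(); } } /* T' -> *FT' | e */
static void F(void) {                                                             /* F  -> (E) | id */
    if (lookahead == '(') { match('('); E(); match(')'); }
    else if (lookahead == 'i') match('i');
    else error();
}

int main(void) {
    input = "i+i*i";                  /* stands for id+id*id */
    next();
    E();
    if (lookahead != '\0') error();   /* demand that all input is consumed */
    puts("accepted");
    return 0;
}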
3.6.LL(1) GRAMMAR
The above algorithm can be applied to any grammar G to produce a parsing table M. For
some Grammars, for example if G is left recursive or ambiguous, then M will have at least
one multiply-defined entry. A grammar whose parsing table has no multiply defined entries
is said to be LL(1). It can be shown that the above algorithm can be used to produce for every
LL(1) grammar G a parsing table M that parses all and only the sentences of G. LL(1)
grammars have several distinctive properties. No ambiguous or left recursive grammar can
be LL(1). There remains a question of what should be done in case of multiply defined
entries. One easy solution is to eliminate all left recursion and left factoring, hoping to
produce a grammar which will produce no multiply defined entries in the parse tables.
Unfortunately there are some grammars for which no amount of alteration will yield an
LL(1) grammar. In general, there are no universal rules by which multiply-defined entries can
be converted into single-valued entries without affecting the language recognized by the parser.
The main difficulty in using predictive parsing is in writing a grammar for the source
language such that a predictive parser can be constructed from the grammar. Although left-
recursion elimination and left factoring are easy to do, they make the resulting grammar hard
to read and difficult to use for translation purposes. To alleviate some of this difficulty, a
common organization for a parser in a compiler is to use a predictive parser for control
constructs and to use operator precedence for expressions.
ERROR RECOVERY IN PREDICTIVE PARSING
As a starting point, we can place all symbols in FOLLOW(A) into the synchronizing
set for nonterminal A. If we skip tokens until an element of FOLLOW(A) is seen and
pop A from the stack, it is likely that parsing can continue.
It is not enough to use FOLLOW(A) as the synchronizing set for A. For example, if
semicolons terminate statements, as in C, then keywords that begin statements may
not appear in the FOLLOW set of the nonterminal generating expressions. A missing
semicolon after an assignment may therefore result in the keyword beginning the next
statement being skipped. Often, there is a hierarchical structure on constructs in a
language; e.g., expressions appear within statements, which appear within blocks, and
so on. We can add to the synchronizing set of a lower construct the symbols that
begin higher constructs. For example, we might add keywords that begin statements
to the synchronizing sets for the nonterminals generating expressions.
If we add symbols in FIRST(A) to the synchronizing set for nonterminal A, then it
may be possible to resume parsing according to A if a symbol in FIRST(A) appears in
the input.
If a nonterminal can generate the empty string, then the production deriving e can be
used as a default. Doing so may postpone some error detection, but cannot cause an
error to be missed. This approach reduces the number of nonterminals that have to be
considered during error recovery.
If a terminal on top of the stack cannot be matched, a simple idea is to pop the
terminal, issue a message saying that the terminal was inserted, and continue parsing.
In effect, this approach takes the synchronizing set of a token to consist of all other
tokens.
3.8 LR PARSING INTRODUCTION
The "L" is for left-to-right scanning of the input and the "R" is for constructing a rightmost
derivation in reverse.
WHY LR PARSING:
LR parsers can be constructed to recognize virtually all programming-language
constructs for which context-free grammars can be written.
The LR parsing method is the most general non-backtracking shift-reduce parsing
method known, yet it can be implemented as efficiently as other shift-reduce
methods.
The class of grammars that can be parsed using LR methods is a proper superset of the
class of grammars that can be parsed with predictive parsers.
An LR parser can detect a syntactic error as soon as it is possible to do so on a
left-to-right scan of the input.
The disadvantage is that it takes too much work to construct an LR parser by hand for
a typical programming-language grammar. But there are lots of LR parser generators
available to make this task easy.
MODELS OF LR PARSERS
The schematic form of an LR parser is shown below.
The program uses a stack to store a string of the form s0X1s1X2...Xmsm where sm is on top.
Each Xi is a grammar symbol and each si is a symbol representing a state. Each state symbol
summarizes the information contained in the stack below it. The combination of the state
symbol on top of the stack and the current input symbol are used to index the parsing table
and determine the shift-reduce parsing decision. The parsing table consists of two parts: a
parsing action function action and a goto function goto. The program driving the LR parser
behaves as follows: It determines sm the state currently on top of the stack and ai the current
input symbol. It then consults action[sm,ai], which can have one of four values:
shift s, where s is a state
reduce by a grammar production A -> b
accept
error
The function goto takes a state and grammar symbol as arguments and produces a state.
For a parsing table constructed for a grammar G, the goto table is the transition function of a
deterministic finite automaton that recognizes the viable prefixes of G. Recall that the viable
prefixes of G are those prefixes of right-sentential forms that can appear on the stack of a
shift-reduce parser because they do not extend past the rightmost handle.
A configuration of an LR parser is a pair whose first component is the stack contents and
whose second component is the unexpended input:
(s0 X1 s1 X2 s2... Xm sm, ai ai+1... an$)
This configuration represents the right-sentential form
X1 X2 ... Xm ai ai+1 ... an
in essentially the same way a shift-reduce parser would; only the presence of the states on the
stack is new. Recall the sample parse we did (see Example 1: Sample bottom-up parse) in
which we assembled the right-sentential form by concatenating the remainder of the input
buffer to the top of the stack. The next move of the parser is determined by reading ai and
sm, and consulting the parsing action table entry action[sm, ai]. Note that we are just looking
at the state here and no symbol below it. We'll see how this actually works later.
The configurations resulting after each of the four types of move are as follows:
If action[sm, ai] = shift s, the parser executes a shift move entering the configuration
(s0 X1 s1 X2 s2... Xm sm ai s, ai+1... an$)
Here the parser has shifted both the current input symbol ai and the next state s, which is
given in action[sm, ai], onto the stack; ai+1 becomes the current input symbol.
If action[sm, ai] = reduce A -> b, then the parser executes a reduce move, entering the
configuration,
(s0 X1 s1 X2 s2... Xm-r sm-r A s, ai ai+1... an$)
where s = goto[sm-r, A] and r is the length of b, the right side of the production. The parser
first popped 2r symbols off the stack (r state symbols and r grammar symbols), exposing state
sm-r. The parser then pushed both A, the left side of the production, and s, the entry for
goto[sm-r, A], onto the stack. The current input symbol is not changed in a reduce move.
The output of an LR parser is generated after a reduce move by executing the semantic action
associated with the reducing production. For example, we might just print out the production
reduced.
If action[sm, ai] = accept, parsing is completed. If action[sm, ai] = error, the parser has
discovered an error and calls an error recovery routine.
The goto table is indexed by a state and a nonterminal symbol. When the parser is in state s
immediately after reducing by rule N, then the next state to enter is given by goto[s][N].
The current state of a shift-reduce parser is the state on top of the state stack. The detailed
operation of such a parser is as follows:
1. Initialize the parse stack to contain a single state s0, where s0 is the distinguished initial
state of the parser.
2. Use the state s on top of the parse stack and the current lookahead t to consult the action
table entry action[s][t]:
· If the action table entry is shift s' then push state s' onto the stack and advance the
input so that the lookahead is set to the next token.
· If the action table entry is reduce r and rule r has m symbols in its RHS, then pop
m symbols off the parse stack. Let s' be the state now revealed on top of the parse
stack and N be the LHS nonterminal for rule r. Then consult the goto table and
push the state given by goto[s'][N] onto the stack. The lookahead token is not
changed by this step.
If the action table entry is accept, then terminate the parse with success.
If the action table entry is error, then signal an error.
3. Repeat step (2) until the parser terminates.
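A schematic C sketch of this driver loop (the action/goto tables, token codes, and the per-rule metadata rhs_len and lhs are assumed to come from a table generator; following the observation above, only states are pushed):

#include <stdio.h>
#include <stdlib.h>

enum kind { SHIFT, REDUCE, ACCEPT, ERROR };
struct entry { enum kind kind; int arg; }; /* arg: state for SHIFT, rule for REDUCE */

extern struct entry action[][128];  /* assumed: action[state][token] */
extern int goto_tab[][64];          /* assumed: goto_tab[state][nonterminal] */
extern int rhs_len[];               /* assumed: number of RHS symbols of each rule */
extern int lhs[];                   /* assumed: LHS nonterminal of each rule */
extern int next_token(void);        /* assumed lexer interface */

void parse(void) {
    int stack[1024];
    int top = 0;
    stack[top] = 0;                         /* step 1: push the initial state s0 */
    int t = next_token();
    for (;;) {                              /* step 3: repeat step 2 */
        struct entry e = action[stack[top]][t];
        if (e.kind == SHIFT) {              /* push s' and advance the input */
            stack[++top] = e.arg;
            t = next_token();
        } else if (e.kind == REDUCE) {      /* pop m states, then consult goto */
            top -= rhs_len[e.arg];
            stack[top + 1] = goto_tab[stack[top]][lhs[e.arg]];
            top++;                          /* lookahead is not changed */
        } else if (e.kind == ACCEPT) {
            puts("parse successful");
            return;
        } else {                            /* ERROR */
            fprintf(stderr, "syntax error\n");
            exit(1);
        }
    }
}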
For example, consider the following simple grammar
0) $S: stmt <EOF>
1) stmt: ID ':=' expr
2) expr: expr '+' ID
3) expr: expr '-' ID
4) expr: ID
which describes assignment statements like a:= b + c - d. (Rule 0 is a special augmenting
production added to the grammar).
One possible set of shift-reduce parsing tables is shown below (sn denotes shift n, rn denotes
reduce n, acc denotes accept and blank entries denote error entries):
Parser Tables
The Action and Goto Table
The two LR parsing tables for this grammar look as follows (the grammar is the classic
example (1) S → CC, (2) C → cC, (3) C → d; merged states are written 36, 47 and 89):

            ACTION              GOTO
state     c      d      $       S     C
0         S36    S47            1     2
1                       acc
2         S36    S47                  5
36        S36    S47                  89
47        R3     R3     R3
5                       R1
89        R2     R2     R2
HANDLING ERRORS
The LALR parser may continue to do reductions after the LR parser would have spotted an
error, but the LALR parser will never do a shift after the point the LR parser would have
discovered the error and will eventually find the error.
Phrase-level recovery is implemented by examining each error entry in the LR action table
and deciding on the basis of language usage the most likely programmer error that would
give rise to that error. An appropriate recovery procedure can then be constructed;
presumably the top of the stack and/or first input symbol would be modified in a way deemed
appropriate for each error entry. In designing specific error-handling routines for an LR
parser, we can fill in each blank entry in the action field with a pointer to an error routine that
will take the appropriate action selected by the compiler designer.
The actions may include insertion or deletion of symbols from the stack or the input or both,
or alteration and transposition of input symbols. We must make our choices so that the LR
parser will not get into an infinite loop. A safe strategy will assure that at least one input
symbol will be removed or shifted eventually, or that the stack will eventually shrink if the
end of the input has been reached. Popping a stack state that covers a nonterminal should be
avoided, because this modification eliminates from the stack a construct that has already been
successfully parsed.
SEMANTIC ANALYSIS
Semantic Analysis computes additional information related to the meaning of the
program once the syntactic structure is known.
In typed languages such as C, semantic analysis involves adding information to the symbol
table and performing type checking.
The information to be computed is beyond the capabilities of standard parsing
techniques; therefore it is not regarded as syntax.
As with lexical and syntax analysis, for semantic analysis we need both a
representation formalism and an implementation mechanism.
As representation formalism this lecture illustrates what are called Syntax Directed
Translations.
SYNTAX DIRECTED TRANSLATION
The Principle of Syntax Directed Translation states that the meaning of an input
sentence is related to its syntactic structure, i.e., to its Parse-Tree.
By Syntax Directed Translations we indicate those formalisms for specifying
translations for programming language constructs guided by context-free grammars.
o We associate Attributes to the grammar symbols representing the language
constructs.
o Values for attributes are computed by Semantic Rules associated with
grammar productions.
Evaluation of Semantic Rules may:
o Generate Code;
o Insert information into the Symbol Table;
o Perform Semantic Check;
o Issue error messages;
o etc.
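As an illustration, here is the classic arithmetic syntax-directed definition with a synthesized attribute val (it is the "arithmetic grammar" the next subsection refers to; n denotes the end-of-line marker):

PRODUCTION        SEMANTIC RULE
L → E n           print(E.val)
E → E1 + T        E.val := E1.val + T.val
E → T             E.val := T.val
T → T1 * F        T.val := T1.val * F.val
T → F             T.val := F.val
F → ( E )         F.val := E.val
F → digit         F.val := digit.lexval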
S-ATTRIBUTED DEFINITIONS
Definition. An S-Attributed Definition is a Syntax Directed Definition that uses only
synthesized attributes.
• Evaluation Order. Semantic rules in an S-Attributed Definition can be evaluated by a
bottom-up, or PostOrder, traversal of the parse-tree.
• Example. The above arithmetic grammar is an example of an S-Attributed
Definition. For the input 3*5+4n, val is computed bottom-up in the annotated parse-tree:
T.val = 15 for 3*5, then E.val = 19, and finally print(19).
L-attributed definition
Definition: An SDD is L-attributed if each inherited attribute of Xi in the RHS of A → X1 X2 … Xn
depends only on
1. the attributes of X1, X2, …, Xi-1 (the symbols to the left of Xi in the RHS), and
2. the inherited attributes of A.
Restrictions for translation schemes:
1. An inherited attribute of Xi must be computed by an action before Xi.
2. An action must not refer to a synthesized attribute of any symbol to the right of that action.
3. A synthesized attribute for A can only be computed after all attributes it references have
been computed (usually at the end of the RHS).
SYMBOL TABLES
A symbol table is a major data structure used in a compiler. It associates attributes with the
identifiers used in a program; for instance, a type attribute is usually associated with each
identifier. A symbol table is a necessary component because the definition (declaration) of an
identifier appears once in a program while uses of the identifier may appear in many places
of the program text.
Identifiers and attributes are entered by the analysis phases when processing a definition
(declaration) of an identifier. In simple languages with only global variables and implicit
declarations, the scanner can enter an identifier into the symbol table if it is not already there.
In block-structured languages with scopes and explicit declarations, the parser and/or
semantic analyzer enter identifiers and corresponding attributes.
Symbol table information is used by the analysis and synthesis phases:
To verify that used identifiers have been defined (declared)
To verify that expressions and assignments are semantically correct – type checking
To generate intermediate or target code
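A minimal C sketch of a hash-based symbol table along these lines (the stored attribute is just a type code; the table size and hash function are illustrative choices, and strdup is POSIX):

#include <stdlib.h>
#include <string.h>

#define NBUCKETS 211

struct symbol {
    char *name;
    int   type;            /* attribute: e.g. a type code */
    struct symbol *next;   /* chaining resolves collisions */
};

static struct symbol *bucket[NBUCKETS];

static unsigned hash(const char *s) {
    unsigned h = 0;
    while (*s) h = h * 31 + (unsigned char)*s++;
    return h % NBUCKETS;
}

struct symbol *lookup(const char *name) {
    for (struct symbol *p = bucket[hash(name)]; p; p = p->next)
        if (strcmp(p->name, name) == 0) return p;
    return NULL;
}

struct symbol *insert(const char *name, int type) {
    struct symbol *p = lookup(name);
    if (p) return p;                 /* already entered */
    p = malloc(sizeof *p);
    p->name = strdup(name);
    p->type = type;
    unsigned h = hash(name);
    p->next = bucket[h];
    bucket[h] = p;
    return p;
}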
RUNTIME ENVIRONMENT
Runtime organization of different storage locations
Representation of scopes and extents during program execution.
Components of executing program reside in blocks of memory (supplied by OS).
Three kinds of entities that need to be managed at runtime:
o Generated code for various procedures and programs.
forms text or code segment of your program: size known at compile time.
o Data objects:
Global variables/constants: size known at compile time
Variables declared within procedures/blocks: size known
Variables created dynamically: size unknown.
o Stack to keep track of procedure activations.
Subdivide memory conceptually into code and data areas:
Code: program instructions
Stack: manages activation of procedures at runtime
Heap: holds variables created dynamically
STORAGE ORGANIZATION
1. Fixed-size objects can be placed in predefined locations.
HEAP ALLOCATION
Some languages do not have tree-structured allocations. In these cases,
activations have to be allocated on the heap. This allows strange situations, like
callee activations that live longer than their callers' activations; this is not common.
The heap is used for allocating space for objects created at run time, for example nodes of
dynamic data structures such as linked lists and trees.
Dynamic memory allocation and deallocation is based on the requirements of the program:
malloc() and free() in C programs
new and delete in C++ programs
new and garbage collection in Java programs
Call-by-Value
The actual parameters are evaluated and their r-values are passed to the called
procedure
A procedure called by value can affect its caller either through nonlocal names or
through pointers.
Parameters in C are always passed by value. Array is unusual, what is passed by value
is a pointer.
Pascal uses pass by value by default, but var parameters are passed by reference.
Call-by-Reference
The caller passes the address (l-value) of each actual parameter, so assignments to a formal
parameter change the corresponding actual parameter. This can have surprising effects when
literals are passed by reference:
func(a,b) { a = b };
call func(3,4); print(3);
If the implementation passes the address of the stored constant 3, the assignment a = b
overwrites it, and print(3) may then print 4.
Copy-Restore
A hybrid between call-by-value and call-by-reference.
The actual parameters are evaluated and their r-values are passed as in call-by-value.
In addition, l-values are determined before the call.
When control returns, the current r-values of the formal parameters are copied back
into the l-values of the actual parameters.
Call-by-Name
The actual parameters are literally substituted for the formals. This is like a macro
expansion or in-line expansion. Call-by-name is not used in practice. However, the
conceptually related technique of in-line expansion is commonly used. In-lining may
be one of the most effective optimization transformations if it is guided by
execution profiles.
INTRODUCTION
The code produced by straightforward compiling algorithms can often be made to run
faster or take less space, or both. This improvement is achieved by program transformations
that are traditionally called optimizations. Compilers that apply code-improving
transformations are called optimizing compilers.
Machine independent optimizations are program transformations that improve the target code
without taking into consideration any properties of the target machine.
Machine dependant optimizations are based on register allocation and utilization of special
machine-instruction sequences.
Simply stated, the best program transformations are those that yield the most benefit for the
least effort.
The transformation must preserve the meaning of programs. That is, the optimization must
not change the output produced by a program for a given input, or cause an error such as
division by zero, that was not present in the original source program. At all times we take the
“safe” approach of missing an opportunity to apply a transformation rather than risk
changing what the program does.
The transformation must be worth the effort. It does not make sense for a compiler writer to
expend the intellectual effort to implement a code improving transformation and to have the
compiler expend the additional time compiling source programs if this effort is not repaid
when the target programs are executed. “Peephole” transformations of this kind are simple
enough and beneficial enough to be included in any compiler.
There are a number of ways in which a compiler can improve a program without
changing the function it computes.
Common Sub-expression Elimination:
Frequently, a program will include several calculations of the same value, such as an
offset in an array. Some of the duplicate calculations cannot be avoided by the
programmer because they lie below the level of detail accessible within the source
language.
Consider, for instance, the code:
t1: = 4*i
t2: = a [t1]
t3: = 4*j
t4: = 4*i
t5: = n
t6: = b [t4] +t5
The above code can be optimized using common sub-expression elimination as:
t1: = 4*i
t2: = a [t1]
t3: = 4*j
t5: = n
t6: = b [t1] +t5
The common sub-expression t4: = 4*i is eliminated, as its computation is already in t1 and the
value of i has not changed between its definition and use.
Copy Propagation:
Assignments of the form f : = g called copy statements, or copies for short. The idea
behind the copy-propagation transformation is to use g for f, whenever possible after the
copy statement f: = g. Copy propagation means use of one variable instead of another.
This may not appear to be an improvement, but as we shall see it gives us an opportunity
to eliminate x.
For example:
x=Pi;
……
A=x*r*r;
The optimization using copy propagation can be done as follows:
A=Pi*r*r;
Dead-Code Eliminations:
A variable is live at a point in a program if its value can be used subsequently; otherwise,
it is dead at that point. A related idea is dead or useless code, statements that compute
values that never get used. While the programmer is unlikely to introduce any dead code
intentionally, it may appear as the result of previous transformations. An optimization can
be done by eliminating dead code.
Example:
i=0;
if(i==1)
{
a=b+5;
}
Here, the 'if' statement is dead code because the condition will never be satisfied.
Constant folding:
Deducing at compile time that the value of an expression is a constant and using the
constant instead is known as constant folding; folding a test whose outcome is known can
then eliminate both the test and the code it guards from the object program.
One advantage of copy propagation is that it often turns the copy statement into dead
code.
For example,
a=3.14157/2 can be replaced by
a=1.570785, thereby eliminating a division operation.
Loop Optimizations:
We now give a brief introduction to a very important place for optimizations, namely
loops, especially the inner loops where programs tend to spend the bulk of their time. The
running time of a program may be improved if we decrease the number of instructions in
an inner loop, even if we increase the amount of code outside that loop.
Three techniques are important for loop optimization: code motion, induction-variable
elimination, and reduction in strength.
Code Motion:
An important modification that decreases the amount of code in a loop is code motion.
This transformation takes an expression that yields the same result independent of the
number of times a loop is executed (a loop-invariant computation) and places the
expression before the loop. Note that the notion "before the loop" assumes the existence
of an entry for the loop. For example, evaluation of limit-2 is a loop-invariant
computation in the following while-statement:
while (i <= limit-2) /* statement does not change limit */
Code motion will result in the equivalent code:
t= limit-2;
while (i<=t) /* statement does not change limit or t */
Induction Variables :
Loops are usually processed inside out. For example consider the loop around B3.
Note that the values of j and t4 remain in lock-step; every time the value of j decreases by
1, that of t4 decreases by 4 because 4*j is assigned to t4. Such identifiers are called
induction variables.
When there are two or more induction variables in a loop, it may be possible to get rid of
all but one, by the process of induction-variable elimination. For the inner loop around
B3 in Fig. we cannot get rid of either j or t4 completely; t4 is used in B3 and j in B4.
However, we can illustrate reduction in strength and illustrate a part of the process of
induction-variable elimination. Eventually j will be eliminated when the outer loop of B2-B5
is considered.
Example:
As the relationship t4:=4*j surely holds after such an assignment to t4 in Fig. and t4 is not
changed elsewhere in the inner loop around B3, it follows that just after the statement
j:=j-1 the relationship t4:=4*j-4 must hold. We may therefore replace the assignment
t4:=4*j by t4:=t4-4. The only problem is that t4 does not have a value when we enter block B3
for the first time. Since we must maintain the relationship t4=4*j on entry to the block B3,
we place an initialization of t4 at the end of the block where j itself is initialized.
(Figure: the flow graph before and after strength reduction.)
Reduction In Strength:
Reduction in strength replaces an expensive operation by an equivalent cheaper one, as when
t4:=4*j above was replaced by t4:=t4-4 (a multiplication replaced by a subtraction).

The two important classes of local transformations on basic blocks are:
Structure-Preserving Transformations
Algebraic Transformations

Structure-Preserving Transformations:
The primary structure-preserving transformations on basic blocks are:
Common sub-expression elimination
Dead-code elimination
Renaming of temporary variables
Interchange of two independent adjacent statements
Example:
a: = b+c
b: = a-d
c: = b+c
d: = a-d
The 2nd and 4th statements compute the same expression: a-d. Hence the basic block can
be transformed to:
a: = b+c
b: = a-d
c: = b+c
d: = b
Note that although the 1st and 3rd statements appear to compute the same expression b+c,
they do not, because b is redefined by the second statement.
It's possible that a large amount of dead (useless) code may exist in a program. This
might especially be caused when variables and procedures are introduced during the construction or
error-correction of a program - once declared and defined, one forgets to remove them even if
they serve no purpose. Eliminating these will definitely optimize the code.
Interchange of statements: two adjacent statements such as
t1:=b+c
t2:=x+y
can be interchanged or reordered in the basic block when the value of t1
does not affect the value of t2.
Algebraic Transformations:
Algebraic identities represent another important class of optimizations on basic blocks.
This includes simplifying expressions or replacing expensive operation by cheaper ones
i.e. reduction in strength.
Another class of related optimizations is constant folding. Here we evaluate constant
expressions at compile time and replace the constant expressions by their values. Thus
the expression 2*3.14 would be replaced by 6.28.
The relational operators <=, >=, <, >, ≠ and = sometimes generate unexpected common
sub-expressions.
Associative laws may also be applied to expose common sub-expressions. For example, if
the source code has the assignments
a :=b+c
e :=c+d+b
the following intermediate code may be generated:
a :=b+c
t :=c+d
e :=t+b
The compiler writer should examine the language carefully to determine what
rearrangements of computations are permitted, since computer arithmetic does not always
obey the algebraic identities of mathematics. Thus, a compiler may evaluate x*y-x*z as
x*(y-z) but it may not evaluate a+(b-c) as (a+b)-c.
Dominators:
In a flow graph, a node d dominates node n if every path from the initial node of the flow
graph to n goes through d. This is denoted by d dom n. The initial node dominates all the
remaining nodes in the flow graph, and the entry of a loop dominates all nodes in the loop.
Similarly, every node dominates itself.
Example:
The way of presenting dominator information is in a tree, called the dominator tree, in
which the initial node is the root.
The parent of each node other than the root is its immediate dominator.
Each node d dominates only its descendants in the tree.
The existence of the dominator tree follows from a property of dominators: each node n has a
unique immediate dominator m that is the last dominator of n on any path from the initial
node to n.
In terms of the dom relation, the immediate dominator m has the property: if d ≠ n and d
dom n, then d dom m.
D(1)={1}
D(2)={1,2}
D(3)={1,3}
D(4)={1,3,4}
D(5)={1,3,4,5}
D(6)={1,3,4,6}
D(7)={1,3,4,7}
D(8)={1,3,4,7,8}
D(9)={1,3,4,7,8,9}
D(10)={1,3,4,7,8,10}
Natural Loop:
One application of dominator information is in determining the loops of a flow graph suitable
for improvement.
A loop must have a single entry point, called the header. This entry point dominates all
nodes in the loop, or it would not be the sole entry to the loop.
There must be at least one way to iterate the loop, i.e., at least one path back to the header.
One way to find all the loops in a flow graph is to search for edges in the flow graph whose
heads dominate their tails. If a→b is an edge, b is the head and a is the tail. These types of
edges are called as back edges.
Example:
7→4 is a back edge, since 4 DOM 7
10→7 is a back edge, since 7 DOM 10
4→3, 8→3 and 9→1 are also back edges, since 3 DOM 4, 3 DOM 8 and 1 DOM 9.
Algorithm: Construction of the natural loop of a back edge.
Input: A flow graph G and a back edge n→d.
Output: The set loop consisting of all nodes in the natural loop of n→d.
Method: Beginning with node n, we consider each node m ≠ d that we know is in loop, to make
sure that m's predecessors are also placed in loop. Each node in loop, except for d, is placed once
on stack, so its predecessors will be examined. Note that because d is put in the loop initially, we
never examine its predecessors, and thus find only those nodes that reach n without going
through d.
procedure insert(m);
    if m is not in loop then begin
        loop := loop U {m};
        push m onto stack
    end;

stack := empty;
loop := {d};
insert(n);
while stack is not empty do begin
    pop m, the first element of stack, off stack;
    for each predecessor p of m do insert(p)
end
Inner loop:
If we use the natural loops as “the loops”, then we have the useful property that unless
two loops have the same header, they are either disjoint or one is entirely contained in
the other. Thus, neglecting loops with the same header for the moment, we have a natural
notion of inner loop: one that contains no other loop.
When two natural loops have the same header, but neither is nested within the other, they
are combined and treated as a single loop.
Pre-Headers:
Several transformations require us to move statements "before the header". Therefore we
begin treatment of a loop L by creating a new block, called the pre-header.
The pre-header has only the header as successor, and all edges which formerly entered
the header of L from outside L instead enter the pre-header.
Initially the pre-header is empty, but transformations on L may place statements in it.
(Figure: before - edges from outside enter the header of loop L; after - they enter the new pre-header, whose only successor is the header.)
Reducible Flow Graphs:
Reducible flow graphs are special flow graphs for which several code-optimization
transformations are especially easy to perform: loops are unambiguously defined,
dominators can be easily calculated, and data-flow analysis problems can also be solved
efficiently.
The most important properties of reducible flow graphs are that there are no jumps into
the middle of loops from outside; the only entry to a loop is through its header.
Definition:
A flow graph G is reducible if and only if we can partition the edges into two disjoint
groups, forward edges and back edges, with the following properties.
The forward edges form an acyclic graph in which every node can be reached from the
initial node of G.
The back edges consist only of edges whose heads dominate their tails.
If we know the relation DOM for a flow graph, we can find and remove all the back
edges. If the remaining forward edges form an acyclic graph, then we can say the flow
graph is reducible.
In the above example remove the five back edges 4→3, 7→4, 8→3, 9→1 and 10→7
whose heads dominate their tails, the remaining graph is acyclic.
The key property of reducible flow graphs for loop analysis is that in such flow graphs
every set of nodes that we would informally regard as a loop must contain a back edge.
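This definition can be checked mechanically. Below is a minimal Python sketch, assuming dom[n] gives the set of dominators of node n as in the tables above (all names are illustrative): it discards the back edges and tests the remaining forward edges for acyclicity using Kahn's topological-sort algorithm.

    def is_reducible(nodes, edges, dom):
        # Back edges are those a -> b with b DOM a; keep only forward edges.
        forward = [(a, b) for (a, b) in edges if b not in dom[a]]
        indeg = {v: 0 for v in nodes}
        for a, b in forward:
            indeg[b] += 1
        work = [v for v in nodes if indeg[v] == 0]
        visited = 0
        while work:
            v = work.pop()
            visited += 1
            for a, b in forward:
                if a == v:
                    indeg[b] -= 1
                    if indeg[b] == 0:
                        work.append(b)
        return visited == len(nodes)   # acyclic iff every node was visited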
PEEPHOLE OPTIMIZATION
A statement-by-statement code-generation strategy often produces target code with redundant
instructions and suboptimal constructs. A simple but effective technique for improving such
code locally is peephole optimization: examine a short sequence of target instructions (the
peephole) and, wherever possible, replace it by a shorter or faster sequence. Characteristic
peephole optimizations include:
    Redundant-instruction elimination
    Flow-of-control optimizations
    Algebraic simplifications
    Use of machine idioms
    Elimination of unreachable code
Redundant Loads and Stores:
If we see the instruction sequence
    (1) MOV R0, a
    (2) MOV a, R0
we can delete instruction (2), because whenever (2) is executed, (1) will ensure that the value of
a is already in register R0. If (2) had a label, we could not be sure that (1) was always executed
immediately before (2), and so we could not remove (2).
Unreachable Code:
Another opportunity for peephole optimizations is the removal of unreachable instructions.
An unlabeled instruction immediately following an unconditional jump may be removed.
This operation can be repeated to eliminate a sequence of instructions. For example, for
debugging purposes, a large program may have within it certain segments that are executed
only if a variable debug is 1. In C, the source code might look like:

#define debug 0
....
if ( debug ) {
    print debugging information
}

In the intermediate representation, the if-statement may be translated as:

    if debug = 1 goto L1
    goto L2
L1: print debugging information
L2: …………………………(a)

One obvious peephole optimization is to eliminate jumps over jumps. Thus, no matter what
the value of debug, (a) can be replaced by:

    if debug ≠ 1 goto L2
    print debugging information
L2: ……………………………(b)

If debug is set to 0 at the beginning of the program, constant propagation transforms (b) into:

    if 0 ≠ 1 goto L2
    print debugging information
L2: ……………………………(c)

As the argument of the first statement of (c) evaluates to a constant true, it can be replaced by
goto L2. Then all the statements that print debugging aids are manifestly unreachable and
can be eliminated one at a time.
Flow-of-Control Optimizations:
The unnecessary jumps can be eliminated in either the intermediate code or the target code
by the following types of peephole optimizations. We can replace the jump sequence

    goto L1
    ….
L1: goto L2

by the sequence

    goto L2
    ….
L1: goto L2

If there are now no jumps to L1, then it may be possible to eliminate the statement L1: goto
L2, provided it is preceded by an unconditional jump. Similarly, the sequence
if a < b goto L1
….
L1: goto L2
can be replaced by
If a < b goto L2
….
L1: goto L2
Finally, suppose there is only one jump to L1 and L1 is preceded by an unconditional goto.
Then the sequence

    goto L1
    ……..
L1: if a < b goto L2
L3: …………………………………..(1)

may be replaced by

    if a < b goto L2
    goto L3
    …….
L3: ………………………………….(2)

While the number of instructions in (1) and (2) is the same, we sometimes skip the
unconditional jump in (2), but never in (1). Thus (2) is superior to (1) in execution time.
Algebraic Simplification:
There is no end to the amount of algebraic simplification that can be attempted through
peephole optimization. Only a few algebraic identities occur frequently enough that it is
worth considering implementing them. For example, statements such as
    x := x + 0
or
    x := x * 1
are often produced by straightforward intermediate code-generation algorithms, and they can
be eliminated easily through peephole optimization.
Reduction in Strength:
Reduction in strength replaces expensive operations by equivalent cheaper ones on the target
machine. Certain machine instructions are considerably cheaper than others and can often be
used as special cases of more expensive operators.
For example, x² is invariably cheaper to implement as x*x than as a call to an exponentiation
routine. Fixed-point multiplication or division by a power of two is cheaper to implement as
a shift. Floating-point division by a constant can be implemented as multiplication by a
constant, which may be cheaper.
    x² → x * x
Use of Machine Idioms:
The target machine may have hardware instructions that implement certain specific operations efficiently. For example, some machines have auto-increment and auto-decrement addressing modes, which can be used for statements such as:
    i := i + 1 → i++
    i := i - 1 → i--
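The identities above lend themselves to a table-driven rewrite over a one-instruction window. The following Python sketch assumes three-address instructions are tuples such as ('add', dst, src1, src2); this representation is an assumption of the sketch, not the notes' own format:

    def simplify(instr):
        if len(instr) != 4:              # only binary ops are rewritten
            return instr
        op, dst, a, b = instr
        # x := x + 0 and x := x * 1 become plain copies;
        # x := x * 2 is strength-reduced to a shift.
        if op == 'add' and b == 0:
            return ('mov', dst, a)
        if op == 'mul' and b == 1:
            return ('mov', dst, a)
        if op == 'mul' and b == 2:
            return ('shl', dst, a, 1)    # x * 2  ->  x << 1
        return instr

    def peephole(code):
        out = []
        for instr in code:
            instr = simplify(instr)
            if instr[0] == 'mov' and instr[1] == instr[2]:
                continue                 # a := a can be dropped entirely
            out.append(instr)
        return out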
INTRODUCTION TO GLOBAL DATA-FLOW ANALYSIS
In order to do code optimization and a good job of code generation, the compiler needs to
collect information about the program as a whole and to distribute this information to
each block in the flow graph. Such data-flow information can be collected by setting up and
solving systems of equations of the form
    out[S] = gen[S] U ( in[S] – kill[S] )
This equation can be read as "the information at the end of a statement is either generated
within the statement, or enters at the beginning and is not killed as control flows through
the statement."
The details of how data-flow equations are set and solved depend on three factors.
The notions of generating and killing depend on the desired information, i.e., on the data-flow
analysis problem to be solved. Moreover, for some problems, instead of proceeding
along with the flow of control and defining out[S] in terms of in[S], we need to proceed
backwards and define in[S] in terms of out[S].
Since data flows along control paths, data-flow analysis is affected by the constructs in a
program. In fact, when we write out[S] we implicitly assume that there is a unique end
point where control leaves the statement; in general, equations are set up at the level of
basic blocks rather than statements, because blocks do have unique end points.
There are subtleties that go along with such statements as procedure calls, assignments
through pointer variables, and even assignments to array variables.
Within a basic block, we talk of the point between two adjacent statements, as well as the
point before the first statement and after the last. Thus, block B1 has four points: one
before any of the assignments and one after each of the three assignments.
[Figure: example flow graph. Block B1 contains d1: i := m-1, d2: j := n and d3: a := u1; block B2 contains d4: i := i+1; block B3 contains d5: j := j-1; a later block contains d6: a := u2; the remaining blocks contain no definitions.]
Now let us take a global view and consider all the points in all the blocks. A path from p1
to pn is a sequence of points p1, p2, …, pn such that for each i between 1 and n-1, either
pi is the point immediately preceding a statement and pi+1 is the point immediately
following that statement in the same block, or
pi is the end of some block and pi+1 is the beginning of a successor block.
Reaching definitions:
A definition of variable x is a statement that assigns, or may assign, a value to x. The
most common forms of definition are assignments to x and statements that read a value
from an i/o device and store it in x.
These statements certainly define a value for x, and they are referred to as unambiguous
definitions of x. There are certain kinds of statements that may define a value for x; they
are called ambiguous definitions. The most usual forms of ambiguous definitions of x
are:
1. An assignment through a pointer that could refer to x. For example, the assignment *q := y
is a definition of x if it is possible that q points to x. In the absence of alias information, we
must assume that an assignment through a pointer is a definition of every variable.
2. A call of a procedure with x as a parameter (other than by value), or of a procedure that can
access x because x is in the scope of the procedure.
We say a definition d reaches a point p if there is a path from the point immediately
following d to p, such that d is not “killed” along that path. Thus a point can be reached
by an unambiguous definition and an ambiguous definition of the same variable
appearing later along one path.
Flow graphs for control-flow constructs such as do-while statements have a useful
property: there is a single beginning point at which control enters and a single end point
that control leaves from when execution of the statement is over. We exploit this property
when we talk of the definitions reaching the beginning and the end of statements with the
following syntax:
    S → id := E | S ; S | if E then S else S | do S while E
    E → id + id | id
Expressions in this language are similar to those in the intermediate code, but the flow
graphs for statements have restricted forms.
[Figure: the restricted flow-graph forms for statements in this language — sequential composition S1 ; S2, the two-armed conditional (if E, with S1 and S2 as the two branches), and the do S1 while E loop with its back edge to S1.]
We define a portion of a flow graph called a region to be a set of nodes N that includes a
header, which dominates all other nodes in the region. All edges between nodes in N are
in the region, except for some that enter the header.
The portion of flow graph corresponding to a statement S is a region that obeys the
further restriction that control can flow to just one outside block when it leaves the
region.
We say that the beginning points of the dummy blocks at the entry and exit of a
statement's region are the beginning and end points, respectively, of the statement. The
equations are an inductive, or syntax-directed, definition of the sets in[S], out[S], gen[S],
and kill[S] for all statements S.
gen[S] is the set of definitions “generated” by S while kill[S] is the set of definitions
that never reach the end of S.
Consider the following data-flow equations for reaching definitions:

i) S → d : a := b + c

    gen[S] = { d }
    kill[S] = Da – { d }
    out[S] = gen[S] U ( in[S] – kill[S] )

Observe the rules for a single assignment to variable a. Surely that assignment is a
definition of a, say d. Thus
    gen[S] = { d }
On the other hand, d "kills" all other definitions of a, so we write
    kill[S] = Da – { d }
where Da is the set of all definitions in the program for variable a.

ii) S → S1 ; S2
    gen[S] = gen[S2] U ( gen[S1] – kill[S2] )
    kill[S] = kill[S2] U ( kill[S1] – gen[S2] )
    in[S1] = in[S]
    in[S2] = out[S1]
    out[S] = out[S2]
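Rule ii) transcribes directly into executable form. Here is a Python sketch with gen and kill held as ordinary sets (the function name is illustrative):

    def seq_gen_kill(gen1, kill1, gen2, kill2):
        # For S = S1 ; S2: a definition survives S if S2 generates it, or
        # S1 generates it and S2 does not kill it; dually for kill.
        gen = gen2 | (gen1 - kill2)
        kill = kill2 | (kill1 - gen2)
        return gen, kill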
There is a subtle miscalculation in the rules for gen and kill. We have made the
assumption that the conditional expressions E in the if and do statements are
"uninterpreted"; that is, there exist inputs to the program that make their branches go
either way.
We assume that any graph-theoretic path in the flow graph is also an execution path, i.e.,
a path that is executed when the program is run with at least one possible input.
When we compare the computed gen with the "true" gen we discover that the true gen is
always a subset of the computed gen. On the other hand, the true kill is always a superset
of the computed kill.
These containments hold even after we consider the other rules. It is natural to wonder
whether these differences between the true and computed gen and kill sets present a
serious obstacle to data-flow analysis. The answer lies in the use intended for these data.
Overestimating the set of definitions reaching a point does not seem serious; it merely
stops us from doing an optimization that we could legitimately do. On the other hand,
underestimating the set of definitions is a fatal error; it could lead us into making a
change in the program that changes what the program computes. For the case of reaching
definitions, then, we call a set of definitions safe or conservative if the estimate is a
superset of the true set of reaching definitions. We call the estimate unsafe, if it is not
necessarily a superset of the truth.
Returning now to the implications of safety on the estimation of gen and kill for reaching
definitions, note that our discrepancies, supersets for gen and subsets for kill are both in
the safe direction. Intuitively, increasing gen adds to the set of definitions that can reach a
point, and cannot prevent a definition from reaching a place that it truly reached.
Decreasing kill can only increase the set of definitions reaching any given point.
Consider next the computation of the sets in and out for the reaching-definitions
problem. It turns out that in is an inherited attribute, and out is a synthesized attribute
depending on in. We intend that in[S] be the set of definitions reaching the beginning of
S, taking into account the flow of control throughout the entire program, including
statements outside of S or within which S is nested.
The set out[S] is defined similarly for the end of S. It is important to note the distinction
between out[S] and gen[S]. The latter is the set of definitions that reach the end of S
without following paths outside S.
Considering an if-statement, we have conservatively assumed that control can follow either
branch, so a definition reaches the beginning of S1 or S2 exactly when it reaches the
beginning of S:
    in[S1] = in[S2] = in[S]
A definition reaches the end of S if and only if it reaches the end of one or both
substatements; i.e.,
    out[S] = out[S1] U out[S2]
Representation of sets:
Sets of definitions, such as gen[S] and kill[S], can be represented compactly using bit
vectors. We assign a number to each definition of interest in the flow graph. Then the bit
vector representing a set of definitions will have 1 in position i if and only if the
definition numbered i is in the set.
The number of a definition statement can be taken as the index of the statement in an array
holding pointers to statements. However, not all definitions may be of interest during
global data-flow analysis. Therefore the numbers of definitions of interest will typically be
recorded in a separate table.
A bit-vector representation for sets also allows set operations to be implemented
efficiently. The union and intersection of two sets can be implemented by logical or and
logical and, respectively, basic operations in most systems-oriented programming
languages. The difference A – B of sets A and B can be implemented by taking the
complement of B and then using logical and to compute A ∧ ¬B.
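In Python, arbitrary-precision integers can serve as the bit vectors (an assumption of this sketch; a fixed-width machine word behaves the same way):

    # Bit i is 1 iff definition number i is in the set.
    A = 0b1011            # {d0, d1, d3}
    B = 0b0110            # {d1, d2}

    union        = A | B      # logical or  implements set union
    intersection = A & B      # logical and implements set intersection
    difference   = A & ~B     # A - B: and A with the complement of B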
Space for data-flow information can be traded for time, by saving information only at
certain points and, as needed, recomputing information at intervening points. Basic
blocks are usually treated as a unit during global flow analysis, with attention restricted to
only those points that are the beginnings of blocks.
Since there are usually many more points than blocks, restricting our effort to blocks is a
significant savings. When needed, the reaching definitions for all points in a block can be
calculated from the reaching definitions for the beginning of a block.
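Putting these pieces together, a standard iterative solver for block-level reaching definitions might look like the sketch below; blocks, pred, gen and kill are assumed inputs, with gen[b] and kill[b] held as bit vectors:

    def reaching_definitions(blocks, pred, gen, kill):
        IN = {b: 0 for b in blocks}
        OUT = {b: gen[b] for b in blocks}
        changed = True
        while changed:                  # iterate to a fixed point
            changed = False
            for b in blocks:
                new_in = 0
                for p in pred[b]:
                    new_in |= OUT[p]    # union over all predecessors
                new_out = gen[b] | (new_in & ~kill[b])
                if new_in != IN[b] or new_out != OUT[b]:
                    IN[b], OUT[b] = new_in, new_out
                    changed = True
        return IN, OUT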
Use-definition chains:
It is often convenient to store the reaching-definition information as use-definition chains, or
ud-chains: for each use of a variable, a list of all the definitions that reach that use.
Evaluation order:
The techniques for conserving space during attribute evaluation, also apply to the
computation of data-flow information using specifications. Specifically, the only
constraint on the evaluation order for the gen, kill, in and out sets for statements is that
imposed by dependencies between these sets. Having chosen an evaluation order, we are
free to release the space for a set after all uses of it have occurred.
Earlier circular dependencies between attributes were not allowed, but we have seen that
data-flow equations may have circular dependencies.
When programs can contain goto statements or even the more disciplined break and
continue statements, the approach we have taken must be modified to take the actual
control paths into account.
Several approaches may be taken. The iterative method works for arbitrary flow graphs.
Since the flow graphs obtained in the presence of break and continue statements are
reducible, such constructs can be handled systematically using interval-based
methods.
However, the syntax-directed approach need not be abandoned when break and continue
statements are allowed.
CODE GENERATION
The final phase in compiler model is the code generator. It takes as input an intermediate
representation of the source program and produces as output an equivalent target program. The
code generation techniques presented below can be used whether or not an optimizing phase
occurs before code generation.
[Figure: position of the code generator — source program → front end → intermediate code → code optimizer → intermediate code → code generator → target program, with every phase consulting the symbol table.]
Issues in the design of a code generator include the following.
1. Input to the code generator:
The input to the code generator is the intermediate representation of the source program
produced by the front end, together with information in the symbol table.
2. Target programs:
The output of the code generator is the target program, which may take a variety of forms:
a. Absolute machine language
- It can be placed in a fixed location in memory and immediately executed.
b. Relocatable machine language
- It allows subprograms to be compiled separately.
c. Assembly language
- Code generation is made easier.
3. Memory management:
Names in the source program are mapped to addresses of data objects in run-time
memory by the front end and code generator.
It makes use of symbol table, that is, a name in a three-address statement refers to a
symbol-table entry for the name.
Labels in three-address statements have to be converted to addresses of instructions.
For example, the three-address statement
    j : goto i
generates a jump instruction as follows:
    if i < j, a backward jump instruction with target address equal to the location of the
    machine code for quadruple i is generated.
    if i > j, the jump is forward. We must store on a list for quadruple i the location of the
    first machine instruction generated for quadruple j. When i is processed, the machine
    locations for all instructions that forward-jump to i are filled.
4. Instruction selection:
The instructions of target machine should be complete and uniform.
Instruction speeds and machine idioms are important factors when efficiency of target
program is considered.
The quality of the generated code is determined by its speed and size. For example, if the
target machine has an "increment" instruction INC, then the three-address statement a := a + 1
may be implemented by the single instruction INC a, rather than by the more obvious sequence
that loads a into a register, adds one, and stores the result back:
    MOV a, R0
    ADD #1, R0
    MOV R0, a
5. Register allocation
Instructions involving register operands are shorter and faster than those involving
operands in memory.
The use of registers is subdivided into two subproblems :
Register allocation – the set of variables that will reside in registers at a point in
the program is selected.
Register assignment – the specific register that a variable will reside in is picked.
Certain machines require even/odd register pairs for some operands and results.
For example, consider the division instruction of the form:
    D x, y
where x, the dividend, is the even register of an even/odd register pair, and y represents the divisor.
6. Evaluation order
The order in which the computations are performed can affect the efficiency of the
target code. Some computation orders require fewer registers to hold intermediate
results than others.
TARGET MACHINE
Familiarity with the target machine and its instruction set is a prerequisite for designing a
good code generator.
The target computer is a byte-addressable machine with 4 bytes to a word.
It has n general-purpose registers, R0, R1, . . . , Rn-1.
It has two-address instructions of the form:
op source, destination
where, op is an op-code, and source and destination are data fields.
The source and destination of an instruction are specified by combining registers and
memory locations with address modes.
Address modes with their assembly-language forms:

    MODE        FORM    ADDRESS                 ADDED COST
    absolute    M       M                       1
    register    R       R                       0
    indexed     c(R)    c + contents(R)         1
    literal     #c      c                       1

For example: MOV R0, M stores the contents of register R0 into memory location M;
MOV 4(R0), M stores the value contents(4 + contents(R0)) into M.
Instruction costs :
Instruction cost = 1+cost for source and destination address modes. This cost corresponds
to the length of the instruction.
Address modes involving registers have cost zero.
Address modes involving memory location or literal have cost one.
Instruction length should be minimized if space is important. Doing so also minimizes the
time taken to fetch and perform the instruction.
For example : MOV R0, R1 copies the contents of register R0 into R1. It has cost one,
since it occupies only one word of memory.
The three-address statement a := b + c can be implemented by many different instruction
sequences:

i)  MOV b, R0
    ADD c, R0       cost = 6
    MOV R0, a

ii) MOV b, a
    ADD c, a        cost = 6
In order to generate good code for target machine, we must utilize its addressing
capabilities efficiently.
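A toy Python rendering of this cost model, covering only the address modes tabulated above (representing operands as strings is an assumption of the sketch):

    def operand_cost(operand):
        # Register mode Rn costs 0; absolute, indexed and literal modes
        # each occupy an extra word in the instruction, hence cost 1.
        if operand.startswith('R') and operand[1:].isdigit():
            return 0
        return 1

    def instruction_cost(src, dst):
        return 1 + operand_cost(src) + operand_cost(dst)

For instance, instruction_cost('b', 'R0') gives 2, so sequence i) above costs 2 + 2 + 2 = 6, matching the notes, while MOV R0, R1 costs 1.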
Static allocation
Implementation of the call statement:
The following code implements a call in the static-allocation scheme:
    MOV #here + 20, callee.static_area /* save the return address */
    GOTO callee.code_area /* transfer control to the target code for the called procedure */
where,
    callee.static_area – address of the activation record of the called procedure
    callee.code_area – address of the first instruction for the called procedure
    #here + 20 – literal return address, which is the address of the instruction following the GOTO
Implementation of the return statement:
    GOTO *callee.static_area
This transfers control to the address saved at the beginning of the activation record.
The statement HALT is the final instruction that returns control to the operating system.
Stack allocation
Static allocation can become stack allocation by using relative addresses for storage in
activation records. In stack allocation, the position of an activation record is stored in a register,
so words in activation records can be accessed as offsets from the value in this register.
Initialization of the stack:
    MOV #stackstart, SP /* initialize the stack pointer */
    code for the first procedure
    HALT /* terminate execution */
Implementation of the call statement:
    ADD #caller.recordsize, SP
    MOV #here + 16, *SP /* save the return address */
    GOTO callee.code_area
where,
    caller.recordsize – size of the caller's activation record
    #here + 16 – address of the instruction following the GOTO
Basic Blocks
A basic block is a sequence of consecutive statements in which flow of control enters at the
beginning and leaves at the end without halt or possibility of branching except at the end.
Algorithm: Partition into basic blocks.
Input: A sequence of three-address statements.
Output: A list of basic blocks with each three-address statement in exactly one block.
Method:
1. We first determine the set of leaders, the first statements of basic blocks. The rules
we use are the following (a sketch follows this algorithm):
a. The first statement is a leader.
b. Any statement that is the target of a conditional or unconditional goto is a
leader.
c. Any statement that immediately follows a goto or conditional goto statement
is a leader.
2. For each leader, its basic block consists of the leader and all statements up to but not
including the next leader or the end of the program.
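A direct Python transcription of the leader-finding rules, assuming each three-address statement is a tuple whose first element names its kind and whose last element, for jumps, is the index of the target statement (this representation is an assumption of the sketch):

    def find_leaders(code):
        leaders = {0}                     # rule (a): the first statement
        for i, instr in enumerate(code):
            if instr[0] in ('goto', 'if_goto'):
                leaders.add(instr[-1])    # rule (b): target of a jump
                if i + 1 < len(code):
                    leaders.add(i + 1)    # rule (c): statement after a jump
        return sorted(leaders)

    def basic_blocks(code):
        L = find_leaders(code)
        return [code[a:b] for a, b in zip(L, L[1:] + [len(code)])]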
Consider the following source code for computing the dot product of two vectors a and b of length 20:

begin
    prod := 0;
    i := 1;
    do begin
        prod := prod + a[i] * b[i];
        i := i + 1;
    end
    while i <= 20
end

The corresponding three-address code is:

(1)  prod := 0
(2)  i := 1
(3)  t1 := 4 * i
(4)  t2 := a[t1]
(5)  t3 := 4 * i
(6)  t4 := b[t3]
(7)  t5 := t2 * t4
(8)  t6 := prod + t5
(9)  prod := t6
(10) t7 := i + 1
(11) i := t7
(12) if i <= 20 goto (3)

By rule (a), statement (1) is a leader; by rule (b), statement (3), the target of the jump in (12), is also a leader. Therefore statements (1)-(2) form basic block B1 and statements (3)-(12) form basic block B2.
Transformations on Basic Blocks:
A number of transformations can be applied to a basic block without changing the set of
expressions computed by the block. Two important classes of transformation are :
Structure-preserving transformations
Algebraic transformations
1. Structure-preserving transformations:
a) Common subexpression elimination:
    a := b + c        a := b + c
    b := a – d        b := a – d
    c := b + c        c := b + c
    d := a – d        d := b
Since the second and fourth statements compute the same expression (a – d), the basic block can be
transformed as above, replacing d := a – d by d := b.
b) Dead-code elimination:
Suppose x is dead, that is, never subsequently used, at the point where the statement x : =
y + z appears in a basic block. Then this statement may be safely removed without changing
the value of the basic block.
c) Renaming temporary variables:
A statement t := b + c (where t is a temporary) can be changed to u := b + c (where u is a new
temporary), with all uses of this instance of t changed to u, without changing the value of the
basic block.
d) Interchange of statements:
Suppose a block has the two adjacent statements:
t1 : = b + c
t2 : = x + y
We can interchange the two statements without affecting the value of the block if and
only if neither x nor y is t1 and neither b nor c is t2.
2. Algebraic transformations:
Algebraic transformations can be used to change the set of expressions computed by a basic
block into an algebraically equivalent set.
Examples:
i) x : = x + 0 or x : = x * 1 can be eliminated from a basic block without changing the set of
expressions it computes.
ii) The exponential statement x : = y * * 2 can be replaced by x : = y * y.
Flow Graphs
Flow graph is a directed graph containing the flow-of-control information for the set of
basic blocks making up a program.
The nodes of the flow graph are basic blocks. It has a distinguished initial node.
E.g.: Flow graph for the vector dot product is given as follows:
B1:
    prod := 0
    i := 1
B2:
    t1 := 4 * i
    t2 := a [ t1 ]
    t3 := 4 * i
    t4 := b [ t3 ]
    t5 := t2 * t4
    t6 := prod + t5
    prod := t6
    t7 := i + 1
    i := t7
    if i <= 20 goto B2
B1 is the initial node. B2 immediately follows B1, so there is an edge from B1 to B2. The
target of the jump from the last statement of B2 is the first statement of B2, so there is also
an edge from B2 (last statement) to B2 (first statement).
B1 is the predecessor of B2, and B2 is a successor of B1 (B2 is also its own predecessor and
successor).
Loops
A loop is a collection of nodes in a flow graph such that all nodes in the collection are strongly
connected (from any node in the loop to any other there is a path of length one or more wholly
within the loop), and the collection has a unique entry (the only way to reach a node of the loop
from a node outside the loop is through the entry).
NEXT-USE INFORMATION
Next-use information records, for each name in a three-address statement, whether and where
that name will be used again within the block. If the name in a register is no longer needed,
then we remove the name from the register, and the register can be used to store some other
name.
Symbol Table:
[Table residue: a symbol-table fragment recording next-use information — for example, y: live, next use in statement i; z: live, next use in statement i. Depending on where the operands currently reside, the instruction generated for an addition may be ADD z, Ri or ADD Rj, Ri.]
A register descriptor is used to keep track of what is currently in each registers. The
register descriptors show that initially all the registers are empty.
An address descriptor stores the location where the current value of the name can be
found at run time.
A code-generation algorithm:
The algorithm takes as input a sequence of three-address statements constituting a basic block.
For each three-address statement of the form x : = y op z, perform the following actions:
1. Invoke a function getreg to determine the location L where the result of the computation y op
z should be stored.
2. Consult the address descriptor for y to determine y‟, the current location of y. Prefer the
register for y‟ if the value of y is currently both in memory and a register. If the value of y is
not already in L, generate the instruction MOV y’ , L to place a copy of y in L.
3. Generate the instruction OP z’, L, where z’ is a current location of z (preferring a register if
the value of z is in both memory and a register). Update the address descriptor of x to indicate
that x is in location L; if L is a register, update its descriptor to indicate that it contains the
value of x, and remove x from all other register descriptors.
4. If the current values of y or z have no next uses, are not live on exit from the block, and are in
registers, alter the register descriptor to indicate that, after execution of x := y op z, those
registers will no longer contain y or z.
The assignment d : = (a-b) + (a-c) + (a-c) might be translated into the following three-
address code sequence:
t:=a–b
u:=a–c
v:=t+u
d:=v+u
with d live at the end.
With all registers initially empty, the algorithm produces a code sequence along the following
lines (a reconstruction of the standard worked example; the exact register choices depend on
getreg):
    t := a – b      MOV a, R0
                    SUB b, R0
    u := a – c      MOV a, R1
                    SUB c, R1
    v := t + u      ADD R1, R0
    d := v + u      ADD R1, R0
                    MOV R0, d
The table shows the code sequences generated for the indexed assignment statements
a := b[i] and a[i] := b (assuming i is in register Ri):

    Statement       Code            Cost
    a := b[i]       MOV b(Ri), a    2
    a[i] := b       MOV b, a(Ri)    2

The table shows the code sequences generated for the pointer assignments
a := *p and *p := a (assuming p is in register Rp):

    Statement       Code            Cost
    a := *p         MOV *Rp, a      2
    *p := a         MOV a, *Rp      2

Conditional statements can be implemented using condition codes. For example:

    Statement               Code
    x := y + z              MOV y, R0
    if x < 0 goto z         ADD z, R0
                            MOV R0, x
                            CJ< z
A DAG for a basic block is a directed acyclic graph with the following labels on nodes:
1. Leaves are labeled by unique identifiers, either variable names or constants.
2. Interior nodes are labeled by an operator symbol.
3. Nodes are also optionally given a sequence of identifiers for labels to store the
computed values.
DAGs are useful data structures for implementing transformations on basic blocks.
It gives a picture of how the value computed by a statement is used in subsequent
statements.
It provides a good way of determining common sub-expressions.
Algorithm for construction of DAG
Input: A basic block in which each statement is of one of the following forms:
    Case (i)   x := y OP z
    Case (ii)  x := OP y
    Case (iii) x := y
Output: A DAG for the basic block containing the following information:
1. A label for each node. For leaves, the label is an identifier. For interior nodes, an
operator symbol.
2. For each node, a list of attached identifiers to hold the computed values.
Method:
Step 1: If node(y) is undefined, create a leaf labeled y and let node(y) be this node. In case (i),
if node(z) is undefined, create a leaf labeled z and let that leaf be node(z).
Step 2: For case (i), determine whether there is a node labeled OP whose left child is node(y)
and whose right child is node(z). If not, create such a node; let n denote it.
For case (ii), determine whether there is a node labeled OP with one child node(y). If not,
create such a node; let n denote it.
For case (iii), let n be node(y).
Step 3: Delete x from the list of attached identifiers for node(x). Append x to the list of attached
identifiers for the node n found in step 2, and set node(x) to n.
1. t1 := 4* i
2. t2 := a[t1]
3. t3 := 4* i
4. t4 := b[t3]
5. t5 := t2*t4
6. t6 := prod+t5
7. prod := t6
8. t7 := i+1
9. i := t7
10. if i<=20 goto (1)
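A compact Python sketch of the method for case (i), using a dictionary keyed by (OP, left, right) to detect an existing interior node, in the style of value numbering; all class and function names here are assumptions of the sketch:

    class Node:
        def __init__(self, label, children=()):
            self.label = label
            self.children = children
            self.ids = []                  # attached identifiers

    def dag_assign(x, op, y, z, node_of, interior):
        # Step 1: make sure leaves exist for y and z.
        ny = node_of.setdefault(y, Node(y))
        nz = node_of.setdefault(z, Node(z))
        # Step 2: reuse an existing node labeled op with these children.
        key = (op, id(ny), id(nz))
        n = interior.get(key)
        if n is None:
            n = interior[key] = Node(op, (ny, nz))
        # Step 3: move the identifier x onto n.
        old = node_of.get(x)
        if old is not None and x in old.ids:
            old.ids.remove(x)
        n.ids.append(x)
        node_of[x] = n
        return n

Running statements (1)-(9) of the block above through dag_assign (with constants and the array accesses suitably encoded) makes both occurrences of 4 * i map to a single node, which is exactly the common-subexpression detection described earlier.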
Stages in DAG Construction
[Figure: the successive stages in the construction of the DAG for the three-address code above, processing statements (1) through (10) in order.]
Application of DAGs:
1. Automatic detection of common subexpressions.
2. Determining which identifiers have their values used in the block.
3. Determining which statements compute values that could be used outside the block.
GENERATING CODE FROM DAGs
The advantage of generating code for a basic block from its DAG representation is that from a
DAG we can more easily see how to rearrange the order of the final computation sequence than
we can starting from a linear sequence of three-address statements or quadruples.
Consider the block whose three-address code, in its original order, is t1 := a + b; t2 := c + d;
t3 := e – t2; t4 := t1 – t3 (with t4 live on exit). Generated in this order with two registers, the
code is:
    MOV a, R0
    ADD b, R0
    MOV c, R1
    ADD d, R1
    MOV R0, t1
    MOV e, R0
    SUB R1, R0
    MOV t1, R1
    SUB R0, R1
    MOV R1, t4
If we rearrange the computation so that t1 is computed immediately before it is used:
    t2 := c + d
    t3 := e – t2
    t1 := a + b
    t4 := t1 – t3
then the same strategy yields the code sequence:
    MOV c, R0
    ADD d, R0
    MOV e, R1
    SUB R0, R1
    MOV a, R0
    ADD b, R0
    SUB R1, R0
    MOV R0, t4
In this order, two instructions MOV R0 , t1 and MOV t1 , R1 have been saved.
A Heuristic ordering for Dags
The heuristic ordering algorithm attempts to make the evaluation of a node immediately follow
the evaluation of its leftmost argument.
Algorithm: Node listing (the lines are numbered for reference below):
(1) while unlisted interior nodes remain do begin
(2)     select an unlisted node n, all of whose parents have been listed;
(3)     list n;
(4)     while the leftmost child m of n has no unlisted parents and is not a leaf do begin
(5)         list m;
(6)         n := m
        end
    end

[Figure: DAG for the example. Node 1 (*) has children 2 (+) and 3 (–); node 2 (+) has children 6 and 4; node 3 (–) has children 4 and 12 (e); node 4 (*) has children 5 (–) and 8 (+); node 5 (–) has children 6 and 7 (c); node 6 (+) has leaf children 9 (a) and 10 (b); node 8 (+) has leaf children 11 (d) and 12 (e).]
Initially, the only node with no unlisted parents is 1 so set n=1 at line (2) and list 1 at line (3).
Now, the left argument of 1, which is 2, has its parents listed, so we list 2 and set n=2 at line (6).
Now, at line (4) we find the leftmost child of 2, which is 6, has an unlisted parent 5. Thus we
select a new n at line (2), and node 3 is the only candidate. We list 3 and proceed down its left
chain, listing 4, 5 and 6. This leaves only 8 among the interior nodes so we list that.
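A Python sketch of the listing loop, with children and parents maps describing the DAG (names assumed); on the example above it reproduces the listing order 1, 2, 3, 4, 5, 6, 8:

    def node_listing(interior, children, parents):
        order = []
        unlisted = set(interior)
        def ready(m):                     # all of m's parents listed?
            return all(p not in unlisted for p in parents[m])
        while unlisted:
            n = next(m for m in unlisted if ready(m))   # line (2)
            order.append(n)                             # line (3)
            unlisted.remove(n)
            while children[n]:
                m = children[n][0]                      # leftmost child
                if m in unlisted and ready(m):          # line (4)
                    order.append(m)                     # line (5)
                    unlisted.remove(m)
                    n = m                               # line (6)
                else:
                    break
        return order    # code is then generated in the reverse of this order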
Code sequence:
t8 : = d + e
t6 : = a + b
t5 : = t6 – c
t4 : = t5 * t8
t3 : = t4 – e
t2 : = t6 + t4
t1 : = t2 * t3
This ordering yields optimal code for the DAG, whatever the number of registers on the target
machine.