Compiler Design Lab Manual

The document outlines the hardware and software requirements for running lexical analysis tools, specifically Turbo C/C++, LEX, and YACC. It provides a detailed description of tokenization, including an algorithm and sample C code for extracting tokens from source code. Additionally, it explains the structure and functionality of LEX and YACC, including how to generate a lexical analyzer and the basics of regular expressions used in the process.

Hardware and Software Requirement

1. Software Requirement:

Turbo C / C++ compiler.

Download from the following link:

http://www.megaleecher.net/Download_Turbo_For_Windows

LEX tool: flex-2.5.4a-1.exe

YACC tool: bison-2.4.1-setup.exe

JFLAP tool for automata: www.JFLAP.org

2. Minimum hardware requirements:

 Intel Pentium III 800 MHz processor or higher version
 Intel chipset 810 motherboard or higher version
 14'' color monitor or larger
 Mouse
 Keyboard
 2 GB HDD or greater
 256 MB RAM or greater
 Intel® Pentium® 4, Intel Centrino®, Intel Xeon®, or Intel Core™ Duo (or compatible) processor
 Microsoft® Windows® 7 (64 bit) or Windows 8 (64 bit)
 4 GB of RAM
 2.5 GB of available hard-disk space for installation; additional free space required during installation (cannot install on removable flash storage devices)
 1024x768 display (1280x800 recommended)
 QuickTime 10.x software recommended



Experiment – 1
Tokenizing a file using C

Description: Lexical analysis is the process of converting a sequence of characters (such as in a
computer program or web page) into a sequence of tokens (strings with an identified
"meaning"). A program that performs lexical analysis may be called a lexer, tokenizer or
scanner.

Token
A token is a structure representing a lexeme that explicitly indicates its categorization for the
purpose of parsing. A category of token is what in linguistics might be called a part-of-speech.
Examples of token categories include "identifier" and "integer literal", although the set of
token categories differs between programming languages. The process of forming tokens from an input
stream of characters is called tokenization. Consider this expression in the C programming
language: sum = 3 + 2;
It is tokenized and represented by the following table:
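Lexeme    Token category
sum       identifier
=         assignment operator
3         integer literal
+         addition operator
2         integer literal
;         end of statement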



Aim: (Tokenizing)

Write a program that reads C/C++ source code from an unformatted file and extracts the various types of
tokens from it (e.g. keywords/variable names, operators, constant values).

Algorithm:-

1. Start the program


2. Include the header files.
3. Allocate memory for the variable by dynamic memory allocation function.
4. Use the file accessing functions to read the file.
5. Get the input file from the user.
6. Separate all the file contents into tokens and match each token against the keyword and operator lists.
7. Define all the keywords in a separate file and name it key.c
8. Define all the operators in a separate file and name it oper.c
9. Give the input program in a file and name it input.c
10. Finally print the output after recognizing all the tokens.
11. Stop the program.



Program:

#include<stdio.h>
#include<conio.h>
#include<ctype.h>
#include<string.h>
void main()
{
FILE *fi,*fo,*fop,*fk;
int flag=0,i=1;
char c,a[15],ch[15],file[20];
clrscr();
printf("\n Enter the File Name:");
scanf("%s",file);
fi=fopen(file,"r");
fo=fopen("inter.c","w");
fop=fopen("oper.c","r");
fk=fopen("key.c","r");
/* Pass 1: copy identifiers/numbers as they are; surround every other
   character with tabs and replace newlines with a $ marker */
c=getc(fi);
while(!feof(fi))
{
if(isalpha(c)||isdigit(c)||c=='['||c==']'||c=='.')
fputc(c,fo);
else
{
if(c=='\n')
fprintf(fo,"\t$\t");
else
fprintf(fo,"\t%c\t",c);
}
c=getc(fi);
}
fclose(fi);
fclose(fo);
/* Pass 2: read the intermediate file token by token and classify each one */
fi=fopen("inter.c","r");
printf("\n Lexical Analysis");
fscanf(fi,"%s",a);
printf("\n Line: %d\n",i++);
while(!feof(fi))
{
if(strcmp(a,"$")==0)
{
printf("\n Line: %d \n",i++);
fscanf(fi,"%s",a);
}
/* compare the token with the operator list in oper.c */
fscanf(fop,"%s",ch);
while(!feof(fop))
{
if(strcmp(ch,a)==0)
{
fscanf(fop,"%s",ch);
printf("\t\t%s\t:\t%s\n",a,ch);
flag=1;
}
fscanf(fop,"%s",ch);
}
rewind(fop);
/* compare the token with the keyword list in key.c */
fscanf(fk,"%s",ch);
while(!feof(fk))
{
if(strcmp(ch,a)==0)
{
printf("\t\t%s\t:\tKeyword\n",a);
flag=1;
}
fscanf(fk,"%s",ch);
}
rewind(fk);
/* anything that is neither an operator nor a keyword is a constant or identifier */
if(flag==0)
{
if(isdigit(a[0]))
printf("\t\t%s\t:\tConstant\n",a);
else
printf("\t\t%s\t:\tIdentifier\n",a);
}
flag=0;
fscanf(fi,"%s",a);
}
getch();
}

Input Files:
Oper.c



( openpara
) closepara
{ openbrace
} closebrace
< lesser
> greater
" doublequote
' singlequote
: colon
; semicolon
# preprocessor
= assign
== equal
% percentage
^ bitwise
& reference
* star
+ add
- sub
\ backslash
/ slash



Key.C
int
void
main
char
if
for
while
else
printf
scanf
FILE
Include
stdio.h
conio.h
iostream.h

Input.c
#include "stdio.h"
#include "conio.h"
void main()
{
int a=10,b,c;
a=b*c;
getch();
}



Output:-



Experiment 2
Implementation of Lexical Analyzer using Lex Tool

Description:

 Lex is a language for specifying lexical analyzers.


 There is a wide range of tools for the construction of lexical analyzers. The majority of these tools are
based on regular expressions.
 One of the traditional tools of that kind is lex.

Lex:-

 The lex tool is used as follows: a specification of the lexical analyzer is prepared by
creating a program lex.l in the lex language.
 Then lex.l is run through the lex compiler to produce a 'C' program lex.yy.c.
 The program lex.yy.c consists of a tabular representation of a transition diagram constructed
from the regular expressions of lex.l, together with a standard routine that uses the table to
recognize lexemes.
 lex.yy.c is run through the 'C' compiler to produce an object program a.out, which is the lexical
analyzer that transforms an input stream into a sequence of tokens.



Aim: (Tokenizing)
Use Lex and yacc to extract tokens from a given source code.

Algorithm:
1. First, a specification of a lexical analyzer is prepared by creating a program lexp.l in
the LEX language.
2. The lexp.l program is run through the LEX compiler to produce equivalent code
in the C language named lex.yy.c.
3. The program lex.yy.c consists of a table constructed from the regular expressions of
lexp.l, together with standard routines that use the table to recognize lexemes.
4. Finally, the lex.yy.c program is run through the C compiler to produce an object
program a.out, which is the lexical analyzer that transforms an input stream into a
sequence of tokens.



Program:
lexp.l
%{
#include <stdio.h>
#include <stdlib.h>
int COMMENT=0;
%}
identifier [a-zA-Z][a-zA-Z0-9]*
%%
#.* {printf("\n %s is a Preprocessor Directive",yytext);}
int |
float |
main |
if |
else |
printf |
scanf |
for |
char |
getch |
while {printf("\n %s is a Keyword",yytext);}
"/*" {COMMENT=1;}
"*/" {COMMENT=0;}
{identifier}\( {if(!COMMENT) printf("\n Function:\t %s",yytext);}
\{ {if(!COMMENT) printf("\n Block Begins");}
\} {if(!COMMENT) printf("\n Block Ends");}
{identifier}(\[[0-9]*\])? {if(!COMMENT) printf("\n %s is an Identifier",yytext);}
\".*\" {if(!COMMENT) printf("\n %s is a String",yytext);}
[0-9]+ {if(!COMMENT) printf("\n %s is a Number",yytext);}
\)(\;)? {if(!COMMENT){printf("\t");ECHO;printf("\n");}}
\( ECHO;
= {if(!COMMENT) printf("\n %s is an Assignment Operator",yytext);}
\<= |
\>= |
\< |
\> |
== {if(!COMMENT) printf("\n %s is a Relational Operator",yytext);}
.|\n ;
%%
int main(int argc, char **argv)
{
if(argc>1)
{
FILE *file;
file=fopen(argv[1],"r");
if(!file)
{
printf("\n Could not open the file: %s",argv[1]);
exit(0);
}
yyin=file;
}
yylex();
printf("\n\n");
return 0;
}
int yywrap()
{
return 0;
}



Output:
test.c
#include<stdio.h>
main()
{
int fact=1,n;
for(int i=1;i<=n;i++)
{ fact=fact*i; }
printf("Factorial Value of N is", fact);
getch();
}

$ lex lexp.l
$ cc lex.yy.c
$ ./a.out test.c



OUTPUT:
#include<stdio.h> is a Preprocessor Directive
Function: main( )
Block Begins
int is a Keyword
fact is an Identifier
= is an Assignment Operator
1 is a Number
n is an Identifier
Function: for(
int is a Keyword
i is an Identifier
= is an Assignment Operator
1 is a Number
i is an Identifier
<= is a Relational Operator
n is an Identifier
i is an Identifier
);
Block Begins
fact is an Identifier
= is an Assignment Operator
fact is an Identifier
i is an Identifier
Block Ends
Function: printf(
"Factorial Value of N is" is a String
fact is an Identifier );
Function: getch( );
Block Ends

Experiment-3

Study of LEX and YACC Tool

Description:

LEX-A Lexical analyzer generator:

Lex is a computer program that generates lexical analyzers ("scanners" or "lexers"). Lex is commonly
used with the yacc parser generator.

Lex reads an input stream specifying the lexical analyzer and outputs source code implementing the
lexer in the C programming language.

1. A lexer or scanner is used to perform lexical analysis, or the breaking up of an input stream
into meaningful units, or tokens.
2. For example, consider breaking a text file up into individual words.
3. Lex: a tool for automatically generating a lexer or scanner given a lex specification (.l file).

Structure of a Lex file

The structure of a Lex file is intentionally similar to that of a yacc file; files are divided up into three
sections, separated by lines that contain only two percent signs, as follows:

Definition section:
%%
Rules section:
%%
C code section:
<statements>

 The definition section is the place to define macros and to import header files written
in C. It is also possible to write any C code here, which will be copied verbatim into the
generated source file.
 The rules section is the most important section; it associates patterns with C
statements. Patterns are simply regular expressions. When the lexer sees some text in
the input matching a given pattern, it executes the associated C code. This is the basis
of how Lex operates.



 The C code section contains C statements and functions that are copied verbatim to the
generated source file. These statements presumably contain code called by the rules in
the rules section. In large programs it is more convenient to place this code in a
separate file and link it in at compile time.
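As a minimal sketch of this three-part layout (an illustrative example, not part of the original manual; the counters num_lines and num_chars are names chosen here), a complete Lex specification that counts lines and characters could look like:

%{
/* definition section: C declarations copied verbatim into lex.yy.c */
int num_lines = 0, num_chars = 0;
%}
%%
\n { ++num_lines; ++num_chars; }
.  { ++num_chars; }
%%
/* user code section: copied verbatim to the end of lex.yy.c */
int yywrap() { return 1; }
int main()
{
yylex();
printf("lines = %d, characters = %d\n", num_lines, num_chars);
return 0;
}

Running lex on this file and compiling the generated lex.yy.c with cc gives a program that reads standard input and reports the totals when it reaches end of file.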

Description:-
The lex command reads File or standard input, generates a C language program, and writes it to a file
named lex.yy.c. This file, lex.yy.c, is a compilable C language program. A C++ compiler also can compile
the output of the lex command. The -C flag renames the output file to lex.yy.C for the C++ compiler. The
C++ program generated by the lex command can use either STDIO or
IOSTREAMS. If the cpp define _CPP_IOSTREAMS is true during a C++ compilation, the program uses
IOSTREAMS for all I/O. Otherwise, STDIO is used.

The lex command uses rules and actions contained in File to generate a program, lex.yy.c, which can be
compiled with the cc command. The compiled lex.yy.c can then receive input, break the input into the
logical pieces defined by the rules in File, and run program fragments contained in the actions in File.

The generated program is a C language function called yylex. The lex command stores the yylex function
in a file named lex.yy.c. You can use the yylex function alone to recognize simple one-word input, or you
can use it with other C language programs to perform more difficult input analysis functions. For
example, you can use the lex command to generate a program that simplifies an input stream before
sending it to a parser program generated by the yacc command.
The yylex function analyzes the input stream using a program structure called a finite state machine.
This structure allows the program to exist in only one state (or condition) at a time. There is a finite
number of states allowed. The rules in File determine how the program moves from one state to
another. If you do not specify a File, the lex command reads standard input. It treats multiple files as a
single file.
Note: Since the lex command uses fixed names for intermediate and output files, you can have only one
program generated by lex in a given directory.

Regular Expression Basics


. : matches any single character except \n
* : matches 0 or more instances of the preceding regular expression
+ : matches 1 or more instances of the preceding regular expression
? : matches 0 or 1 of the preceding regular expression
| : matches the preceding or following regular expression
[ ] : defines a character class
( ) : groups the enclosed regular expression into a new regular expression
"..." : matches everything within the quotes literally
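Combining these, the pattern [0-9]+(\.[0-9]+)? matches an unsigned integer or a decimal number such as 42 or 3.14, and [a-zA-Z_][a-zA-Z_0-9]* matches a C-style identifier.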



Special Functions

• yytext
– where text matched most recently is stored
• yyleng
– number of characters in text most recently matched
• yylval
– associated value of current token
• yymore()
– append next string matched to current contents of yytext
• yyless(n)
– remove from yytext all but the first n characters
• unput(c)
– return character c to input stream
• yywrap()
– may be replaced by user
– The yywrap method is called by the lexical analyser whenever it inputs an EOF as the first character
when trying to match a regular expression
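For instance, a rule can use these variables directly (a hypothetical fragment for illustration):

[a-zA-Z]+ { printf("word '%s' of length %d\n", yytext, yyleng); }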

Files
y.output - Contains a readable description of the parsing tables and a report on conflicts generated by
grammar ambiguities.
y.tab.c - Contains an output file.
y.tab.h - Contains definitions for token names.
yacc.tmp - Temporary file.
yacc.debug - Temporary file.
yacc.acts - Temporary file.
/usr/ccs/lib/yaccpar - Contains parser prototype for C programs.
/usr/ccs/lib/liby.a - Contains a run-time library.

YACC: Yet Another Compiler-Compiler


Yacc is written in portable C. The class of specifications accepted is a very general one: LALR(1) grammars
with disambiguating rules.

Basic specification
Names refer to either tokens or non-terminal symbols. Yacc requires token names to be declared as
such. In addition, for reasons discussed in section 3, it is often desirable to include the lexical analyzer as
part of the specification file; it may be useful to include other programs as well. The sections are
separated by double percent "%%" marks. (The percent sign '%' is generally used in yacc specifications as an
escape character.) A full specification file looks like:
Declarations
%%



Rules
%%
Programs
The declaration section may be empty. Moreover, if the programs section is omitted, the second %%
mark may be omitted also; thus the smallest legal yacc specification is
%%
Rules
Blanks, tabs and newlines are ignored, except that they may not appear in
names or multi-character reserved symbols. Comments may appear wherever they are legal;
they are enclosed in /*...*/ as in C and PL/I.
The rules section is made up of one or more grammar rules. A rule has the form:
A : BODY ;
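For example (an illustrative fragment; NUMBER is assumed to be a token declared in the declarations section), a rule with several alternatives for an arithmetic expression can be written as:

expr : expr '+' expr
     | expr '*' expr
     | '(' expr ')'
     | NUMBER
     ;

Each alternative after the first is introduced by a vertical bar, and the whole rule is terminated by a semicolon.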
USING THE LEX PROGRAM WITH THE YACC PROGRAM

The Lex program recognizes only extended regular expressions and formats them into character
packages called tokens, as specified by the input file. When using the Lex program to make a lexical
analyzer for a parser, the lexical analyzer (created from the Lex command) partitions the input stream.
The parser (from the yacc command) assigns structure to the resulting pieces. You can also use other
programs along with programs generated by Lex or yacc commands.

A token is the smallest independent unit of meaning as defined by either the parser or the lexical
analyzer. A token can contain data, a language keyword, an identifier, or parts of the language syntax.

The yacc program looks for a lexical analyzer subroutine named yylex, which is generated by the lex
command. Normally, the default main program in the Lex library calls the yylex subroutine. However, if
the yacc command is loaded and its main program is used, yacc calls the yylex subroutine. In this case
each Lex rule should end with:
return (token);



where the appropriate token value is returned.

The yacc command assigns an integer value to each token defined in the yacc grammar file through a
#define preprocessor statement.
The lexical analyzer must have access to these macros to return the tokens to the parser. Use the yacc
-d option to create a y.tab.h file and include the y.tab.h file in the Lex specification file by adding the
following lines to the definition section of the Lex specification file:
%{
#include "y.tab.h"
%}
Alternatively, you can include the lex.yy.c file in the yacc output file by adding the following line after the
second %% (percent sign, percent sign) delimiter in the yacc grammar file:
#include "lex.yy.c"
The yacc library should be loaded before the Lex library to get a main program that invokes the yacc
parser. You can generate Lex and yacc programs in either order.

Evaluating Arithmetic Expression using different Operators (Calculator)

Aim:

Study the LEX and YACC tools and evaluate an arithmetic expression with parentheses, unary and
binary operators using Flex and Yacc. [Need to write the yylex() function and use it with Lex and
yacc.]

Algorithm:
1) Get the input from the user and Parse it token by token.

2) First identify the valid inputs that can be given for a program.

3) The Inputs include numbers, functions like LOG, COS, SIN, TAN, etc. and operators.

4) Define the precedence and the associativity of various operators like +,-,/,* etc.

5) Write codes for saving the answer into memory and displaying the result on the screen.

6) Write codes for performing various arithmetic operations.

7) Display the possible Error message that can be associated with this calculation.

8) Display the output on the screen else display the error message on the screen.

Program: CALC.L

%{
#include <stdio.h>
#include <stdlib.h>
#include "y.tab.h"
void yyerror(char *);
%}
%%
[a-z] { yylval = *yytext - 'a'; return VARIABLE; }
[0-9]+ { yylval = atoi(yytext); return INTEGER; }
[-+*/()=\n] { return *yytext; }
[ \t] ; /* skip whitespace */
. yyerror("invalid character");
%%
int yywrap(void)
{
return 1;
}

CALC.Y

%token INTEGER VARIABLE
%left '+' '-'
%left '*' '/'
%{
int yylex(void);
void yyerror(char *);
int sym[26]; /* one memory slot per single-letter variable */
%}
%%
PROG :
     | PROG STMT '\n'
     ;
STMT : EXPR { printf("\n Answer: %d\n", $1); }
     | VARIABLE '=' EXPR { sym[$1] = $3; }
     ;
EXPR : INTEGER
     | VARIABLE { $$ = sym[$1]; }
     | EXPR '+' EXPR { $$ = $1 + $3; }
     | EXPR '-' EXPR { $$ = $1 - $3; }
     | EXPR '*' EXPR { $$ = $1 * $3; }
     | EXPR '/' EXPR { $$ = $1 / $3; }
     | '(' EXPR ')' { $$ = $2; }
     ;
%%
void yyerror(char *s)
{
printf("\n %s", s);
return;
}
int main(void)
{
printf("\n Enter the Expression:");
yyparse();
return 0;
}



Output:
$ lex calc.l
$ yacc -d calc.y
$ cc y.tab.c lex.yy.c -ll -ly -lm
$ ./a.out
Enter the Expression: (5+4)*3
Answer: 27



Experiment-4

Using JFLAP, create a DFA from a given regular expression

Aim: Using JFLAP, create a DFA from a given regular expression. All types of error must be checked
during the conversion.

What is JFLAP: -

JFLAP program makes it possible to create and simulate automata. Learning about automata with pen
and paper can be difficult, time consuming and error-prone. With JFLAP we can create automata of
different types and it is easy to change them as we want. JFLAP supports creation of DFA and NFA,
Regular Expressions, PDA, Turing Machines, Grammars and more.

Setup: -

JFLAP is available from its homepage (www.JFLAP.org). From there press "Get JFLAP" and follow the
instructions. You will notice that JFLAP has a .jar extension, which means that you need Java to run
JFLAP. With Java correctly installed you can simply select the program to run it. You can also use a
command console and run it from the file's current directory with: java -jar JFLAP.jar

Using JFLAP: -

When you first start JFLAP you will see a small menu with a selection of eleven different automata and
rule sets. Choosing one of them will open the editor where you create chosen type of automata. Usually
you can create automata containing states and transitions but there is also creation of Grammar and
Regular Expression which is made with a text editor.

DFA from a given regular expression: -

First, we need to select Regular Expression from the JFLAP Menu.



Now you should have an empty window in front of you. You will have a couple of tools and features at
your disposal.

The toolbar contains the tools used to edit automata:

Attribute Editor Tool - changes properties and position of existing states and transitions.
State Creator Tool - creates new states.
Transition Creator Tool - creates transitions.
Deletion Tool - deletes states and transitions.
Undo/Redo - reverts or reapplies changes to the selected object according to its history.



Regular Expressions can be typed into JFLAP to be converted to an NFA.

Choose Regular Expression in the main menu, then just type the expression in the textbox. Definitions
for Regular Expressions in JFLAP:

 * Kleene Star
 + Union
 ! Empty String
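For example, the JFLAP expression (a+b)*a denotes all strings over the alphabet {a, b} that end in a, since + is union and * is the Kleene star.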

Correctly written expressions can then be converted to an NFA. To convert your expression select
Convert → Convert to NFA. The conversion will begin with two states and a transition with your Regular
Expression. With the (D)e-expressionify Transition tool you can break down the Regular Expression into
smaller parts. Each transition will contain a sub expression. The next step is to link every rule with
lambda transitions. Add new transition between states that should be connected with the Transition
Tool. If you are unsure what to do you can select Do Step to automatically make the next step. If you
want the NFA immediately Do All creates the whole NFA for you.

You can notice how the conversion differs depending on how the Regular Expression looks. For
example, the expression a+b results in a fork, where either 'a' or 'b' can be chosen.

Now finally convert the NFA to a DFA:



Experiment-5

Create LL(1) parse table for a given CFG and hence Simulate LL(1) Parsing

Aim: Using JFLAP create LL(1) parse table for a given CFG and hence Simulate LL(1) parsing

Implementation: -

Step 1: Choose Grammar from the JFLAP menu and enter the grammar for which you want to create the LL(1) parsing table.



Step 2: Select Input from the menu and choose Build LL(1) parse table.

Step 3: Now select Parse to use that table to build the parse tree.

Result: We created the LL(1) parse table for the given CFG and simulated LL(1) parsing.



Experiment-6

Create SLR(1) parse table for a given grammar

Aim: Using JFLAP create SLR(1) parse table for a given grammar. Simulate parsing and output the parse
tree in the proper format.

Implementation: -

Step 1: Choose Grammar from the JFLAP menu and enter the grammar for which you want to create the SLR(1) parsing table.



Step 2: Select Input from the menu and choose Build SLR(1) parse table.

Step 3: Now select Parse to use that table to build the parse tree.

Result: We created the SLR(1) parse table for the given grammar, simulated parsing, and output the
parse tree in the proper format.



Experiment-7

FIRST and FOLLOW of all the variables


Aim: Write a suitable data structure to store a Context Free Grammar. Prerequisite is to eliminate left
recursion from the grammar before storing. Write functions to find FIRST and FOLLOW of all the
variables. [May use unformatted file / array to store the result].

Algorithm:

FIRST():

If X is a terminal, then FIRST(X) = { X }.


If X -> ε is a production rule, then add ε to FIRST(X).
If X -> Y1 Y2 Y3 ... Yn is a production, then:
FIRST(X) = FIRST(Y1)
If FIRST(Y1) contains ε, then FIRST(X) = { FIRST(Y1) - ε } U { FIRST(Y2) }, and so on.
If FIRST(Yi) contains ε for all i = 1 to n, then add ε to FIRST(X).

FOLLOW():

FOLLOW(S) = { $ } // where S is the start symbol (non-terminal)


If A -> pBq is a production, where p, B and q are any grammar symbols,
then everything in FIRST(q) except ε is in FOLLOW(B).
If A -> pB is a production, then everything in FOLLOW(A) is in FOLLOW(B).
If A -> pBq is a production and FIRST(q) contains ε,
then FOLLOW(B) contains { FIRST(q) - ε } U FOLLOW(A).
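As a quick worked example (not from the original manual), consider the grammar E -> T X, X -> + T X | ε, T -> ( E ) | id. Then FIRST(T) = { (, id }, FIRST(E) = FIRST(T) = { (, id }, and FIRST(X) = { +, ε }. For the FOLLOW sets: FOLLOW(E) = { $, ) } (E is the start symbol and also appears inside parentheses), FOLLOW(X) = FOLLOW(E) = { $, ) } (X only appears at the end of right-hand sides), and FOLLOW(T) = { +, $, ) } (T is followed by X, whose FIRST set contributes +, and since X can derive ε, the FOLLOW sets of the left-hand sides are added).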

Program:

#include<stdio.h>
#include<conio.h>
#include<math.h>
#include<string.h>
#include<ctype.h>
#include<stdlib.h>
int n,m=0,p,i=0,j=0;
char a[10][10],f[10];
void follow(char c);
void first(char c);
int main()
{
int i,z;
char c,ch;
clrscr();
printf("Enter the no of productions:\n");
scanf("%d",&n);
printf("Enter the productions:\n");
for(i=0;i<n;i++)
scanf("%s%c",a[i],&ch);
do{
m=0;
printf("Enter the element whose FIRST & FOLLOW is to be found:");
scanf("%c",&c);
first(c);
printf("First(%c)={",c);
for(i=0;i<m;i++)
printf("%c",f[i]);
printf("}\n");
strcpy(f," ");
m=0;
follow(c);
printf("Follow(%c)={",c);
for(i=0;i<m;i++)
printf("%c",f[i]);
printf("}\n");
printf("Continue(0/1)?");
scanf("%d%c",&z,&ch);
}while(z==1);
return(0);
}
void first(char c)
{
int k;
/* a terminal is its own FIRST */
if(!isupper(c))
f[m++]=c;
for(k=0;k<n;k++)
{
if(a[k][0]==c)
{
if(a[k][2]=='$') /* epsilon production: take FOLLOW of the left-hand side */
follow(a[k][0]);
else if(islower(a[k][2]))
f[m++]=a[k][2];
else
first(a[k][2]);
}
}
}
void follow(char c)
{
/* the start symbol gets $ in its FOLLOW set */
if(a[0][0]==c)
f[m++]='$';
for(i=0;i<n;i++)
{
for(j=2;j<strlen(a[i]);j++)
{
if(a[i][j]==c)
{
if(a[i][j+1]!='\0')
first(a[i][j+1]);
if(a[i][j+1]=='\0' && c!=a[i][0])
follow(a[i][0]);
}
}
}
}



Output:



Experiment-8
Generate a three address code for a given expression

AIM :
To write a C program to generate a three address code for a given expression.

ALGORITHM:

Step1: Begin the program


Step2 : The expression is read from the file using a file pointer
Step3 : Each string is read and the total no. of strings in the file is calculated.
Step4: Each string is compared with an operator; if any operator is seen then the previous string and next
string are concatenated and stored in a first temporary value and the three address code expression is
printed
Step5 : Suppose if another operand is seen then the first temporary value is concatenated to the next string
using the operator and the expression is printed.
Step6 : The final temporary value is replaced to the left operand value.
Step7 : End the program.
PROGRAM: //program for three address code generation
#include<stdio.h>
#include<conio.h>
#include<stdlib.h>
#include<string.h>

struct three
{
char data[10],temp[7];
}s[30];
void main()
{
char d1[7],d2[7]="t";
int i=0,j=1,len=0;
FILE *f1,*f2;
clrscr();
f1=fopen("sum.txt","r");
f2=fopen("out.txt","w");
/* read the expression word by word */
while(fscanf(f1,"%s",s[len].data)!=EOF)
len++;
/* generate the first temporary t1 for the first operator */
itoa(j,d1,10);
strcat(d2,d1);
strcpy(s[j].temp,d2);
strcpy(d1,"");
strcpy(d2,"t");
if(!strcmp(s[3].data,"+"))
{
fprintf(f2,"%s=%s+%s",s[j].temp,s[i+2].data,s[i+4].data);
j++;
}
else if(!strcmp(s[3].data,"-"))
{
fprintf(f2,"%s=%s-%s",s[j].temp,s[i+2].data,s[i+4].data);
j++;
}
/* for every further operator, combine the previous temporary with the next operand */
for(i=4;i<len-2;i+=2)
{
itoa(j,d1,10);
strcat(d2,d1);
strcpy(s[j].temp,d2);
if(!strcmp(s[i+1].data,"+"))
fprintf(f2,"\n%s=%s+%s",s[j].temp,s[j-1].temp,s[i+2].data);
else if(!strcmp(s[i+1].data,"-"))
fprintf(f2,"\n%s=%s-%s",s[j].temp,s[j-1].temp,s[i+2].data);
strcpy(d1,"");
strcpy(d2,"t");
j++;
}
/* finally assign the last temporary to the left-hand side */
fprintf(f2,"\n%s=%s",s[0].data,s[j-1].temp);
fclose(f1);
fclose(f2);
getch();
}

Input: sum.txt

out = in1 + in2 + in3 - in4



Output : out.txt

t1=in1+in2
t2=t1+in3
t3=t2-in4
out=t3

RESULT:
Thus a C program to generate a three address code for a given expression is written, executed and the
output is verified.



Experiment-9

PROGRAM TO RECOGNIZE A VALID ARITHMETIC EXPRESSION

AIM:

To write a Yacc program to recognize a valid arithmetic expression using Yacc.

ALGORITHM:

Step1: Start the program.

Step2: Read an expression.

Step3: Check the validity of the given expression according to the grammar rules using yacc.

Step4: Using the expression rules, print the result for the given input.
Step5: Stop the program.



PROGRAM:

//Program to recognize a valid arithmetic expression that uses operator +, -, * and /

LEX PART:

%{
#include "y.tab.h"
%}

%%
[a-zA-Z_][a-zA-Z_0-9]* return id;
[0-9]+(\.[0-9]*)? return num;
[+/*] return op;
. return yytext[0];
\n return 0;
%%
int yywrap()
{
return 1;
}

YACC PART:

%{
#include<stdio.h>
int valid=1;
%}

%token num id op
%%
start : id '=' s ';'
      ;
s : id x
  | num x
  | '-' num x
  | '(' s ')' x
  ;
x : op s
| '-' s
|
;
%%
int yyerror()
{
valid=0;
printf("\nInvalid expression!\n");
return 0;
}



int main()
{
printf("\nEnter the expression:\n");
yyparse();
if(valid)
{
printf("\nValid expression!\n");
}
}



OUTPUT:
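A typical run (an assumed example; the file names arith.l and arith.y and the build commands mirror the earlier experiments and are assumptions, not part of the original manual) would be:

$ lex arith.l
$ yacc -d arith.y
$ cc y.tab.c lex.yy.c -ll -ly
$ ./a.out

Enter the expression:
a=(b+10)*c;

Valid expression!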



Experiment-10

IMPLEMENTATION OF SYMBOL TABLE USING C (WITH OUTPUT)

AIM:
To write a program for implementing Symbol Table using C.

ALGORITHM:

Step1: Start the program for performing insert, display, delete, search and modify option in symbol table
Step2: Define the structure of the Symbol Table
Step3: Enter the choice for performing the operations in the symbol Table
Step4: If the entered choice is 1, search the symbol table for the symbol to be inserted. If the symbol is
already present, it displays “Duplicate Symbol”. Else, insert the symbol and the corresponding address in
the symbol table.
Step5: If the entered choice is 2, the symbols present in the symbol table are displayed.
Step6: If the entered choice is 3, the symbol to be deleted is searched in the symbol table.
Step7: If it is not found in the symbol table it displays “Label Not found”. Else, the symbol is deleted.
Step8: If the entered choice is 5, the symbol to be modified is searched in the symbol table.



PROGRAM CODE:
//Implementation of symbol table
#include<stdio.h>
#include<conio.h>
#include<ctype.h>
#include<stdlib.h>
#include<string.h>
#include<math.h>
void main()
{
int i=0,j=0,x=0,n;
void *p,*add[15];
char ch,srch,b[15],d[15],c;
clrscr();
printf("Expression terminated by $:");
while((c=getchar())!='$')
{
b[i]=c;
i++;
}
n=i-1;
printf("Given Expression:");
i=0;
while(i<=n)
{
printf("%c",b[i]);
i++;
}
printf("\n Symbol Table\n");
printf("Symbol \t addr \t type");
while(j<=n)
{
c=b[j];
if(isalpha(toascii(c)))
{
/* identifier: allocate an address and record the symbol */
p=malloc(c);
add[x]=p;
d[x]=c;
printf("\n%c \t %d \t identifier\n",c,p);
x++;
j++;
}
else
{
ch=c;
if(ch=='+'||ch=='-'||ch=='*'||ch=='=')
{
/* operator: allocate an address and record the symbol */
p=malloc(ch);
add[x]=p;
d[x]=ch;
printf("\n %c \t %d \t operator\n",ch,p);
x++;
j++;
}
else
j++; /* skip any other character */
}
}
}



OUTPUT:

RESULT:

Thus the program for symbol table has been executed successfully.



Experiment-11

CODE OPTIMIZATION TECHNIQUE

AIM:

To write a program for implementation of Code Optimization Technique.

ALGORITHM:

Step1: Generate the program for factorial program using for and do-while loop to specify optimization
technique.

Step2: In for loop variable initialization is activated first and the condition is checked next. If the condition is
true the corresponding statements are executed and specified increment / decrement operation is
performed.

Step3: The for loop operation is activated till the condition failure.

Step4: In do-while loop the variable is initialized and the statements are executed then the condition
checking and increment / decrement operation is performed.

Step5: When comparing the for and do-while loops for optimization, do-while is better because the
statements are executed first and only then is the condition checked. So, during statement execution itself we
can detect a problem with the result and need not wait for the condition to be evaluated.

Step6: Finally, when considering code optimization of loops, do-while performs best.
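The program below performs two classical optimizations on simple assignment statements: dead code elimination and common subexpression elimination. As an assumed illustration (not from the original manual), given the intermediate code

a=b+c
d=b+c
e=5
x=a+d

dead code elimination removes e=5, since e is never used on any right-hand side, and common subexpression elimination recognizes that d recomputes b+c, reuses a in its place, and leaves

a=b+c
x=a+a

as the optimized code.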



PROGRAM CODE:
//Code Optimization Technique
#include<stdio.h>
#include<string.h>
struct op
{
char l;
char r[20];
}op[10],pr[10];
void main()
{
int a,i,k,j,n,z=0,m,q;
char *p,*l;
char temp,t;
char *tem;
printf("Enter the Number of Values:");
scanf("%d",&n);
for(i=0;i<n;i++)
{
printf("left: ");
scanf(" %c",&op[i].l);
printf("right: ");
scanf(" %s",op[i].r);
}
printf("Intermediate Code\n");
for(i=0;i<n;i++)
{
printf("%c=",op[i].l);
printf("%s\n",op[i].r);
}
/* Dead code elimination: keep a statement only if its left-hand side
   is used in some right-hand side (the last statement is always kept) */
for(i=0;i<n-1;i++)
{
temp=op[i].l;
for(j=0;j<n;j++)
{
p=strchr(op[j].r,temp);
if(p)
{
pr[z].l=op[i].l;
strcpy(pr[z].r,op[i].r);
z++;
}
}
}
pr[z].l=op[n-1].l;
strcpy(pr[z].r,op[n-1].r);
z++;
printf("\nAfter Dead Code Elimination\n");
for(k=0;k<z;k++)
{
printf("%c\t=",pr[k].l);
printf("%s\n",pr[k].r);
}
/* Common subexpression elimination: if two statements have the same
   right-hand side, reuse the earlier result and rename later uses */
for(m=0;m<z;m++)
{
tem=pr[m].r;
for(j=m+1;j<z;j++)
{
p=strstr(tem,pr[j].r);
if(p)
{
t=pr[j].l;
pr[j].l=pr[m].l;
for(i=0;i<z;i++)
{
l=strchr(pr[i].r,t);
if(l)
{
a=l-pr[i].r;
printf("pos: %d\n",a);
pr[i].r[a]=pr[m].l;
}
}
}
}
}
printf("Eliminate Common Expression\n");
for(i=0;i<z;i++)
{
printf("%c\t=",pr[i].l);
printf("%s\n",pr[i].r);
}
/* remove duplicate statements produced by the renaming step */
for(i=0;i<z;i++)
{
for(j=i+1;j<z;j++)
{
q=strcmp(pr[i].r,pr[j].r);
if((pr[i].l==pr[j].l)&&!q)
{
pr[i].l='\0';
}
}
}
printf("Optimized Code\n");
for(i=0;i<z;i++)
{
if(pr[i].l!='\0')
{
printf("%c=",pr[i].l);
printf("%s\n",pr[i].r);
}
}
}



OUTPUT:

