Department of Computer Science & Engineering
Course Name: Computer Networking Lab
Course Code: CSE 306
Student Name: Prosenjit Das
Student ID: 182015030
Course Section: 182 EC
Date of Submission: 05/01/2021
Submitted to: Mr. Abdullah Al Arif, Lecturer, Green University of Bangladesh
Submitted by: Prosenjit Das, ID: 182015030
Index Page
Experiment No Experiment Name Experiment Date Submission Date
01 Remove Comment Line 06/11/2020 05/01/2021
02 Token Separation 13/11/2020 05/01/2021
03 Production Rules 04/12/2020 05/01/2021
04 Regular Expression 11/12/2020 05/01/2021
Experiment No - 01
Title:
Write a C program to remove comment lines from a program.
Objective:
In this experiment we will learn how to remove single-line and multi-line
comments from a program.
Theory:
Generally, there are two types of comments in the editors we use. These
are:
i. Single-line comments.
ii. Multi-line comments.
Comment removal happens in the lexical analyzer.
Lexical analysis is the first phase of a compiler. It takes the modified
source code from the language preprocessor, which is written in the form of
sentences. The lexical analyzer breaks this syntax into a series of tokens,
removing any whitespace and comments from the source code.
If the lexical analyzer finds an invalid token, it generates an error. The
lexical analyzer works closely with the syntax analyzer. It reads character
streams from the source code, checks for legal tokens, and passes the data
to the syntax analyzer when it is demanded.
Procedure:
Here is the code to remove comments from a program:
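A minimal illustrative sketch is given below (not necessarily the original lab listing). It assumes the source to be cleaned is in a file named input.c, skips // comments up to the end of the line, and skips /* ... */ comments up to the closing delimiter.

/*
 * Minimal sketch: read C source from "input.c" (assumed file name)
 * and write it to stdout with comments removed.
 */
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("input.c", "r");   /* assumed input file name */
    if (fp == NULL) {
        printf("Cannot open input.c\n");
        return 1;
    }

    int c, next;
    while ((c = fgetc(fp)) != EOF) {
        if (c == '/') {
            next = fgetc(fp);
            if (next == '/') {
                /* single-line comment: skip until end of line */
                while ((c = fgetc(fp)) != EOF && c != '\n')
                    ;
                if (c == '\n')
                    putchar('\n');
            } else if (next == '*') {
                /* multi-line comment: skip until the closing star-slash */
                int prev = 0;
                while ((c = fgetc(fp)) != EOF) {
                    if (prev == '*' && c == '/')
                        break;
                    prev = c;
                }
            } else {
                /* not a comment: keep the '/' and the following character */
                putchar(c);
                if (next != EOF)
                    putchar(next);
            }
        } else {
            putchar(c);
        }
    }

    fclose(fp);
    return 0;
}

Note that this sketch does not guard against comment-like sequences inside string literals; handling those would require also tracking whether the scanner is currently inside a quoted string.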
Output:
Discussion:
From this experiment we can understand how to remove single-line and
multi-line comments from a program.
Experiment No - 02
Title:
Write a C program to separate tokens in a compiler.
Learning Outcome:
After completing this experiment, I will know how to separate tokens using
a lexical analyzer in a compiler.
Theory:
Lexical analysis is the first phase of a compiler. It takes the modified
source code from the language preprocessor, which is written in the form of
sentences. The lexical analyzer breaks this syntax into a series of tokens,
removing any whitespace and comments from the source code.
If the lexical analyzer finds an invalid token, it generates an error. The
lexical analyzer works closely with the syntax analyzer. It reads character
streams from the source code, checks for legal tokens, and passes the data
to the syntax analyzer when it is demanded.
Tokens
Lexemes are said to be a sequence of characters (alphanumeric) in a
token. There are some predefined rules for every lexeme to be identified
as a valid token. These rules are defined by grammar rules, by means of
a pattern. A pattern explains what can be a token, and these patterns are
defined by means of regular expressions.
In a programming language, keywords, constants, identifiers, strings,
numbers, operators, and punctuation symbols can be considered tokens.
For example, in the C language, the variable declaration line
int value = 100;
contains the tokens:
int (keyword), value (identifier), = (operator), 100 (constant) and ;
(symbol).
Special Symbols
A typical high-level language contains the following symbols:
Arithmetic Symbols : Addition(+), Subtraction(-), Modulo(%), Multiplication(*), Division(/)
Punctuation        : Comma(,), Semicolon(;), Dot(.), Arrow(->)
Assignment         : =
Special Assignment : +=, /=, *=, -=
Comparison         : ==, !=, <, <=, >, >=
Preprocessor       : #
Location Specifier : &
Logical            : &, &&, |, ||, !
Shift Operator     : >>, >>>, <<, <<<
Roles of the Lexical Analyzer
The lexical analyzer performs the tasks given below:
- Helps to identify tokens and enter them into the symbol table
- Removes white space and comments from the source program
- Correlates error messages with the source program
- Helps to expand macros if they are found in the source program
- Reads input characters from the source program
Procedure:
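A minimal sketch of a token-separation program is given below (an illustrative version, assuming the hard-coded sample statement "int value = 100;" and a small keyword table); it classifies each lexeme as a keyword, identifier, constant, or operator/symbol.

/*
 * Minimal sketch of token separation on a hard-coded statement.
 */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* a small keyword table; real compilers keep such entries in a symbol table */
static const char *keywords[] = { "int", "float", "char", "if", "else",
                                  "while", "for", "return" };

static int is_keyword(const char *word)
{
    for (size_t i = 0; i < sizeof(keywords) / sizeof(keywords[0]); i++)
        if (strcmp(word, keywords[i]) == 0)
            return 1;
    return 0;
}

int main(void)
{
    const char *input = "int value = 100;";  /* assumed sample input */
    char buffer[64];
    int len = 0;

    for (int i = 0; ; i++) {
        char ch = input[i];

        if (isalnum((unsigned char)ch)) {
            buffer[len++] = ch;             /* build up the current lexeme */
        } else {
            if (len > 0) {                  /* a lexeme has just ended */
                buffer[len] = '\0';
                if (is_keyword(buffer))
                    printf("%-10s : keyword\n", buffer);
                else if (isdigit((unsigned char)buffer[0]))
                    printf("%-10s : constant\n", buffer);
                else
                    printf("%-10s : identifier\n", buffer);
                len = 0;
            }
            if (ch == '\0')
                break;
            if (!isspace((unsigned char)ch))
                printf("%-10c : operator/symbol\n", ch);
        }
    }
    return 0;
}

With the assumed input, the sketch separates the same tokens listed in the theory section: int as a keyword, value as an identifier, = as an operator, 100 as a constant, and ; as a symbol.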
Output:
Discussion:
From this experiment we can understand how to separate tokens in a
compiler using C programming.
Experiment No: 03
Title:
Write a C program to implement production rules.
Objective:
In this experiment I will be working with syntax analysis, which is a very
important part of the compiling process. In this experiment we will learn
how to construct production rules in a compiler.
Theory:
The productions of a grammar specify the manner in which the terminals
and non-terminals can be combined to form strings. Each production
consists of a non-terminal called the left side of the production, an arrow,
and a sequence of tokens and/or non-terminals, called the right side of the
production.
Procedure:
Here is the code to construct production rules in a compiler:
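A minimal illustrative sketch is given below (assuming a small hard-coded expression grammar rather than the original lab listing); it stores each production as a string and prints the left side (non-terminal) and the right side of every rule.

/*
 * Minimal sketch: print the left and right sides of a few production rules.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* assumed sample grammar: E -> E+T | T, T -> T*F | F, F -> (E) | id */
    const char *productions[] = {
        "E->E+T", "E->T",
        "T->T*F", "T->F",
        "F->(E)", "F->id"
    };
    int n = sizeof(productions) / sizeof(productions[0]);

    printf("Production rules of the grammar:\n");
    for (int i = 0; i < n; i++) {
        /* everything before "->" is the left side; the rest is the right side */
        const char *arrow = strstr(productions[i], "->");
        int lhs_len = (int)(arrow - productions[i]);

        printf("Rule %d: left side = %.*s, right side = %s\n",
               i + 1, lhs_len, productions[i], arrow + 2);
    }
    return 0;
}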
Result:
Discussion:
From this experiment we can understand how to construct production rules
in a compiler using C programming.
Experiment No: 04
Title:
Write a C program to implement regular expressions.
Objective:
In this experiment we will learn how to implement regular expressions in a
compiler using C programming.
Theory:
A regular expression is an important notation for specifying patterns. Each
pattern matches a set of strings, so regular expressions serve as names for
sets of strings. Programming language tokens can be described by regular
languages. The specification of regular expressions is an example of a
recursive definition. Regular languages are easy to understand and have
efficient implementations.
Operations
The various operations on languages are:
- The union of two languages L and M is written as L ∪ M = {s | s is in L or s is in M}.
- The concatenation of two languages L and M is written as LM = {st | s is in L and t is in M}.
- The Kleene closure of a language L is written as L* = zero or more occurrences of the language L.
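For example, if L = {a, b} and M = {c}, then L ∪ M = {a, b, c}, LM = {ac, bc}, and L* = {ε, a, b, aa, ab, ba, bb, ...}.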
Procedure:
Here is the code to implement a regular expression in a compiler:
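A minimal illustrative sketch is given below (assuming the regular expression (a|b)*abb and a sample input string, rather than the original lab listing); it runs the equivalent DFA over the input and reports whether the string is accepted.

/*
 * Minimal sketch: check whether a string over {a, b} is matched by the
 * regular expression (a|b)*abb by simulating the equivalent DFA.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* DFA transition table for (a|b)*abb; state 3 is the accepting state */
    int move[4][2] = {
        /*        on 'a'  on 'b' */
        /* 0 */ {   1,      0   },
        /* 1 */ {   1,      2   },
        /* 2 */ {   1,      3   },
        /* 3 */ {   1,      0   }
    };

    const char *input = "aababb";   /* assumed sample input string */
    int state = 0;

    for (size_t i = 0; i < strlen(input); i++) {
        if (input[i] == 'a')
            state = move[state][0];
        else if (input[i] == 'b')
            state = move[state][1];
        else {
            printf("Invalid symbol '%c' (alphabet is {a, b})\n", input[i]);
            return 1;
        }
    }

    if (state == 3)
        printf("\"%s\" is accepted by (a|b)*abb\n", input);
    else
        printf("\"%s\" is not accepted by (a|b)*abb\n", input);
    return 0;
}

With the assumed input "aababb", the sketch reports that the string is accepted.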
Output: