Programming Language Fundamentals

This document discusses the evolution of programming languages from early machine languages to modern high-level languages. It covers several important early languages including FORTRAN, LISP, ALGOL 58 and ALGOL 60. FORTRAN was one of the first high-level languages and helped popularize computing. LISP pioneered functional programming and is still dominant in AI. The ALGOL languages aimed to provide a universal language for describing algorithms in a way close to mathematical notation. Overall, it traces the progression from low-level to high-level languages and how early programming needs and computer capabilities influenced language design.

PRINCIPLES OF PROGRAMMING LANGUAGES

UNIT 1

SYNTAX AND SEMANTICS

Evolution of programming languages – describing syntax – context-free grammars – attribute grammars – describing semantics – lexical analysis – parsing – recursive-descent – bottom-up parsing

Programming Languages:

A programming language is a computer language used by programmers (developers) to communicate with computers. It is a set of instructions, written in a specific language (C, C++, Java, Python), to perform a specific task.

Types of programming languages:

1. Low-level programming language

A low-level language is a machine-dependent (0s and 1s) programming language. The processor runs low-level programs directly, without the need of a compiler or interpreter, so programs written in a low-level language run very fast. Low-level language is further divided into two parts -

i. Machine Language

Machine language is a type of low-level programming language. It is also called machine code or object code. Machine language is normally written in binary or hexadecimal (base 16) form. It does not require a translator to convert the programs, because computers understand machine language programs directly.

The advantage of machine language is that programs execute faster than programs written in a high-level programming language.

ii. Assembly Language

Assembly language (ASM) is also a type of low-level programming language that is designed for
specific processors. It represents the set of instructions in a symbolic and human-understandable form.
It uses an assembler to convert the assembly language to machine language.

The advantage of assembly language is that it requires less memory and less execution time to execute a
program.

2. High-level programming language

A high-level programming language (HLL) is designed for developing user-friendly software programs and websites. This programming language requires a compiler or interpreter to translate the program into machine language before it can be executed.
The main advantage of a high-level language is that it is easy to read, write, and maintain.

High-level programming languages include Python, Java, JavaScript, PHP, C#, C++, Objective-C, COBOL, Perl, Pascal, LISP, FORTRAN, and Swift.

A high-level language is further divided into three parts -

i. Procedural Oriented programming language

Procedural Oriented Programming (POP) language is derived from structured programming and
based upon the procedure call concept. It divides a program into small procedures called routines or
functions.

Procedural Oriented programming language is used by a software programmer to create a program, typically with a programming editor or IDE such as Adobe Dreamweaver or Microsoft Visual Studio.

The advantage of POP language is that it helps programmers to easily track the program flow and code
can be reused in different parts of the program.

Example: C, FORTRAN, Basic, Pascal, etc.

ii. Object-Oriented Programming language

Object-Oriented Programming (OOP) language is based upon objects. In this programming language, programs are divided into small parts called objects. It is used to implement real-world entities like inheritance, polymorphism, and abstraction in the program, to make the program reusable, efficient, and easy to use.

The main advantage of object-oriented programming is that OOP is faster and easier to execute, maintain,
modify, as well as debug.

Example: C++, Java, Python, C#, etc.

iii. Natural language

Natural language is a part of human languages such as English, Russian, German, and Japanese. It is used by machines to understand, manipulate, and interpret human language. Developers use it to perform tasks such as translation, automatic summarization, Named Entity Recognition (NER), relationship extraction, and topic segmentation.

The main advantage of natural language is that it lets users ask questions on any subject and get a direct response within seconds.

3. Middle-level programming language

Middle-level programming language lies between the low-level programming language and
high-level programming language. It is also known as the intermediate programming language and
pseudo-language.
A middle-level programming language's advantages are that it supports the features of high-level programming, it is a user-friendly language, and it is closely related to both machine language and human language.

Example: C, C++

Evolution of Programming Languages:


1. Plankalkül - 1945

- Never implemented

- Advanced data structures

- floating point, arrays, records

- Invariants

- Notation:

A(7) := 5 * B(6)

| 5 * B => A

V | 6 7 (subscripts)

S | 1.n 1.n (data types)

2. Pseudocodes - 1949
Disadvantage of using machine code

a. Poor readability

b. Poor modifiability

c. Expression coding was tedious

d. Machine deficiencies--no indexing or fl. pt.

Short Code:

- 1949; BINAC; Mauchly

- Expressions were coded, left to right

- Some operations:

1n => (n+2)nd power

2n => (n+2)nd root

07 => addition

Speedcoding
- 1954; IBM 701, Backus

- Pseudo ops for arithmetic and math functions

- Conditional and unconditional branching

- Autoincrement registers for array access

- Slow!

- Only 700 words left for user program


3. Laning and Zierler System - 1953

- Implemented on the MIT Whirlwind computer

- First "algebraic" compiler system

- Subscripted variables, function calls, expression translation

- Never ported to any other machine

4. FORTRAN I - 1957

(FORTRAN 0 - 1954 - not implemented)

- Designed for the new IBM 704, which had index registers and floating point hardware

- Environment of development:

1. Computers were small and unreliable

2. Applications were scientific

3. No programming methodology or tools

4. Machine efficiency was most important

- Impact of environment on design

1. No need for dynamic storage

2. Need good array handling and counting loops

3. No string handling, decimal arithmetic, or powerful input/output (commercial stuff)

- First implemented version of FORTRAN

- Names could have up to six characters

- Posttest counting loop (DO)

- Formatted i/o

- User-defined subprograms

- Three-way selection statement (arithmetic IF)


- No data typing statements

- No separate compilation

- Compiler released in April 1957, after 18 worker-years of effort

- Programs larger than 400 lines rarely compiled correctly, mainly due to poor reliability of the
704

- Code was very fast

- Quickly became widely used

5. FORTRAN II - 1958

- Independent compilation
- Fix the bugs

6. FORTRAN IV - 1960-62

- Explicit type declarations

- Logical selection statement

- Subprogram names could be parameters

- ANSI standard in 1966

7. FORTRAN 77 - 1978

- Character string handling

- Logical loop control statement

- IF-THEN-ELSE statement

8. FORTRAN 90 - 1990
- Modules

- Dynamic arrays

- Pointers

- Recursion

- CASE statement

- Parameter type checking

FORTRAN Evaluation

- Dramatically changed forever the way computers are used


9. LISP - 1959

- LISt Processing language

(Designed at MIT by McCarthy)

- AI research needed a language that could:

1. Process data in lists (rather than arrays)

2. Perform symbolic computation (rather than numeric)


- Only two data types: atoms and lists

- Syntax is based on lambda calculus

- Pioneered functional programming

- No need for variables or assignment

- Control via recursion and conditional expressions

- Still the dominant language for AI

- COMMON LISP and Scheme are contemporary dialects of LISP

- ML, Miranda, and Haskell are related languages

10. ALGOL 58 - 1958

- Environment of development:

1. FORTRAN had (barely) arrived for IBM 70x

2. Many other languages were being developed, all for specific machines

3. No portable language; all were machine-dependent

4. No universal language for communicating algorithms

11. ALGOL 58 (continued)

- ACM and GAMM met for four days for design

- Goals of the language:

1. Close to mathematical notation

2. Good for describing algorithms

3. Must be translatable to machine code

- Language Features:

- Concept of type was formalized

- Names could have any length


- Arrays could have any number of subscripts

- Parameters were separated by mode (in & out)

- Subscripts were placed in brackets

- Compound statements (begin ... end)

- Semicolon as a statement separator

- Assignment operator was :=


- if had an else-if clause

- Comments:

- Not meant to be implemented, but variations of it were (MAD, JOVIAL)

- Although IBM was initially enthusiastic, all support was dropped by mid-1959

12. ALGOL 60 - 1960

- Modified ALGOL 58 at 6-day meeting in Paris

- New Features:

- Block structure (local scope)

- Two parameter passing methods

- Subprogram recursion

- Stack-dynamic arrays

- Still no i/o and no string handling

- Successes:

- It was the standard way to publish algorithms for over 20 years

- All subsequent imperative languages are based on it

- First machine-independent language

- First language whose syntax was formally defined (BNF)

- Failure:

- Never widely used, especially in U.S.

Reasons:

1. No i/o and the character set made programs nonportable

2. Too flexible--hard to implement

3. Entrenchment of FORTRAN
4. Formal syntax description

5. Lack of support of IBM

13. COBOL - 1960

- Environment of development:

- UNIVAC was beginning to use FLOW-MATIC

- USAF was beginning to use AIMACO


- IBM was developing COMTRAN

- Based on FLOW-MATIC

- FLOW-MATIC features:

- Names up to 12 characters, with embedded hyphens

- English names for arithmetic operators

- Data and code were completely separate

- Verbs were first word in every statement

- First Design Meeting - May 1959

- Design goals:

1. Must look like simple English

2. Must be easy to use, even if that means it will be less powerful

3. Must broaden the base of computer users

4. Must not be biased by current compiler problems

- Design committee members were all from computer manufacturers and DoD branches

- Design Problems: arithmetic expressions? subscripts? Fights among manufacturers

- Contributions:

- First macro facility in a high-level language

- Hierarchical data structures (records)

- Nested selection statements

- Long names (up to 30 characters), with hyphens

- Data Division

- Comments:

- First language required by DoD; would have failed without DoD


- Still the most widely used business applications language

14. BASIC - 1964

- Designed by Kemeny & Kurtz at Dartmouth

- Design Goals:

- Easy to learn and use for non-science students

- Must be "pleasant and friendly"


- Fast turnaround for homework

- Free and private access

- User time is more important than computer time

- Current popular dialects: QuickBASIC and Visual BASIC

15. PL/I - 1965

- Designed by IBM and SHARE

- Computing situation in 1964 (IBM's point of view)

1. Scientific computing

- IBM 1620 and 7090 computers

- FORTRAN

- SHARE user group

2. Business computing

- IBM 1401, 7080 computers

- COBOL

- GUIDE user group

- By 1963, however, scientific users began to need more elaborate i/o, like COBOL had; business users began to need fl. pt. and arrays (MIS)

- It looked like many shops would begin to need two kinds of computers, languages, and support staff--too costly

- The obvious solution:

1. Build a new computer to do both kinds of applications

2. Design a new language to do both kinds of applications

- PL/I contributions:
1. First unit-level concurrency

2. First exception handling

3. Switch-selectable recursion

4. First pointer data type

5. First array cross sections

- Comments:
- Many new features were poorly designed

- Too large and too complex

- Was (and still is) actually used for both scientific and business applications

16. Early Dynamic Languages

- Characterized by dynamic typing and dynamic storage allocation

- APL (A Programming Language) 1962

- Designed as a hardware description language (at IBM by Ken Iverson)

- Highly expressive (many operators, for both scalars and arrays of various dimensions)

- Programs are very difficult to read

- SNOBOL(1964)

- Designed as a string manipulation language (at Bell Labs by Farber, Griswold, and Polensky)

- Powerful operators for string pattern matching

17. SIMULA 67 - 1967

- Designed primarily for system simulation (in Norway by Nygaard and Dahl)

- Based on ALGOL 60 and SIMULA I

- Primary Contribution:

- Coroutines - a kind of subprogram

- Implemented in a structure called a class

- Classes are the basis for data abstraction

- Classes are structures that include both local data and functionality

18. ALGOL 68 - 1968

- From the continued development of ALGOL 60, but it is not a superset of that language

- Design is based on the concept of orthogonality


- Contributions:

1. User-defined data structures

2. Reference types

3. Dynamic arrays (called flex arrays)

- Comments:

- Had even less usage than ALGOL 60


- Had strong influence on subsequent languages, especially Pascal, C, and Ada

19. Pascal - 1971

- Designed by Wirth, who quit the ALGOL 68 committee (didn't like the direction of that work)

- Designed for teaching structured programming

- Small, simple, nothing really new

- Still the most widely used language for teaching programming in colleges (but use is shrinking)

20. C - 1972

- Designed for systems programming (at Bell Labs by Dennis Ritchie)

- Evolved primarily from B, but also ALGOL 68

- Powerful set of operators, but poor type checking

- Initially spread through UNIX

21. Other descendants of ALGOL

- Modula-2 (mid-1970s by Niklaus Wirth at ETH)

- Pascal plus modules and some low-level features designed for systems programming

- Modula-3 (late 1980s at Digital & Olivetti)

- Modula-2 plus classes, exception handling, garbage collection, and concurrency

- Oberon (late 1980s by Wirth at ETH)

- Adds support for OOP to Modula-2

- Many Modula-2 features were deleted (e.g., for statement, enumeration types, with statement,
noninteger array indices)

- Delphi (Borland)

- Pascal plus features to support OOP

- More elegant and safer than C++


22. Prolog - 1972

- Developed at the University of Aix-Marseille by Colmerauer and Roussel, with some help from Kowalski at the University of Edinburgh

- Based on formal logic

- Non-procedural

- Can be summarized as being an intelligent database system that uses an inferencing process to infer the truth of given queries

23. Ada - 1983 (began in mid-1970s)

- Huge design effort, involving hundreds of people, much money, and about eight years

- Contributions:

1. Packages - support for data abstraction

2. Exception handling - elaborate

3. Generic program units

4. Concurrency - through the tasking model

- Comments:

- Competitive design

- Included all that was then known about software engineering and language design

- First compilers were very difficult; the first really usable compiler came nearly five years after the language design was completed


- Ada 95 (began in 1988)

- Support for OOP through type derivation

- Better control mechanisms for shared data (new concurrency features)

- More flexible libraries

24. Smalltalk - 1972-1980

- Developed at Xerox PARC, initially by Alan Kay, later by Adele Goldberg

- First full implementation of an object-oriented language (data abstraction, inheritance, and dynamic type binding)

- Pioneered the graphical user interface everyone now uses


25. C++ - 1985

- Developed at Bell Labs by Stroustrup

- Evolved from C and SIMULA 67

- Facilities for object-oriented programming, taken partially from SIMULA 67, were added to C

- Also has exception handling

- A large and complex language, in part because it supports both procedural and OO
programming

- Rapidly grew in popularity, along with OOP

- ANSI standard approved in November, 1997


- Eiffel - a related language that supports OOP

- (Designed by Bertrand Meyer - 1992)

- Not directly derived from any other language

- Smaller and simpler than C++, but still has most of the power

26. Java (1995)

- Developed at Sun in the early 1990s

- Based on C++

- Significantly simplified

- Supports only OOP

- Has references, but not pointers

- Includes support for applets and a form of concurrency


Describing Syntax:

Syntax - the form or structure of the expressions, statements, and program units
Syntax provides a language definition for:
1. Other language designers
2. Implementors
3. Programmers (the users of the language)
General Problem of Describing Syntax
A sentence is a string of characters over some alphabet
A language is a set of sentences
A lexeme is the lowest level syntactic unit of a language (e.g., *, sum, begin)
A token is a category of lexemes (e.g., identifier)
Example
Index = 2 * count +17;
Refer Written Notes
1. Recognizers - used in compilers
Determine whether an input string belongs to the language
Example: the syntax analysis part of a compiler
2. Generators - what we'll study
Generate the sentences of a language.
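For example, a lexical analyzer would break the statement Index = 2 * count + 17; above into lexemes and assign each a token category. A minimal sketch in Python (not from the notes; the token names and regular expressions are illustrative):

```python
import re

# Token categories for the example statement; the names and patterns
# are illustrative, not taken from any particular compiler.
TOKEN_SPEC = [
    ("int_literal", r"\d+"),
    ("identifier",  r"[A-Za-z_]\w*"),
    ("assign_op",   r"="),
    ("mult_op",     r"\*"),
    ("plus_op",     r"\+"),
    ("semicolon",   r";"),
    ("skip",        r"\s+"),          # whitespace, discarded
]

def tokenize(source):
    """Return (token, lexeme) pairs; no error handling for illegal characters."""
    pattern = "|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC)
    return [(m.lastgroup, m.group())
            for m in re.finditer(pattern, source)
            if m.lastgroup != "skip"]

print(tokenize("Index = 2 * count + 17;"))
```

Running this on the example pairs each lexeme (Index, =, 2, *, count, +, 17, ;) with its token category.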
Context-Free Grammars:
- Developed by Noam Chomsky in the mid-1950s
- Language generators, meant to describe the syntax of natural languages
- Define a class of languages called context-free languages
Backus Normal Form (BNF, 1959)
- Invented by John Backus to describe Algol 58
- BNF is equivalent to context-free grammars
- A metalanguage is a language used to describe another language.
- In BNF, abstractions are used to represent classes of syntactic structures--they act like syntactic variables (also called nonterminal symbols)
e.g.
<while_stmt> -> while <logic_expr> do <stmt>
This is a rule; it describes the structure of a while statement
BNF Fundamentals:
Kindly Refer Written Notes
BNF Rules:
- A rule has a left-hand side (LHS) and a right-hand side (RHS), and consists of terminal and
nonterminal symbols
- A grammar is a finite nonempty set of rules
- An abstraction (or nonterminal symbol) can have more than one RHS
<stmt> -> <single_stmt>
| begin <stmt_list> end
Describing Lists
Syntactic lists are described in BNF using recursion
<ident_list> -> ident | ident, <ident_list>
A derivation is a repeated application of rules, starting with the start symbol and ending with a
sentence (all terminal symbols)
An example grammar:
<program> -> <stmts>
<stmts> -> <stmt> | <stmt> ; <stmts>
<stmt> -> <var> = <expr>
<var> -> a | b | c | d
<expr> -> <term> + <term> | <term> - <term>
<term> -> <var> | const
An example derivation:
<program> => <stmts> => <stmt>
=> <var> = <expr> => a = <expr>
=> a = <term> + <term>
=> a = <var> + <term>
=> a = b + <term>
=> a = b + const
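The derivation above can be checked mechanically. A small recursive-descent recognizer for this example grammar, sketched in Python (our own illustration; the function names follow the nonterminals):

```python
def parse(tokens):
    """Recursive-descent recognizer for the example grammar; `tokens` is a
    list of terminal symbols such as ["a", "=", "b", "+", "const"].
    Returns True if the list is a sentence; raises SyntaxError otherwise."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def advance():
        nonlocal pos
        pos += 1

    def var():                      # <var> -> a | b | c | d
        if peek() not in ("a", "b", "c", "d"):
            raise SyntaxError(f"variable expected, got {peek()!r}")
        advance()

    def term():                     # <term> -> <var> | const
        if peek() == "const":
            advance()
        else:
            var()

    def expr():                     # <expr> -> <term> + <term> | <term> - <term>
        term()
        if peek() not in ("+", "-"):
            raise SyntaxError("'+' or '-' expected")
        advance()
        term()

    def stmt():                     # <stmt> -> <var> = <expr>
        var()
        if peek() != "=":
            raise SyntaxError("'=' expected")
        advance()
        expr()

    def stmts():                    # <stmts> -> <stmt> | <stmt> ; <stmts>
        stmt()
        if peek() == ";":
            advance()
            stmts()

    stmts()                         # <program> -> <stmts>
    if pos != len(tokens):
        raise SyntaxError("extra input after program")
    return True

print(parse(["a", "=", "b", "+", "const"]))   # the sentence derived above
```

Note that, per this grammar, an <expr> must contain exactly one + or -, so "a = b" is correctly rejected.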
Derivation
- Every string of symbols in the derivation is a sentential form
- A sentence is a sentential form that has only terminal symbols
- A leftmost derivation is one in which the leftmost nonterminal in each sentential form is the one
that is expanded
- A derivation may be neither leftmost nor rightmost
Ambiguity in grammars
- A grammar is ambiguous iff it generates a sentential form that has two or more distinct parse
trees
- A parse tree is a hierarchical representation of a derivation
An ambiguous expression grammar:
<expr> -> <expr> <op> <expr> | const
<op> -> / | -
(Parse-tree drawings omitted: the sentence const - const / const has two distinct parse trees under this grammar, one grouping it as const - (const / const) and the other as (const - const) / const.)
- If we use the parse tree to indicate precedence levels of the operators, we cannot have ambiguity
An unambiguous expression grammar:
<expr> -> <expr> - <term> | <term>
<term> -> <term> / const | const
(Parse-tree drawing omitted: under this grammar, const - const / const has a unique parse tree, with / appearing lower in the tree, i.e., with higher precedence.)

<expr> => <expr> - <term> => <term> - <term>
=> const - <term>
=> const - <term> / const
=> const - const / const

Associativity of operators
Operator associativity can also be indicated by a grammar
<expr> -> <expr> + <expr> | const (ambiguous)
<expr> -> <expr> + const | const (unambiguous)
(Parse-tree drawing omitted: with the unambiguous rule, the parse tree for const + const + const grows down to the left, reflecting left associativity.)

Extended BNF (just abbreviations):


1. Optional parts are placed in brackets ([])
<proc_call> -> ident [ ( <expr_list>)]
2. Put alternative parts of RHSs in parentheses and separate them with vertical bars
<term> -> <term> (+ | -) const
3. Put repetitions (0 or more) in braces ({})
<ident> -> letter {letter | digit}
BNF:
<expr> -> <expr> + <term>
| <expr> - <term>
| <term>
<term> -> <term> * <factor>
| <term> / <factor>
| <factor>
EBNF:
<expr> -> <term> {(+ | -) <term>}
<term> -> <factor> {(* | /) <factor>}
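Because EBNF turns left-recursive rules into repetitions, each {...} becomes a loop in a recursive-descent parser, and the loop naturally yields left associativity. A sketch in Python (our own illustration: an evaluator in which a <factor> is simplified to an integer literal, and / is treated as integer division):

```python
def evaluate(tokens):
    """Expression evaluator following the EBNF above: each {...}
    repetition becomes a while loop, giving left-associative operators."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def advance():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def factor():                   # simplified: <factor> -> integer literal
        return int(advance())

    def term():                     # <term> -> <factor> {(* | /) <factor>}
        value = factor()
        while peek() in ("*", "/"):
            if advance() == "*":
                value *= factor()
            else:
                value //= factor()
        return value

    def expr():                     # <expr> -> <term> {(+ | -) <term>}
        value = term()
        while peek() in ("+", "-"):
            if advance() == "+":
                value += term()
            else:
                value -= term()
        return value

    return expr()

print(evaluate(["8", "-", "2", "-", "1"]))   # left associative: (8 - 2) - 1 = 5
print(evaluate(["2", "+", "3", "*", "4"]))   # * handled inside <term>: 2 + 12 = 14
```

Precedence falls out of the grammar's structure: <term> is parsed below <expr>, so * and / bind tighter than + and -.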
Attribute Grammars (AGs) (Knuth, 1968)
- CFGs cannot describe all of the syntax of programming languages
- Additions to CFGs to carry some semantic info along through parse trees
Primary value of AGs:
1. Static semantics specification
2. Compiler design(static semantics checking)
Definition
An attribute grammar is a CFG G = (S, N, T, P) with the following additions:
1. For each grammar symbol x there is a set A(x) of attribute values
2. Each rule has a set of functions that define certain attributes of the nonterminals in the rule
3. Each rule has a (possibly empty) set of predicates to check for attribute consistency
- Let X0 -> X1 ... Xn be a rule.
- Functions of the form S(X0) = f(A(X1), ... A(Xn)) define synthesized attributes
- Functions of the form I(Xj) = f(A(X0), ... , A(Xn)), for 1 <= j <= n, define inherited attributes
- Initially, there are intrinsic attributes on the leaves

Example: expressions of the form id + id


- id's can be either int_type or real_type
- the types of the two id's must be the same
- the type of the expression must match its expected type
BNF:
<expr> -> <var> + <var>
<var> -> id
Attributes:
actual_type - synthesized for <var> and <expr>
expected_type - inherited for <expr>

Attribute Grammar:
1. Syntax rule: <expr> -> <var>[1] + <var>[2]
Semantic rules:
<var>[1].env ← <expr>.env
<var>[2].env ← <expr>.env
<expr>.actual_type ← <var>[1].actual_type
Predicate:
<var>[1].actual_type = <var>[2].actual_type
<expr>.expected_type = <expr>.actual_type

2. Syntax rule: <var> -> id


Semantic rule:
<var>.actual_type ← lookup(id, <var>.env)

How are attribute values computed?


1. If all attributes were inherited, the tree could be decorated in top-down order.
2. If all attributes were synthesized, the tree could be decorated in bottom-up order.
3. In many cases, both kinds of attributes are used, and it is some combination of top-down and
bottom-up that must be used.

1. <expr>.env ← inherited from parent
<expr>.expected_type ← inherited from parent

2. <var>[1].env ← <expr>.env
<var>[2].env ← <expr>.env

3. <var>[1].actual_type ← lookup (A, <var>[1].env)
<var>[2].actual_type ← lookup (B, <var>[2].env)
<var>[1].actual_type =? <var>[2].actual_type

4. <expr>.actual_type ← <var>[1].actual_type
<expr>.actual_type =? <expr>.expected_type
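The decoration steps above can be sketched as a small Python type checker for id + id (a hypothetical illustration; the env dictionary stands in for the symbol table):

```python
def decorate(left_id, right_id, env, expected_type):
    """Type-check  <expr> -> <var>[1] + <var>[2]  using the rules above.
    `env` maps identifiers to declared types (a stand-in for the symbol
    table); `expected_type` is the inherited attribute of <expr>."""
    left_type = env[left_id]            # <var>[1].actual_type <- lookup(...)
    right_type = env[right_id]          # <var>[2].actual_type <- lookup(...)
    if left_type != right_type:         # predicate: operand types must match
        raise TypeError("operand types differ")
    actual_type = left_type             # <expr>.actual_type is synthesized
    if actual_type != expected_type:    # predicate: matches inherited attribute
        raise TypeError("expression type does not match expected type")
    return actual_type

env = {"A": "int_type", "B": "int_type"}
print(decorate("A", "B", env, "int_type"))   # int_type
```

The env attribute is inherited (passed down into the call), while actual_type is synthesized (returned up), mirroring the two attribute kinds.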

Semantics
Semantics - the meaning of the expressions, statements, and program units
Describing Semantics:
- No single widely acceptable notation or formalism for describing semantics
I. Operational Semantics
- Describe the meaning of a program by executing its statements on a machine, either simulated
or actual. The change in the state of the machine (memory, registers, etc.) defines the meaning of
the statement
- To use operational semantics for a high-level language, a virtual machine is needed
- A hardware pure interpreter would be too expensive
- A software pure interpreter also has problems:
1. The detailed characteristics of the particular computer would make actions difficult to
understand
2. Such a semantic definition would be machine-dependent
- A better alternative: A complete computer simulation
- The process:
1. Build a translator (translates source code to the machine code of an idealized
computer)
2. Build a simulator for the idealized computer
- Evaluation of operational semantics:
- Good if used informally
- Extremely complex if used formally (e.g., VDL)
II. Axiomatic Semantics
- Based on formal logic (first order predicate calculus)
- Original purpose: formal program verification
- Approach: Define axioms or inference rules for each statement type in the language (to allow
transformations of expressions to other expressions)
- The expressions are called assertions
- An assertion before a statement (a precondition) states the relationships and constraints among
variables that are true at that point in execution
- An assertion following a statement is a postcondition
- A weakest precondition is the least restrictive precondition that will guarantee the postcondition
- Pre-post form: {P} statement {Q}
- An example: a := b + 1 {a > 1}
One possible precondition: {b > 10}
Weakest precondition: {b > 0}
Program proof process: The postcondition for the whole program is the desired results. Work back
through the program to the first statement. If the precondition on the first statement is the same as
the program spec, the program is correct.
- An axiom for assignment statements:
{Qx→E} x := E {Q}
where Qx→E denotes Q with every occurrence of x replaced by E
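The assignment axiom says the weakest precondition is obtained by substituting E for x in Q. A crude textual sketch in Python (our own illustration: whole-word string substitution only, not a real formula manipulator):

```python
import re

def wp_assign(postcondition, x, e):
    """Weakest precondition of `x := e` with respect to `postcondition`:
    replace every whole-word occurrence of x by e (parenthesized)."""
    return re.sub(rf"\b{re.escape(x)}\b", f"({e})", postcondition)

# The example from the text: a := b + 1 with postcondition {a > 1}
print(wp_assign("a > 1", "a", "b + 1"))   # "(b + 1) > 1", i.e. b > 0
```

The result (b + 1) > 1 simplifies to b > 0, the weakest precondition given earlier.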
- The Rule of Consequence (premises above the line, conclusion below):
{P} S {Q}, P' => P, Q => Q'
---------------------------
{P'} S {Q'}
- An inference rule for sequences
- For a sequence S1;S2:
{P1} S1 {P2}
{P2} S2 {P3}
the inference rule is:
{P1} S1 {P2}, {P2} S2 {P3}
--------------------------
{P1} S1; S2 {P3}
- An inference rule for logical pretest loops
For the loop construct:
{P} while B do S end {Q}
the inference rule is:
{I and B} S {I}
------------------------------------
{I} while B do S end {I and (not B)}
where I is the loop invariant.
Characteristics of the loop invariant
I must meet the following conditions:
1. P => I (the loop invariant must be true initially)
2. {I} B {I} (evaluation of the Boolean must not change the validity of I)
3. {I and B} S {I} (I is not changed by executing the body of the loop)
4. (I and (not B)) => Q (if I is true and B is false, Q is implied)
5. The loop terminates (this can be difficult to prove)
- The loop invariant I is a weakened version of the loop postcondition, and it is also a precondition.
- I must be weak enough to be satisfied prior to the beginning of the loop, but when combined with the
loop exit condition, it must be strong enough to force the truth of the postcondition.
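The invariant conditions can be checked empirically (not proved) by asserting I around a concrete loop. A sketch in Python for a loop summing 1..n, with invariant I: s = i*(i+1)/2 and i <= n (our own example):

```python
def sum_to(n):
    """Sum 1..n, asserting the loop invariant
    I: s = i*(i+1)/2 and i <= n  at each check point."""
    def invariant(i, s):
        return s == i * (i + 1) // 2 and i <= n

    i, s = 0, 0
    assert invariant(i, s)           # P => I: invariant holds initially
    while i < n:                     # B: loop condition
        i += 1                       # S: loop body
        s += i
        assert invariant(i, s)       # {I and B} S {I}
    assert s == n * (n + 1) // 2     # (I and (not B)) => Q
    return s

print(sum_to(10))   # 55
```

On exit, i <= n together with not (i < n) forces i = n, so I implies the postcondition s = n*(n+1)/2; termination holds because n - i strictly decreases.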

Evaluation of axiomatic semantics:


1. Developing axioms or inference rules for all of the statements in a language is difficult
2. It is a good tool for correctness proofs, and an excellent framework for reasoning about
programs, but it is not as useful for language users and compiler writers
III. Denotational Semantics
- Based on recursive function theory
- The most abstract semantics description method
- Originally developed by Scott and Strachey (1970)
- The process of building a denotational spec for a language:
1. Define a mathematical object for each language entity
2. Define a function that maps instances of the language entities onto instances of the
corresponding mathematical objects
- The meaning of a language construct is defined by only the values of the program's variables
- The difference between denotational and operational semantics: in operational semantics, the state changes are defined by coded algorithms; in denotational semantics, they are defined by rigorous mathematical functions
- The state of a program is the values of all its
current variables
s = {<i1, v1>, <i2, v2>, …, <in, vn>}
- Let VARMAP be a function that, when given a variable name and a state, returns the current
value of the variable
VARMAP(ij, s) = vj
1. Decimal Numbers
<dec_num> → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
| <dec_num> (0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9)
Mdec('0') = 0, Mdec('1') = 1, …, Mdec('9') = 9
Mdec(<dec_num> '0') = 10 * Mdec(<dec_num>)
Mdec(<dec_num> '1') = 10 * Mdec(<dec_num>) + 1
…
Mdec(<dec_num> '9') = 10 * Mdec(<dec_num>) + 9
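The Mdec equations translate directly into a recursive function. A sketch in Python (the function name is our own):

```python
def m_dec(numeral):
    """Denotation of a decimal numeral, following the Mdec equations:
    a single digit denotes its digit value; otherwise
    Mdec(<dec_num> d) = 10 * Mdec(<dec_num>) + Mdec(d)."""
    if len(numeral) == 1:
        return "0123456789".index(numeral)
    return 10 * m_dec(numeral[:-1]) + "0123456789".index(numeral[-1])

print(m_dec("305"))   # 305
```

The last digit is peeled off first, exactly as the grammar rule <dec_num> (0 | ... | 9) suggests.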

2. Expressions
Me(<expr>, s) Δ=
case <expr> of
<dec_num> => Mdec(<dec_num>, s)
<var> =>
if VARMAP(<var>, s) = undef
then error
else VARMAP(<var>, s)
<binary_expr> =>
if (Me(<binary_expr>.<left_expr>, s) = undef
OR Me(<binary_expr>.<right_expr>, s) = undef)
then error
else
if (<binary_expr>.<operator> = '+') then
Me(<binary_expr>.<left_expr>, s) +
Me(<binary_expr>.<right_expr>, s)
else Me(<binary_expr>.<left_expr>, s) *
Me(<binary_expr>.<right_expr>, s)
3. Assignment Statements
Ma(x := E, s) Δ=
if Me(E, s) = error
then error
else s' = {<i1',v1'>, <i2',v2'>, ..., <in',vn'>},
where for j = 1, 2, ..., n,
vj' = VARMAP(ij, s) if ij <> x
= Me(E, s) if ij = x
4. Logical Pretest Loops
Ml(while B do L, s) Δ=
if Mb(B, s) = undef
then error
else if Mb(B, s) = false
then s
else if Msl(L, s) = error
then error
else Ml(while B do L, Msl(L, s))
The meaning of the loop is the value of the program variables after the statements in the loop
have been executed the prescribed number of times, assuming there have been no errors
- In essence, the loop has been converted from iteration to recursion, where the recursive control is
mathematically defined by other recursive state mapping functions
- Recursion, when compared to iteration, is easier to describe with mathematical rigor
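The state, VARMAP, Ma, and Ml definitions above can be animated in Python. In this sketch (our own encoding, not the notes'), a state is a dictionary, undef is None, and B and L are passed in as Python functions:

```python
ERROR = "error"        # our encoding of the error denotation

def varmap(name, state):
    """VARMAP(i, s): current value of `name` in `state`; None plays undef."""
    return state.get(name)

def m_a(x, e, state):
    """Ma(x := E, s): the state with x rebound to the value of E.
    E is encoded as a Python function from states to values."""
    value = e(state)
    if value is None or value is ERROR:
        return ERROR
    new_state = dict(state)          # s' agrees with s everywhere except at x
    new_state[x] = value
    return new_state

def m_l(b, body, state):
    """Ml(while B do L, s), written as recursion exactly as in the text."""
    test = b(state)
    if test is None:                 # Mb(B, s) = undef
        return ERROR
    if test is False:                # loop finished: the meaning is the state
        return state
    new_state = body(state)
    if new_state is ERROR:
        return ERROR
    return m_l(b, body, new_state)   # Ml(while B do L, Msl(L, s))

# while x > 0 do x := x - 1, started in state {x: 3}
final = m_l(lambda s: varmap("x", s) > 0,
            lambda s: m_a("x", lambda t: varmap("x", t) - 1, s),
            {"x": 3})
print(final)   # {'x': 0}
```

Each pass through the loop becomes one recursive call of m_l on the updated state, which is precisely the iteration-to-recursion conversion the text describes.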
Evaluation of denotational semantics:
- Can be used to prove the correctness of programs
- Provides a rigorous way to think about programs
- Can be an aid to language design
- Has been used in compiler generation systems

Kindly refer remaining topics in Written Notes

UNIT -2

DATA, DATA TYPES AND BASIC STATEMENTS

Names – variables – binding – type checking – scope – scope rules – lifetime and garbage collection –
primitive data types – strings – array types – associative arrays – record types – union types – pointers
and references – Arithmetic expressions – overloaded operators – type conversions – relational and
boolean expressions – assignment statements – mixed mode assignments – control structures – selection –
iterations – branching – guarded statements

2.1 Names
Design issues for names:
– Maximum length?
– Are connector characters allowed?
– Are names case sensitive?
– Are special words reserved words or keywords?
Length
– If too short, they cannot be connotative
– Language examples:
o FORTRAN I: maximum 6
o COBOL: maximum 30
o FORTRAN 90 and ANSI C: maximum 31
o Ada and Java: no limit, and all are significant
o C++: no limit, but implementors often impose one
Connectors
– Pascal, Modula-2, and FORTRAN 77 don't allow
– Others do
Case sensitivity
– Disadvantage: readability (names that look alike are different)
o worse in C++ and Java because predefined names are mixed case (e.g.,
IndexOutOfBoundsException)
– C, C++, and Java names are case sensitive
– The names in other languages are not
Special words
– An aid to readability; used to delimit or separate statement clauses
– Def: A keyword is a word that is special only in certain contexts
o e.g., in Fortran:
– Real VarName (Real is a data type followed by a name; here Real is
a keyword)
– Real = 3.4 (here Real is a variable)
– Disadvantage: poor readability
Definition: A reserved word is a special word that cannot be used as a user-defined
name
2.2 Variables:

o A variable is an abstraction of a memory cell


o Variables can be characterized as a sextuple of attributes: (name, address, value, type,
lifetime, and scope)
o Name - not all variables have them (anonymous)
o Address - the memory address with which it is associated (also called l-value)
 A variable may have different addresses at different times during execution
 A variable may have different addresses at different places in a program
 If two variable names can be used to access the same memory location, they
are called aliases
 Aliases are harmful to readability (program readers must remember all of
them)
o How aliases can be created:
 Pointers, reference variables, C and C++ unions
 Some of the original justifications for aliases are no longer valid; e.g., memory
reuse in FORTRAN
 Replace them with dynamic allocation
o Type - determines the range of values of variables and the set of operations that are
defined for values of that type; in the case of floating point, type also determines the
Precision
o Value - the contents of the location with which the variable is associated
o Abstract memory

2.3 Binding

- The l-value of a variable is its address


- The r-value of a variable is its value
- Def: A binding is an association, such as between an attribute and an entity, or between
an operation and a symbol
- Def: Binding time is the time at which a binding takes place.
- Possible binding times:
 Language design time--e.g., bind operator symbols to operations
 Language implementation time--e.g., bind floating point type to a
representation
 Compile time--e.g., bind a variable to a type in C or Java
 Load time--e.g., bind a FORTRAN 77 variable to a memory cell (or a C
static variable)
 Runtime--e.g., bind a nonstatic local variable to a memory cell
Binding of attributes to Variables:
- Def: A binding is static if it first occurs before run time and remains unchanged throughout
program execution.
- Def: A binding is dynamic if it first occurs during execution or can change during execution of
the program.
Types of Bindings
 How is a type specified?
 When does the binding take place?
Static Binding
- If static, the type may be specified by either an explicit or an implicit declaration
- Def: An explicit declaration is a program statement used for declaring the types of variables
- Def: An implicit declaration is a default mechanism for specifying types of variables (the first
appearance of the variable in the program)
- FORTRAN, PL/I, BASIC, and Perl provide implicit declarations
- Advantage: writability
- Disadvantage: reliability (less trouble with Perl)
Dynamic Type Binding (JavaScript and PHP)
o Specified through an assignment statement e.g., JavaScript
list = [2, 4.33, 6, 8];
list = 17.3;
Advantage: flexibility (generic program units)
Disadvantages:
High cost (dynamic type checking and interpretation)
Type error detection by the compiler is difficult
Type Inferencing (ML, Miranda, and Haskell)-Rather than by assignment
statement, types are determined from the context of the reference
Storage Bindings & Lifetime
- Allocation - getting a cell from some pool of available cells
- Deallocation - putting a cell back into the pool
- Def: The lifetime of a variable is the time during which it is bound to a particular memory cell
Categories of variables by lifetimes
- Static--bound to memory cells before execution begins and remains bound to the same memory
cell throughout execution.
e.g., all FORTRAN 77 variables, C static variables
Advantages: efficiency (direct addressing), history-sensitive subprogram support
Disadvantage: lack of flexibility (no recursion)
- Stack-dynamic--Storage bindings are created for variables when their declaration statements are
elaborated.
If scalar, all attributes except address are statically bound
e.g., local variables in C subprograms and Java methods
Advantage: allows recursion; conserves storage
Disadvantages:
Overhead of allocation and deallocation
Subprograms cannot be history sensitive
Inefficient references (indirect addressing)
- Explicit heap-dynamic--Allocated and deallocated by explicit directives, specified by the
programmer, which take effect during execution
Referenced only through pointers or references
- e.g., dynamic objects in C++ (via new and delete) all objects in Java
Advantage: provides for dynamic storage management
Disadvantage: inefficient and unreliable

- Implicit heap dynamic--Allocation and deallocation caused by assignment statements


- e.g., all variables in APL; all strings and arrays in Perl and JavaScript
Advantage: flexibility
Disadvantages:
Inefficient, because all attributes are dynamic
Loss of error detection
2.4 Type Checking

- Generalize the concept of operands and operators to include subprograms and


assignments
- Type checking is the activity of ensuring that the operands of an operator are of
compatible types
- A compatible type is one that is either legal for the operator, or is allowed under language
rules to be implicitly converted, by compiler- generated code, to a legal type. This
automatic conversion is called as coercion.
- A type error is the application of an operator to an operand of an inappropriate type
- If all type bindings are static, nearly all type checking can be static
- If type bindings are dynamic, type checking must be dynamic
- Def: A programming language is strongly typed if type errors are always detected

2.5 Scope

- The scope of a variable is the range of statements over which it is visible


- The nonlocal variables of a program unit are those that are visible but not declared there
- The scope rules of a language determine how references to names are associated with
variables
- Scope and lifetime are sometimes closely related, but are different concepts
- Consider a static variable in a C or C++ function

Static scope

– Based on program text


– To connect a name reference to a variable, you (or the compiler) must find the declaration
– Search process: search declarations, first locally, then in increasingly larger enclosing scopes,
until one is found for the given name
– Enclosing static scopes (to a specific scope) are called its static ancestors; the nearest static
ancestor is called a static parent
-Variables can be hidden from a unit by having a "closer" variable with the same name
-C++ and Ada allow access to these "hidden" variables
– In Ada: unit.name
– In C++: class_name::name
Blocks
– A method of creating static scopes inside program units--from ALGOL 60
– Examples:
C and C++: for (...)
{ int index;
...
}
Ada: declare LCL : FLOAT;
begin
...
end
Evaluation of Static Scoping
Consider the example:
Assume MAIN calls A and B
A calls C and D
B calls A and E
Suppose the spec is changed so that D must now access some data in B
Solutions:
– Put D in B (but then C can no longer call it and D cannot access A's variables)
– Move the data from B that D needs to MAIN (but then all procedures can access them)
Same problem for procedure access
Overall: static scoping often encourages many globals

Dynamic Scope
– Based on calling sequences of program units, not their textual layout (temporal versus spatial)
– References to variables are connected to declarations by searching back through the chain of
subprogram calls that forced execution to this point

Scope Example
MAIN
- declaration of x
SUB1
- declaration of x -
...
call SUB2
...
SUB2
...
- reference to x -
...
...
call SUB1

Static scoping
– Reference to x is to MAIN's x
Dynamic scoping
– Reference to x is to SUB1's x
Evaluation of Dynamic Scoping:
– Advantage: convenience
– Disadvantage: poor readability

2.6 Scope Rules and 2.7 Lifetime and garbage collection


Kindly refer written Notes
2.8 Primitive Data Types
Data Types
A data type defines a collection of data objects and a set of predefined operations on those
objects
- A descriptor is the collection of the attributes of a variable
- An object represents an instance of a user-defined (abstract data) type
- One design issue for all data types: What operations are defined and how are they
specified?

Primitive Data Types


- Almost all programming languages provide a set of primitive data types
- Primitive data types: Those not defined in terms of other data types
- Some primitive data types are merely reflections of the hardware
- Others require only a little non-hardware support for their implementation
Integer
- Almost always an exact reflection of the hardware so the mapping is trivial
- There may be as many as eight different integer types in a language
- Java‘s signed integer sizes: byte, short, int, long
Floating Point
- Model real numbers, but only as approximations
- Languages for scientific use support at least two floating-point types (e.g.,
float and double); sometimes more
- Usually exactly like the hardware, but not always
- IEEE Floating-Point Standard 754
Complex
 Some languages support a complex type, e.g., Fortran and Python
 Each value consists of two floats, the real part and the imaginary part
 Literal form (in Python):
o (7 + 3j), where 7 is the real part and 3 is the imaginary part
Decimal
 For business applications (money)
 Essential to COBOL
 C# offers a decimal data type
 Store a fixed number of decimal digits, in coded form (BCD)
 Advantage: accuracy
 Disadvantages: limited range, wastes memory
Boolean
 Simplest of all
 Range of values: two elements, one for “true” and one for “false”
 Could be implemented as bits, but often as bytes
– Advantage: readability
Character
 Stored as numeric coding
 Most commonly used coding: ASCII
 An alternative, 16-bit coding: Unicode
– Includes characters from most natural languages
– Originally used in Java
– C# and JavaScript also support Unicode
2.9 Character String Types
 Values are sequences of characters
 Design issues:
– Is it a primitive type or just a special kind of array?
– Should the length of strings be static or dynamic?
Operations
Typical operations:
– Assignment and copying
– Comparison (=, >, etc.)
– Catenation
– Substring reference
– Pattern matching
Character String Type in Certain Languages
 C and C++
– Not primitive
– Use char arrays and a library of functions that provide operations
 SNOBOL4 (a string manipulation language)
 Primitive
 Many operations, including elaborate pattern matching
 Fortran and Python
– Primitive type with assignment and several operations
 Java
– Primitive via the String class
 Perl, JavaScript, Ruby, and PHP
– Provide built-in pattern matching, using regular expressions
Character String Length Options
 Static: COBOL, Java‘s String class
 Limited Dynamic Length: C and C++
– In these languages, a special character is used to indicate the end of a
string‘s characters, rather than maintaining the length
 Dynamic (no maximum): SNOBOL4, Perl, JavaScript
 Ada supports all three string length options
Evaluation
 Aid to writability
 As a primitive type with static length, they are inexpensive to provide—why not
have them?
 Dynamic length is nice, but is it worth the expense?
Implementation
 Static length: compile-time descriptor
 Limited dynamic length: may need a run-time descriptor for length (but not in C and
C++)
 Dynamic length: need run-time descriptor; allocation/de-allocation is the biggest
implementation problem
Figure Compile-Time Descriptor
Figure Run-Time Descriptors

2.10 Array Types


 An array is an aggregate of homogeneous data elements in which an individual element
is identified by its position in the aggregate, relative to the first element.
Array Design Issues
 What types are legal for subscripts?
 Are subscripting expressions in element references range checked?
 When are subscript ranges bound?
 When does allocation take place?
 What is the maximum number of subscripts?
 Can array objects be initialized?
 Are any kind of slices supported?
Array Indexing
 Indexing (or subscripting) is a mapping from indices to elements
o array_name (index_value_list) → an element
 Index Syntax
– FORTRAN, PL/I, Ada use parentheses
 Ada explicitly uses parentheses to show uniformity between array references and
function calls because both are mappings
– Most other languages use brackets
Arrays Index (Subscript) Types
 FORTRAN, C: integer only
 Ada: integer or enumeration (includes Boolean and char)
 Java: integer types only
 Index range checking
– C, C++, Perl, and Fortran do not specify range checking
– Java, ML, C# specify range checking
– In Ada, the default is to require range checking, but it can be turned off
Subscript Binding and Array Categories
 Static: subscript ranges are statically bound and storage allocation is static (before run-
time)
– Advantage: efficiency (no dynamic allocation)
 Fixed stack-dynamic: subscript ranges are statically bound, but the allocation is done at
declaration time
– Advantage: space efficiency
 Stack-dynamic: subscript ranges are dynamically bound and the storage allocation is
dynamic (done at run-time)
– Advantage: flexibility (the size of an array need not be known until the
array is to be used)
 Fixed heap-dynamic: similar to fixed stack-dynamic: storage binding is dynamic but
fixed after allocation (i.e., binding is done when requested and storage is allocated from
heap, not stack)
 Heap-dynamic: binding of subscript ranges and storage allocation is dynamic and can
change any number of times
– Advantage: flexibility (arrays can grow or shrink during program
execution)
 C and C++ arrays that include static modifier are static
 C and C++ arrays without static modifier are fixed stack-dynamic
 C and C++ provide fixed heap-dynamic arrays
 C# includes a second array class ArrayList that provides fixed heap-dynamic
 Perl, JavaScript, Python, and Ruby support heap-dynamic arrays
Array Initialization
 Some languages allow initialization at the time of storage allocation
– C, C++, Java, C# example
int list [] = {4, 5, 7, 83};
– Character strings in C and C++
char name [] = "freddie";
– Arrays of strings in C and C++
char *names [] = {"Bob", "Jake", "Joe"};
– Java initialization of String objects
String[] names = {"Bob", "Jake", "Joe"};
Heterogeneous Arrays
 A heterogeneous array is one in which the elements need not be of the same type
 Supported by Perl, Python, JavaScript, and Ruby
Arrays Operations
 APL provides the most powerful array processing operations for vectors and matrixes as
well as unary operators (for example, to reverse column elements)
 Ada allows array assignment but also catenation
 Python allows array assignments, but they are only reference changes. Python also supports
array catenation and element membership operations
 Ruby also provides array catenation
 Fortran provides elemental operations because they are between pairs of array elements
– For example, + operator between two arrays results in an array of the sums of
the element pairs of the two arrays
Rectangular and Jagged Arrays
 A rectangular array is a multi-dimensioned array in which all of the rows have the same
number of elements and all columns have the same number of elements
 A jagged matrix has rows with varying number of elements
– Possible when multi-dimensioned arrays actually appear as arrays of
arrays
 C, C++, and Java support jagged arrays
 Fortran, Ada, and C# support rectangular arrays (C# also supports jagged arrays)
Slices
 A slice is some substructure of an array; nothing more than a referencing mechanism
 Slices are only useful in languages that have array operations
Slice Examples
– Fortran 95
Integer, Dimension (10) :: Vector
Integer, Dimension (3, 3) :: Mat
Integer, Dimension (3, 3, 3) :: Cube
Vector (3:6) is a four element array

Figure Slices Examples in Fortran 95

Implementation of Arrays
 Access function maps subscript expressions to an address in the array
 Access function for single-dimensioned arrays:
address(list[k]) = address (list[lower_bound])+((k-lower_bound) * element_size)
Accessing Multi-dimensioned Arrays
 Two common ways:
– Row major order (by rows) – used in most languages
– Column major order (by columns) – used in Fortran

Figure Locating an Element in a Multi-dimensioned Array


Compile-Time Descriptors

Figure Single-Dimensioned Array
Figure Multi-Dimensioned Array

2.11 Associative Arrays


 An associative array is an unordered collection of data elements that are indexed by an
equal number of values called keys
– User-defined keys must be stored
 Design issues:
– What is the form of references to elements?
– Is the size static or dynamic?
Associative Arrays in Perl
 Names begin with %; literals are delimited by parentheses
o %hi_temps = ("Mon" => 77, "Tue" => 79, "Wed" => 65, …);
 Subscripting is done using braces and keys
o $hi_temps{"Wed"} = 83;
 Elements can be removed with delete
o delete $hi_temps{"Tue"};
2.12 Record Types
 A record is a possibly heterogeneous aggregate of data elements in which the individual
elements are identified by names
 Design issues:
– What is the syntactic form of references to the field?
– Are elliptical references allowed?
Definition of Records in COBOL
- COBOL uses level numbers to show nested records; others use recursive definition
01 EMP-REC.
02 EMP-NAME.
05 FIRST PIC X(20).
05 MID PIC X(10).
05 LAST PIC X(20).
02 HOURLY-RATE PIC 99V99.
Definition of Records in Ada
- Record structures are indicated in an orthogonal way
type Emp_Rec_Type is record
First: String (1..20);
Mid: String (1..10);
Last: String (1..20);
Hourly_Rate: Float;
end record;
Emp_Rec: Emp_Rec_Type;
References to Records
 Record field references
– COBOL
field_name OF record_name_1 OF ... OF record_name_n
– Others (dot notation)
record_name_1.record_name_2. ... record_name_n.field_name
 Fully qualified references must include all record names
 Elliptical references allow leaving out record names as long as the reference is
unambiguous, for example in COBOL
FIRST, FIRST OF EMP-NAME, and FIRST of EMP-REC are elliptical
references to the employee‘s first name
Operations on Records
 Assignment is very common if the types are identical
 Ada allows record comparison
 Ada records can be initialized with aggregate literals
 COBOL provides MOVE CORRESPONDING
– Copies a field of the source record to the corresponding field in the target record
Evaluation and Comparison to Arrays
 Records are used when collection of data values is heterogeneous
 Access to array elements is much slower than access to record fields, because subscripts
are dynamic (field names are static)
 Dynamic subscripts could be used with record field access, but it would disallow type
checking and it would be much slower

Figure Implementation of Record Type


2.13 Union Types
 A union is a type whose variables are allowed to store different type values at different
times during execution
 Design issues
– Should type checking be required?
– Should unions be embedded in records?
Discriminated vs. Free Unions
 Fortran, C, and C++ provide union constructs in which there is no language support for
type checking; the union in these languages is called free union
 Type checking of unions require that each union include a type indicator called a
discriminant
– Supported by Ada
Ada Union Types
type Shape is (Circle, Triangle, Rectangle);
type Colors is (Red, Green, Blue);
type Figure (Form: Shape) is record
Filled: Boolean;
Color: Colors;
case Form is
when Circle => Diameter: Float;
when Triangle =>Leftside, Rightside: Integer;
Angle: Float;
when Rectangle => Side1, Side2: Integer;
end case;
end record;
Evaluation of Unions
 Free unions are unsafe
 Do not allow type checking
 Java and C# do not support unions
 Reflective of growing concerns for safety in programming language
 Ada's discriminated unions are safe

Ada Union Type Illustrated

Figure Discriminated Union of Three Shape Variables

2.14 Pointer and Reference Types


 A pointer type variable has a range of values that consists of memory addresses and a
special value, nil
 Provide the power of indirect addressing
 Provide a way to manage dynamic memory
 A pointer can be used to access a location in the area where storage is dynamically
created (usually called a heap)
Design Issues of Pointers
 What are the scope of and lifetime of a pointer variable?
 What is the lifetime of a heap-dynamic variable?
 Are pointers restricted as to the type of value to which they can point?
 Are pointers used for dynamic storage management, indirect addressing, or both?
 Should the language support pointer types, reference types, or both?
Pointer Operations
 Two fundamental operations: assignment and dereferencing
 Assignment is used to set a pointer variable‘s value to some useful address
 Dereferencing yields the value stored at the location represented by the pointer‘s
value
– Dereferencing can be explicit or implicit
– C++ uses an explicit operation via *
j = *ptr
sets j to the value located at ptr
Pointer Assignment Illustration

Figure The assignment operation j = *ptr


Problems with Pointers
 Dangling pointers (dangerous)
– A pointer points to a heap-dynamic variable that has been deallocated
 Lost heap-dynamic variable
– An allocated heap-dynamic variable that is no longer accessible to the user
program (often called garbage)
 Pointer p1 is set to point to a newly created heap-dynamic variable
 Pointer p1 is later set to point to another newly created heap-dynamic variable
 The process of losing heap-dynamic variables is called memory leakage
Pointers in Ada
 Some dangling pointers are disallowed because dynamic objects can be automatically
deallocated at the end of pointer's type scope
 The lost heap-dynamic variable problem is not eliminated by Ada (possible with
UNCHECKED_DEALLOCATION)
Pointers in C and C++
 Extremely flexible but must be used with care
 Pointers can point at any variable regardless of when or where it was allocated
 Used for dynamic storage management and addressing
 Pointer arithmetic is possible
 Explicit dereferencing and address-of operators
 Domain type need not be fixed (void *)
void * can point to any type and can be type
checked (cannot be de-referenced)
Pointer Arithmetic in C and C++
float stuff[100];
float *p;
p = stuff;
*(p+5) is equivalent to stuff[5] and p[5]
*(p+i) is equivalent to stuff[i] and p[i]
Reference Types
 C++ includes a special kind of pointer type called a reference type that is used
primarily for formal parameters
– Advantages of both pass-by-reference and pass-by-value
 Java extends C++‘s reference variables and allows them to replace pointers entirely
– References are references to objects, rather than being addresses
 C# includes both the references of Java and the pointers of C++
Evaluation of Pointers
 Dangling pointers and dangling objects are problems as is heap management
 Pointers are like goto's--they widen the range of cells that can be accessed by a
variable
 Pointers or references are necessary for dynamic data structures--so we can't design a
language without them
Representations of Pointers
 Large computers use single values
 Intel microprocessors use segment and offset
Dangling Pointer Problem
 Tombstone: extra heap cell that is a pointer to the heap-dynamic variable
– The actual pointer variable points only at tombstones
– When heap-dynamic variable de-allocated, tombstone remains but set to
nil
– Costly in time and space
 Locks-and-keys: Pointer values are represented as (key, address) pairs
– Heap-dynamic variables are represented as variable plus cell for integer
lock value
– When heap-dynamic variable allocated, lock value is created and placed in
lock cell and key cell of pointer
Heap Management
 A very complex run-time process
 Single-size cells vs. variable-size cells
 Two approaches to reclaim garbage
– Reference counters (eager approach): reclamation is gradual
– Mark-sweep (lazy approach): reclamation occurs when the list of variable space
becomes empty
Reference Counter
 Reference counters: maintain a counter in every cell that store the number of
pointers currently pointing at the cell
– Disadvantages: space required, execution time required, complications for cells
connected circularly
– Advantage: it is intrinsically incremental, so significant delays in the application
execution are avoided
Mark-Sweep
 The run-time system allocates storage cells as requested and disconnects
pointers from cells as necessary; mark-sweep then begins
– Every heap cell has an extra bit used by collection algorithm
– All cells initially set to garbage
– All pointers traced into heap, and reachable cells marked as not garbage
– All garbage cells returned to list of available cells
– Disadvantages: in its original form, it was done too infrequently. When done, it caused
significant delays in application execution. Contemporary mark-sweep algorithms avoid this by
doing it more often, called incremental mark-sweep

Figure Marking Algorithm

Variable-Size Cells
 All the difficulties of single-size cells plus more
 Required by most programming languages
 If mark-sweep is used, additional problems occur
– The initial setting of the indicators of all cells in the heap is difficult
– The marking process is nontrivial
– Maintaining the list of available space is another source of overhead

2.15 Arithmetic Expressions

 Arithmetic Expressions consist of operators, operands, parentheses and function calls.


Design Issues
Design issues for arithmetic expressions
 Operator precedence rules
 Operator associativity rules
 Order of operand evaluation
 Operand evaluation side effects
 Operator overloading
 Type mixing in expressions
Operators
 A unary operator has one operand.
 A binary operator has two operands.
 A ternary operator has three operands.
Operator Precedence Rules
 The operator precedence rules for expression evaluation define the order in which
adjacent operators of different precedence levels are evaluated.
 Typical precedence levels:
 parentheses
 unary operators
 ** (if the language supports it)
 *, /
 +, -
Operator Associativity Rule
 The operator associativity rules for expression evaluation define the order in which
adjacent operators with the same precedence level are evaluated.
 Typical associativity rules:
 Left to right, except **(Ruby and Fortran), which is right to left
 Sometimes unary operators associate right to left (e.g., in FORTRAN)
– APL is different; all operators have equal precedence and all operators associate
right to left.
– Precedence and associativity rules can be overridden with parentheses.
Conditional Expressions
 Conditional Expressions (ternary operator ?:) available in C-based languages.
An example: (C, C++)
average = (count == 0)? 0 : sum/count
Evaluates as if written like
if (count == 0)
average = 0
else
average = sum /count
Operand Evaluation Order
 Operand evaluation order as follows.
 Variables: fetch the value from memory.
 Constants: sometimes a fetch from memory; sometimes the constant is in the
machine language instruction.
 Parenthesized expressions: evaluate all operands and operators first.
 The most interesting case is when an operand is a function call.
Potentials for Side Effects:
 when a function changes a two-way parameter or a non-local variable
 Problem with functional side effects: When a function referenced in an expression alters
another operand of the expression;
e.g., for a parameter change:
a = 10;
/* assume that fun changes its parameter */
b = a + fun(a);
Two possible solutions to the functional side effects problem: Write the language definition to
disallow functional side effects.
 No two-way parameters in functions
 No non-local references in functions
Advantage: it works!
Disadvantages: inflexibility of one-way parameters and lack of non-local references
– Write the language definition to demand that operand evaluation order be fixed.
Disadvantage: limits some compiler optimizations
– Java requires that operands appear to be evaluated in left-to-right order.

2.16 Overloaded Operators


Use of an operator for more than one purpose is called operator overloading.
 Some are common (e.g., + for int and float)
 Some are potential trouble (e.g., * in C and C++)
Problems:
 Loss of compiler error detection (omission of an operand should be a detectable error)
 Some loss of readability
 Can be avoided by introduction of new symbols (e.g., Pascal‘s div for integer division)
 C++, Ada, Fortran 95, and C# allow user-defined overloaded operators
Potential problems:
 Users can define nonsense operations.
 Readability may suffer, even when the operators make sense.

Type Conversions
 A narrowing conversion is one that converts an object to a type that cannot include all of
the values of the original type.
o e.g., float to int
 A widening conversion is one in which an object is converted to a type that can include at
least approximations to all of the values of the original type.
o e.g., int to float
Mixed Mode
 A mixed-mode expression is one that has operands of different types
 A coercion is an implicit type conversion
Disadvantage of coercions:
– They decrease the type error detection ability of the compiler
 In most languages, all numeric types are coerced in expressions, using widening
conversions.
 In Ada, there are virtually no coercions in expressions
Explicit type conversions are called casting in C-based languages.
Examples:-
C: (int)angle, Ada: Float (Sum)
– Note that Ada’s syntax is similar to that of function calls
Causes of Errors in Expressions
 Inherent limitations of arithmetic e.g., division by zero
 Limitations of computer arithmetic e.g., overflow
 Often ignored by the run-time system

2.17 Relational and Boolean Expressions


Relational Expressions:
 Use relational operators and operands of various types
 Evaluate to some Boolean representation
 Operator symbols used vary somewhat among languages (!=, /=, .NE., <>, #)
 JavaScript and PHP have two additional relational operators, === and !==
 Similar to their cousins, == and !=, except that they do not coerce their operands
Boolean Expressions:
 Operands are Boolean and the result is Boolean
Example operators
FORTRAN 77   FORTRAN 90   C     Ada
.AND.        and          &&    and
.OR.         or           ||    or
.NOT.        not          !     not
---          ---          ---   xor
 No Boolean Type in C
 C89 has no Boolean type--it uses int type with 0 for false and nonzero for true
 One odd characteristic of C‘s expressions: a<b<c is a legal expression, but the result is
not what you might expect:
 Left operator is evaluated, producing 0 or 1
 The evaluation result is then compared with the third operand (i.e., c).

2.18 Assignment Statements


 The general syntax: <target_var> <assign_operator> <expression>
 The assignment operator
 ‘=’ in FORTRAN, BASIC, the C-based languages
 ‘:=’ in ALGOL, Pascal, Ada
 Equal ‘=’ can be bad when it is overloaded for the relational operator for equality (that‘s
why the C-based languages use == as the relational operator)
Conditional Targets (Perl)
($flag ? $total : $subtotal) = 0
Which is equivalent to
if ($flag){$total = 0}
else {$subtotal = 0}
Compound Assignment Operators
 A shorthand method of specifying a commonly needed form of assignment.
 Introduced in ALGOL; adopted by C
Ex: ‘a = a + b’ is written as ‘a += b’
Unary Assignment Operators
 Unary assignment operators in C-based languages combine increment and decrement
operations with assignment.
For example
 sum += ++count (count incremented, then added to sum)
 sum += count++ (count added to sum, then incremented)
 count++ (count incremented)
 -count++ (count's current value negated in the expression; count then incremented)
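Python has no ++ operator, but the prefix/postfix distinction can be sketched by writing the increments out explicitly:

```python
count = 5
total = 0

# C's  total += ++count : increment first, then add
count += 1
total += count    # count == 6, total == 6

# C's  total += count++ : add the current value, then increment
total += count    # total == 12
count += 1        # count == 7
```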
Assignment as an Expression
 In C, C++, and Java, the assignment statement produces a result and can therefore be
used as an operand.
while ((ch = getchar())!= EOF){…}
ch = getchar() is carried out; the result (assigned to ch) is used as a conditional
value for the while statement
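Python 3.8's := ("walrus") operator gives the analogous assignment-as-expression idiom; in this sketch the sentinel value 0 stands in for EOF:

```python
values = iter([10, 20, 0])
collected = []
# assign and test in one expression, as with ch = getchar() in C
while (x := next(values)) != 0:
    collected.append(x)
```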
List Assignments
 Perl and Ruby support list assignments
e.g., ($first, $second, $third) = (20, 30, 40);
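Python's tuple unpacking is the analogous feature:

```python
first, second, third = 20, 30, 40   # unpacks like Perl's list assignment
first, second = second, first       # common use: swap without a temporary
```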

2.19 Mixed-Mode Assignment


Assignment statements can also be mixed-mode, for example
int a, b;
float c;
c = a / b;
– In Fortran, C, and C++, any numeric type value can be assigned to any numeric
type variable.
– In Java, only widening assignment coercions are done.
– In Ada, there is no assignment coercion.
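In Python the corresponding assignment never truncates, because / always yields a float; C-style integer division must be requested explicitly. A sketch (note that // floors, whereas C truncates toward zero, so they agree only for nonnegative operands):

```python
a, b = 7, 2
quotient = a / b    # true division: 3.5, no mixed-mode surprise
c_like = a // b     # floor division: 3, what C's int division gives here
```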

2.20 Control Structures


 A control structure is a control statement and the statements whose execution it controls.
 Levels of Control Flow
 Within expressions
 Among program units
 Among program statements
Control Statements: Evolution
 FORTRAN I control statements were based directly on IBM 704 hardware
 Much research and argument in the 1960s about the issue
 One important result: It was proven that all algorithms represented by flowcharts can be
coded with only two-way selection and pretest logical loops
2.21 Selection Statements
 A selection statement provides the means of choosing between two or more paths of
execution. Two general categories:
 Two-way selectors
 Multiple-way selectors
Two-Way Selection Statements
 General form as follows…
if control_expression
then clause
else clause
Design Issues:
 What is the form and type of the control expression?
 How are the then and else clauses specified?
 How should the meaning of nested selectors be specified?
The Control Expression
 If the ‘then’ reserved word or some other syntactic marker is not used to introduce the
‘then’ clause, the control expression is placed in parentheses.
 In C89, C99, Python, and C++, the control expression can be arithmetic.
 In languages such as Ada, Java, Ruby, and C#, the control expression must be Boolean.
Clause Form
 In many contemporary languages, the then and else clauses can be single statements or
compound statements
 In Perl, all clauses must be delimited by braces (they must be compound)
 In Fortran 95, Ada, and Ruby, clauses are statement sequences
 Python uses indentation to define clauses
if x > y :
    x = y
    print "case 1"
Nesting Selectors:Java example
if (sum == 0)
  if (count == 0)
    result = 0;
  else
    result = 1;
 Which if gets the else?
 Java's static semantics rule: else matches with the nearest if
 To force an alternative semantics, compound statements may be used:
if (sum == 0) {
  if (count == 0)
    result = 0;
}
else
  result = 1;
 The above solution is used in C, C++, and C#
 Perl requires all then and else clauses to be compound
 Statement sequences as clauses: Ruby
if sum == 0 then
  if count == 0 then
    result = 0
  else
    result = 1
  end
end
 Python
if sum == 0 :
    if count == 0 :
        result = 0
    else :
        result = 1
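The Python version can be checked by wrapping it in a function (the name classify is made up); indentation, not a nearest-if rule, decides which if gets the else:

```python
def classify(total, count):
    result = None
    if total == 0:
        if count == 0:
            result = 0
        else:           # indentation binds this else to the inner if
            result = 1
    return result
```

When total is nonzero, neither branch runs, so the function returns None.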
Multiple-Way Selection Statements
 Allow the selection of one of any number of statements or statement groups
Design Issues:
 What is the form and type of the control expression?
 How are the selectable segments specified?
 Is execution flow through the structure restricted to include just a single selectable
segment?
 How are case values specified?
 What is done about unrepresented expression values?
Multiple-Way Selection: Examples
 C, C++, and Java
switch (expression) {
  case const_expr_1: stmt_1;
  ...
  case const_expr_n: stmt_n;
  [default: stmt_n+1]
}
 Design choices for C‘s switch statement
 Control expression can be only an integer type
 Selectable segments can be statement sequences, blocks, or compound statements
 Any number of segments can be executed in one execution of the construct (there is no
implicit branch at the end of selectable segments)
 default clause is for unrepresented values (if there is no default, the whole statement
does nothing)
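C's implicit fall-through (no branch at the end of a segment) can be sketched in Python with a flag: once a case matches, every later segment also runs. The function and case values are illustrative only:

```python
def c_style_switch(value, cases=(1, 2, 3)):
    executed = []
    matched = False
    for c in cases:
        if matched or value == c:   # once matched, keep falling through
            matched = True
            executed.append(c)      # stands in for the segment's statements
    return executed
```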
 C#
– Differs from C in that it has a static semantics rule that disallows the
implicit execution of more than one segment
– Each selectable segment must end with an unconditional branch (goto or
break)
 Ada
case expression is
  when choice list => stmt_sequence;
  ...
  when choice list => stmt_sequence;
  [when others => stmt_sequence;]
end case;
More reliable than C's switch (once a stmt_sequence execution is completed, control is passed to
the first statement after the case statement)
 Ada design choices:
1. Expression can be any ordinal type
2. Segments can be single or compound
3. Only one segment can be executed per execution of the construct
4. Unrepresented values are not allowed
 Constant List Forms:
 A list of constants
 Can include:
- Subranges
- Boolean OR operators (|)
Multiple-Way Selection Using if
Multiple Selectors can appear as direct extensions to two-way selectors, using else-if clauses, for
example in Python:
if count < 10 :
    bag1 = True
elif count < 100 :
    bag2 = True
elif count < 1000 :
    bag3 = True
2.22 Iterative Statements
 The repeated execution of a statement or compound statement is accomplished either by
iteration or recursion
 General design issues for iteration control statements:
1. How is iteration controlled?
2. Where is the control mechanism in the loop?
Counter-Controlled Loops
A counting iterative statement has a loop variable, and a means of specifying the initial,
terminal, and stepsize values
Design Issues:
 What are the type and scope of the loop variable?
 What is the value of the loop variable at loop termination?
 Should it be legal for the loop variable or loop parameters to be changed in the loop body,
and if so, does the change affect loop control?
 Should the loop parameters be evaluated only once, or once for every iteration?

Iterative Statements: Examples


FORTRAN 95 syntax
DO label var = start, finish [, stepsize]
Stepsize can be any value but zero
Parameters can be expressions
Design choices:
1. Loop variable must be INTEGER
2. Loop variable always has its last value
3. The loop variable cannot be changed in the loop, but the parameters can; because they are
evaluated only once, it does not affect loop control
4. Loop parameters are evaluated only once
 FORTRAN 95 : a second form:
[name:] Do variable = initial, terminal [, stepsize]
...
End Do [name]
– Cannot branch into either of Fortran‘s Do statements
 Ada
for var in [reverse] discrete_range loop ...
end loop
 Design choices:
– Type of the loop variable is that of the discrete range (A discrete range is a sub-range of
an integer or enumeration type).
– Loop variable does not exist outside the loop
– The loop variable cannot be changed in the loop, but the discrete range can; it does not
affect loop control
– The discrete range is evaluated just once
– Cannot branch into the loop body
 C-based languages
for ([expr_1] ; [expr_2] ; [expr_3]) statement
– The expressions can be whole statements, or even statement sequences,
with the statements separated by commas
– The value of a multiple-statement expression is the value of the last
statement in the expression
– If the second expression is absent, it is an infinite loop
 Design choices:
– There is no explicit loop variable
– Everything can be changed in the loop
– The first expression is evaluated once, but the other two are evaluated with each
iteration
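Those evaluation rules can be sketched directly: the first expression runs exactly once, the other two on every iteration (the helper and the environment dict are illustrative, not part of any language):

```python
def c_for(init, cond, step, body):
    env = init()        # expr_1: evaluated exactly once
    while cond(env):    # expr_2: re-evaluated each iteration
        body(env)
        step(env)       # expr_3: re-evaluated each iteration
    return env

# sum the integers 0..4, like: for (i = 0, s = 0; i < 5; i++) s += i;
env = c_for(lambda: {"i": 0, "sum": 0},
            lambda e: e["i"] < 5,
            lambda e: e.update(i=e["i"] + 1),
            lambda e: e.update(sum=e["sum"] + e["i"]))
```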
 C++ differs from C in two ways:
– The control expression can also be Boolean
– The initial expression can include variable definitions (scope is from the
definition to the end of the loop body)
 Java and C#
– Differs from C++ in that the control expression must be Boolean
Iterative Statements: Logically-Controlled Loops
– Repetition control is based on a Boolean expression
 Design issues:
– Pretest or posttest?
– Should the logically controlled loop be a special case of the counting loop statement or
a separate statement?
Iterative Statements: Logically-Controlled Loops: Examples
 C and C++ have both pretest and posttest forms, in which the control expression can be
arithmetic:
while (ctrl_expr)
  loop body

do
  loop body
while (ctrl_expr);
 Java is like C and C++, except the control expression must be Boolean (and the body can
only be entered at the beginning -- Java has no goto)
 Ada has a pretest version, but no posttest
 FORTRAN 95 has neither
 Perl and Ruby have two pretest logical loops, while and until. Perl also has two posttest
loops
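Python, like Ada, has only the pretest form; the usual emulation of a posttest (do-while) loop runs the body at least once and tests at the bottom:

```python
def collect_until_zero(values):
    it = iter(values)
    seen = []
    while True:
        x = next(it)    # body runs before any test, as in do-while
        seen.append(x)
        if x == 0:      # posttest at the bottom of the loop
            break
    return seen
```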
2.23 Unconditional Branching: User-Located Loop Control Mechanisms
 Sometimes it is convenient for the programmers to decide a location for loop
 control (other than top or bottom of the loop)
 Simple design for single loops (e.g., break)
 Design issues for nested loops
– Should the conditional be part of the exit?
– Should control be transferable out of more than one loop?
User-Located Loop Control Mechanisms break and continue
 C , C++, Python, Ruby, and C# have unconditional unlabeled exits (break)
 Java and Perl have unconditional labeled exits (break in Java, last in Perl)
 C, C++, and Python have an unlabeled control statement, continue, that skips the
remainder of the current iteration, but does not exit the loop
 Java and Perl have labeled versions of continue
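Python has only the unlabeled forms; an exit from nested loops (Java's labeled break) is commonly emulated with a flag, as in this sketch:

```python
found = None
for i in range(3):
    for j in range(3):
        if i * j == 2:
            found = (i, j)
            break          # exits the inner loop only
    if found is not None:
        break              # the flag emulates Java's labeled break
```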
Iterative Statements: Iteration Based on Data Structures
 The number of elements in a data structure controls loop iteration
 Control mechanism is a call to an iterator function that returns the next element in some
chosen order, if there is one; otherwise the loop is terminated
 C's for can be used to build a user-defined iterator:
for (p = root; p != NULL; p = traverse(p)) { }
 C#‘s foreach statement iterates on the elements of arrays and other collections:
string[] strList = {"Bob", "Carol", "Ted"};
foreach (string name in strList)
    Console.WriteLine ("Name: {0}", name);
The notation {0} indicates the position in the string to be displayed
 Perl has a built-in iterator for arrays and hashes, foreach
Unconditional Branching
 Transfers execution control to a specified place in the program
 Represented one of the most heated debates of the 1960s and 1970s
 Well-known mechanism: goto statement
 Major concern: Readability
 Some languages do not support goto statement (e.g., Java)
 C# offers goto statement (can be used in switch statements)
 Loop exit statements are restricted and somewhat camouflaged goto‘s

2.24 Guarded Commands


 Designed by Dijkstra
 Purpose: to support a new programming methodology that supported verification
(correctness) during development
 Basis for two linguistic mechanisms for concurrent programming (in CSP and Ada)
 Basic Idea: if the order of evaluation is not important, the program should not specify one
Selection Guarded Command
•Form
if <Boolean exp> -> <statement>
[] <Boolean exp> -> <statement>
...
[] <Boolean exp> -> <statement>
fi
Semantics: when construct is reached,
 Evaluate all Boolean expressions
 If more than one are true, choose one non-deterministically
 If none are true, it is a runtime error
Loop Guarded Command
•Form
do <Boolean> -> <statement>
[] <Boolean> -> <statement>
...
[] <Boolean> -> <statement>
od
Semantics: for each iteration
 Evaluate all Boolean expressions
 If more than one are true, choose one non-deterministically; then start loop again
 If none are true, exit loop
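The do-od semantics can be sketched as a tiny interpreter: each iteration evaluates every guard, nondeterministically picks one that is true, and exits when none hold. The driver below runs Dijkstra's classic GCD loop, do x>y -> x:=x-y [] y>x -> y:=y-x od (the helper name and state dict are made up):

```python
import random

def guarded_do(clauses, state):
    # clauses: list of (guard, action) pairs over a mutable state dict
    while True:
        ready = [action for guard, action in clauses if guard(state)]
        if not ready:
            return state             # no guard true: exit the loop
        random.choice(ready)(state)  # nondeterministic choice among true guards

state = {"x": 12, "y": 18}
guarded_do([
    (lambda s: s["x"] > s["y"], lambda s: s.update(x=s["x"] - s["y"])),
    (lambda s: s["y"] > s["x"], lambda s: s.update(y=s["y"] - s["x"])),
], state)
# terminates with x == y == gcd(12, 18)
```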
Guarded Commands: Rationale
 Connection between control statements and program verification is intimate
 Verification is impossible with goto statements
 Verification is possible with only selection and logical pretest loops
 Verification is relatively simple with only guarded commands
UNIT -3
SUBPROGRAMS AND IMPLEMENTATIONS

Subprograms – design issues – local referencing – parameter passing – overloaded methods –


generic methods – design issues for functions – semantics of call and return – implementing
simple subprograms – stack and dynamic local variables – nested subprograms – blocks –
dynamic scoping
3.1 Subprograms
 Each subprogram has a single entry point
 The calling program is suspended during execution of the called subprogram
 Control always returns to the caller when the called subprogram‘s execution terminates
Basic Definitions
 A subprogram definition describes the interface to and the actions of the subprogram
abstraction
 A subprogram call is an explicit request that the subprogram be executed
 A subprogram header is the first part of the definition, including the name, the kind of
subprogram, and the formal parameters
 The parameter profile (aka signature) of a subprogram is the number, order, and types of
its parameters
 The protocol is a subprogram‘s parameter profile and, if it is a function, its return type
 Function declarations in C and C++ are often called prototypes
 A subprogram declaration provides the protocol, but not the body, of the subprogram
 A formal parameter is a dummy variable listed in the subprogram header and used in the
subprogram
 An actual parameter represents a value or address used in the subprogram call statement
Actual/Formal Parameter Correspondence
 Positional
– The binding of actual parameters to formal parameters is by position: the
first actual parameter is bound to the first formal parameter and so forth
– Safe and effective
 Keyword
– The name of the formal parameter to which an actual parameter is to be
bound is specified with the actual parameter
– Parameters can appear in any order
Formal Parameter Default Values
 In certain languages (e.g., C++, Ada), formal parameters can have default values (used if
no actual parameter is passed)
– In C++, default parameters must appear last because parameters are
positionally associated
 C# methods can accept a variable number of parameters as long as they are of the same
type
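A Python sketch combining these ideas (the function name is made up): positional and keyword correspondence, a defaulted formal, and a variable number of same-kind arguments like C#'s params:

```python
def log(msg, *values, level="info"):
    # msg: positional; *values: variable-length, like C#'s params arrays;
    # level: keyword parameter with a default value
    return "[{0}] {1} {2}".format(level, msg, list(values))

by_position = log("start", 1, 2)            # positional correspondence
by_keyword = log(level="warn", msg="disk")  # keyword args in any order
```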
Procedures and Functions
 There are two categories of subprograms
 Procedures are collections of statements that define parameterized computations
 Functions structurally resemble procedures but are semantically modeled on
mathematical functions
 They are expected to produce no side effects
 In practice, program functions have side effects
3.2 Design Issues for Subprograms
 What parameter passing methods are provided?
 Are parameter types checked?
 Are local variables static or dynamic?
 Can subprogram definitions appear in other subprogram definitions?
 Can subprograms be overloaded?
 Can subprogram be generic?
 The scope of a variable is the range of statements over which it is visible
 The nonlocal variables of a program unit are those that are visible but not declared there
 The scope rules of a language determine how references to names are associated with
variables
 Scope and lifetime are sometimes closely related, but are different concepts
 Consider a static variable in a C or C++ function

Static scope

– Based on program text


– To connect a name reference to a variable, you (or the compiler) must find the declaration
– Search process: search declarations, first locally, then in increasingly larger enclosing scopes,
until one is found for the given name
– Enclosing static scopes (to a specific scope) are called its static ancestors; the nearest static
ancestor is called a static parent
-Variables can be hidden from a unit by having a "closer" variable with the same name
-C++ and Ada allow access to these "hidden" variables
– In Ada: unit.name
– In C++: class_name::name
Blocks
– A method of creating static scopes inside program units--from ALGOL 60
– Examples:
C and C++: for (...)
{ int index;
...
}
Ada: declare LCL : FLOAT;
begin
...
end

Evaluation of Static Scoping


Consider the example:
Assume MAIN calls A and B
A calls C and D
B calls A and E
Suppose the spec is changed so that D must now access some data in B
Solutions:
– Put D in B (but then C can no longer call it and D cannot access A's variables)
– Move the data from B that D needs to MAIN (but then all procedures can access them)
Same problem for procedure access
Overall: static scoping often encourages many globals

Dynamic Scope
– Based on calling sequences of program units, not their textual layout (temporal versus spatial)
– References to variables are connected to declarations by searching back through the chain of
subprogram calls that forced execution to this point

Scope Example
MAIN
- declaration of x
SUB1
- declaration of x -
...
call SUB2
...
SUB2
...
- reference to x -
...
...
call SUB1

Static scoping
– Reference to x is to MAIN's x
Dynamic scoping
– Reference to x is to SUB1's x
Evaluation of Dynamic Scoping:
– Advantage: convenience
– Disadvantage: poor readability
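Python is statically scoped, so the MAIN/SUB1/SUB2 skeleton above can be run directly; the reference in sub2 resolves to MAIN's x even when sub2 is called from sub1:

```python
x = "MAIN's x"

def sub2():
    return x            # static scoping: found in the enclosing (module) scope

def sub1():
    x = "SUB1's x"      # under dynamic scoping, sub2 would see this one
    return sub2()
```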
3.3 Local Referencing Environments
 Def: The referencing environment of a statement is the collection of all names that are
visible in the statement
 In a static-scoped language, it is the local variables plus all of the visible variables in all
of the enclosing scopes
 A subprogram is active if its execution has begun but has not yet terminated
 In a dynamic-scoped language, the referencing environment is the local variables plus all
visible variables in all active subprograms
 Local variables can be stack-dynamic (bound to storage)
Advantages
 Support for recursion
 Storage for locals is shared among some subprograms
Disadvantages
 Allocation/de-allocation, initialization time
 Indirect addressing
 Subprograms cannot be history sensitive
 Local variables can be static
 More efficient (no indirection)
 No run-time overhead
3.4 Parameter Passing Methods
Ways in which parameters are transmitted to and/or from called subprograms
 Pass-by-value
 Pass-by-result
 Pass-by-value-result
 Pass-by-reference
 Pass-by-name
Figure Models of Parameter Passing

The value of the actual parameter is used to initialize the corresponding formal parameter
– Normally implemented by copying
– Can be implemented by transmitting an access path but not recommended (enforcing
write protection is not easy)
– When copies are used, additional storage is required
– Storage and copy operations can be costly
Pass-by-Result (Out Mode)
 When a parameter is passed by result, no value is transmitted to the subprogram;
the corresponding formal parameter acts as a local variable; its value is
transmitted to caller‘s actual parameter when control is returned to the caller
– Require extra storage location and copy operation
 Potential problem: sub(p1, p1); whichever formal parameter is copied back will
represent the current value of p1
Pass-by-Value-Result (Inout Mode)
 A combination of pass-by-value and pass-by-result
 Sometimes called pass-by-copy
 Formal parameters have local storage
 Disadvantages:
– Those of pass-by-result
– Those of pass-by-value
Pass-by-Reference (Inout Mode)
 Pass an access path
 Also called pass-by-sharing
 Passing process is efficient (no copying and no duplicated storage)
 Disadvantages
– Slower accesses (compared to pass-by-value) to formal parameters
– Potentials for un-wanted side effects
– Un-wanted aliases (access broadened)
Pass-by-Name (Inout Mode)
 By textual substitution
 Formals are bound to an access method at the time of the call, but actual binding
to a value or address takes place at the time of a reference or assignment
 Allows flexibility in late binding
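Pass-by-name can be sketched with thunks: the actual parameter is re-evaluated at each reference inside the callee, so changing i between calls changes which element the "same" parameter names. All names here are hypothetical:

```python
data = [1, 2, 3]
i = 0

def get_x():
    return data[i]      # thunk: the actual parameter, re-evaluated per reference

def set_x(v):
    data[i] = v

def bump(get, put):
    put(get() + 10)     # the callee references the parameter through the thunks

bump(get_x, set_x)      # affects data[0]
i = 2
bump(get_x, set_x)      # late binding: the same "parameter" now names data[2]
```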
Implementing Parameter-Passing Methods
 In most languages, parameter communication takes place through the run-time stack
 Pass-by-reference is the simplest to implement; only an address is placed in the
stack
 A subtle but fatal error can occur with pass-by-reference and pass-by-value-result:
a formal parameter corresponding to a constant can mistakenly be changed
Parameter Passing Methods of Major Languages
 Fortran
– Always used the inout semantics model
– Before Fortran 77: pass-by-reference
– Fortran 77 and later: scalar variables are often passed by value-result
 C
– Pass-by-value
– Pass-by-reference is achieved by using pointers as parameters
 C++
– A special pointer type called reference type for pass-by-reference
 Java
– All parameters are passed by value
– Object parameters are in effect passed by reference (the reference itself is
passed by value)
 Ada
– Three semantics modes of parameter transmission: in, out, in out; in is the
default mode
– Formal parameters declared out can be assigned but not referenced; those
declared in can be referenced but not assigned; in out parameters can be
referenced and assigned
 C#
– Default method: pass-by-value
– Pass-by-reference is specified by preceding both a formal parameter and
its actual parameter with ref
 PHP: very similar to C#
 Perl: all actual parameters are implicitly placed in a predefined array named @_
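Python behaves like the Java description here: the reference is copied by value, so mutations through it are visible to the caller, but rebinding the formal parameter is not. A runnable sketch:

```python
def mutate(items):
    items.append(99)    # both names refer to one list: the caller sees this

def rebind(items):
    items = [0]         # rebinds only the local name: the caller is unaffected

nums = [1, 2]
mutate(nums)
rebind(nums)
```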
Type Checking Parameters
 Considered very important for reliability
 FORTRAN 77 and original C: none
 Pascal, FORTRAN 90, Java, and Ada: it is always required
 ANSI C and C++: choice is made by the user
– Prototypes
 Relatively new languages Perl, JavaScript, and PHP do not require type checking
Multidimensional Arrays as Parameters
 If a multidimensional array is passed to a subprogram and the subprogram is
separately compiled, the compiler needs to know the declared size of that array to
build the storage mapping function
Multidimensional Arrays as Parameters: C and C++
 Programmer is required to include the declared sizes of all but the first subscript in
the actual parameter
 Disallows writing flexible subprograms
 Solution: pass a pointer to the array and the sizes of the dimensions as other
parameters; the user must include the storage mapping function in terms of the size
parameters
Multidimensional Arrays as Parameters: Pascal and Ada
 Pascal
– Not a problem; declared size is part of the array‘s type
 Ada
– Constrained arrays - like Pascal
– Unconstrained arrays - declared size is part of the object declaration
Multidimensional Arrays as Parameters: Fortran
- Formal parameters that are arrays have a declaration after the header
– For single-dimension arrays, the subscript is irrelevant
– For multi-dimensional arrays, the subscripts allow the storage-mapping
function
Multidimensional Arrays as Parameters: Java and C#
 Similar to Ada
 Arrays are objects; they are all single-dimensioned, but the elements can be arrays
 Each array inherits a named constant (length in Java, Length in C#) that is set to the
length of the array when the array object is created
Design Considerations for Parameter Passing
 Two important considerations
– Efficiency
– One-way or two-way data transfer
 But the above considerations are in conflict
– Good programming suggest limited access to variables, which means one
way whenever possible
– But pass-by-reference is more efficient to pass structures of significant
size

3.5 Overloaded Subprograms


 An overloaded subprogram is one that has the same name as another subprogram in
the same referencing environment
– Every version of an overloaded subprogram has a unique protocol
 C++, Java, C#, and Ada include predefined overloaded subprograms
 In Ada, the return type of an overloaded function can be used to disambiguate calls
(thus two overloaded functions can have the same parameters)
 Ada, Java, C++, and C# allow users to write multiple versions of subprograms with
the same name
3.6 Generic Subprograms
 A generic or polymorphic subprogram takes parameters of different types on
different activations
 Overloaded subprograms provide ad hoc polymorphism
 A subprogram that takes a generic parameter that is used in a type expression that
describes the type of the parameters of the subprogram provides parametric
polymorphism
Examples of parametric polymorphism: C++
template <class Type>
Type max(Type first, Type second) {
return first > second ? first : second;
}
The above template can be instantiated for any type for which operator > is defined
int max (int first, int second) {
return first > second? first : second;
}
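In a dynamically typed sketch the same parametric polymorphism falls out for free: one definition works for any operand type that defines >, like the template above:

```python
def max_of(first, second):
    # works for ints, floats, strings -- anything supporting >
    return first if first > second else second
```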
3.7 Design Issues for Functions
 Are side effects allowed?
– Parameters should always be in-mode to reduce side effects (as in Ada)
 What types of return values are allowed?
– Most imperative languages restrict the return types
– C allows any type except arrays and functions
– C++ is like C but also allows user-defined types
– Ada allows any type
– Java and C# do not have functions but methods can have any type

3.8 Semantics of Calls and Returns

Kindly refer Written Notes

3.9 Implementing Simple SubPrograms

Call Semantics:

1. Save the execution status of the caller

2. Carry out the parameter-passing process

3. Pass the return address

4. Transfer control to the callee

Return Semantics:

1. If pass-by-value-result parameters are used, move the current values of those parameters to
their corresponding actual parameters

2. If it is a function, move the functional value to a place the caller can get it

3. Restore the execution status of the caller


4. Transfer control back to the caller

Required Storage:

- Status information of the caller, parameters, return address, and functional value (if it is a
function)

- The format, or layout, of the noncode part of an executing subprogram is called an


activation record
- An activation record instance is a concrete example of an activation record (the collection
of data for a particular subprogram activation)
Diagram refer in written Notes

3.10 Stack-Dynamic Local Variables

- Parameters are often passed by two methods

- Local variables are often dynamically allocated

- Recursion must be supported

- Static scoping must be supported

A typical activation record for an ALGOL-like language: ( Diagram Refer Notes)

- The activation record format is static, but its size may be dynamic

- The static link points to the bottom of the activation record instance of an activation of the
static parent (used for access to nonlocal vars)

- The dynamic link points to the top of an instance of the activation record of the caller

- An activation record instance is dynamically created when a subprogram is called

- An example program:

program MAIN_1;

var P : real;

procedure A(X : integer);

var Y : boolean;

procedure C(Q : boolean);

begin { C }
... <------------------------3

end; { C }

begin { A }

... <--------------------------2

C(Y);

...

end; { A }

procedure B(R : real);

var S, T : integer;

begin { B }

... <------------------------1

A(S);

...

end; { B }

begin { MAIN_1 }

B(P);

end. { MAIN_1 }

Note that: MAIN_1 calls B

B calls A

A calls C

- The collection of dynamic links in the stack at a given time is called the dynamic chain, or call
chain

- Local variables can be accessed by their offset from the beginning of the activation record. This
offset is called the local_offset

- The local_offset of a local variable can be determined by the compiler


- Assuming all stack positions are the same size, the first local variable declared has an offset of
three plus the number of parameters e.g. In MAIN_1, the local_offset of Y in A is 4

- The activation record used in the previous example supports recursion

e.g. int factorial(int n) {
         <-----------------------------1
         if (n <= 1)
             return 1;
         else
             return (n * factorial(n - 1));
         <-----------------------------2
     }

     void main() {
         int value;
         value = factorial(3);
         <-----------------------------3
     }

Kindly refer notes for Diagram

3.11 Nested Subprogram

 All variables that can be nonlocally accessed reside in some activation record instance
in the stack
 The process of locating a nonlocal reference:
1. Find the correct activation record instance
2. Determine the correct offset within that activation record instance
Finding the offset is easy
Finding the correct activation record instance:
- Static semantic rules guarantee that all nonlocal variables that can be referenced have
been allocated in some activation record instance that is on the stack when the reference is made
Technique 1 - Static Chains
 A static chain is a chain of static links that connects certain activation record instances
 The static link in an activation record instance for subprogram A points to one of the
activation
 record instances of A's static parent
 The static chain from an activation record instance connects it to all of its static ancestors
 To find the declaration for a reference to a nonlocal variable:
 You could chase the static chain until the activation record instance (ari) that has the
variable is found, searching each ari as it is found
Def: Static_depth is an integer associated with a static scope whose value is the depth of
nesting of that scope
Kindly Refer diagram in written Notes
 Def: The chain_offset or nesting_depth of a nonlocal reference is the difference between
the static_depth of the reference and that of the scope where it is declared
 A reference can be represented by the pair: (chain_offset, local_offset) where local_offset
is the offset in the activation record of the variable being referenced
Example Program:
program MAIN_2;
var X : integer;
procedure BIGSUB;
var A, B, C : integer;
procedure SUB1;
var A, D : integer;
begin { SUB1 }
A := B + C; <-----------------------1
end; { SUB1 }
procedure SUB2(X : integer);
var B, E : integer;
procedure SUB3;
var C, E : integer;
begin { SUB3 }
SUB1;
E := B + A; <--------------------2
end; { SUB3 }
begin { SUB2 }
SUB3;
A := D + E; <-----------------------3
end; { SUB2 }
begin { BIGSUB }
SUB2(7);
end; { BIGSUB }
begin
BIGSUB;
end. { MAIN_2 }

Call sequence for MAIN_2


MAIN_2 calls BIGSUB
BIGSUB calls SUB2
SUB2 calls SUB3
SUB3 calls SUB1
At position 1 in SUB1:
A - (0, 3)
B - (1, 4)
C - (1, 5)
At position 2 in SUB3:
E - (0, 4)
B - (1, 4)
A - (2, 3)
At position 3 in SUB2:
A - (1, 3)
D - an error
E - (0, 5)
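The (chain_offset, local_offset) pairs listed above can be checked with a small sketch: follow chain_offset static links, then index by local_offset. The ARI class and slot contents are illustrative only:

```python
class ARI:
    def __init__(self, static_link, slots):
        self.static_link = static_link   # ari of the static parent
        self.slots = slots               # local_offset -> variable

def lookup(ari, chain_offset, local_offset):
    for _ in range(chain_offset):        # chase the static chain
        ari = ari.static_link
    return ari.slots[local_offset]

# stack for MAIN_2 at position 1: BIGSUB's ari is SUB1's static parent
bigsub = ARI(None, {3: "A@BIGSUB", 4: "B@BIGSUB", 5: "C@BIGSUB"})
sub1 = ARI(bigsub, {3: "A@SUB1", 4: "D@SUB1"})
```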
Static Chain Maintenance
At the call (assume there are no parameters that are subprograms and no pass-by-name
parameters):
- The activation record instance must be built
- The dynamic link is just the old stack top pointer
- The static link must point to the most recent ari of the static parent (in most situations)
Two Methods:
1. Search the dynamic chain until the first ari for the static parent is found--easy, but slow
2. Treat procedure calls and definitions like variable references and definitions (have the
compiler compute the nesting depth, or number of enclosing scopes between the caller
and the procedure that declared the called procedure; store this nesting depth and send it
with the call)
e.g. Look at MAIN_2 At the call to SUB1 in SUB3, this nesting depth is 2, which is sent to
SUB1 with the call. The static link in the new ari for SUB1 is set to point to the ari that is
pointed to by the second static link in the static chain from the ari for SUB3
Evaluation of the Static Chain Method Problems:
1. A nonlocal reference is slow if the number of scopes between the reference and the
declaration of the referenced variable is large
2. Time-critical code is difficult, because the costs of nonlocal references are not equal,
and can change with code upgrades
Technique 2 - Displays
The idea: Put the static links in a separate stack called a display. The entries in the display
are pointers to the ari's that have the variables in the referencing environment.
Represent references as
(display_offset, local_offset)
where display_offset is the same as chain_offset
Mechanics of references:
- Use the display_offset to get the pointer into the display to the ari with the variable
- Use the local_offset to get to the variable within the ari
Display maintenance (assuming no parameters that are subprograms and no pass-by-name
parameters)
- Note that display_offset depends only on the static depth of the procedure whose ari is being
built. It is exactly the static_depth of the procedure.
- There are k+1 entries in the display, where k is the static depth of the currently executing unit
(k=0 is for the main program)
- For a call to procedure P with a static_depth of k:
a. Save, in the new ari, a copy of the display pointer at position k
b. Put the link to the ari for P at position k in the display
At an exit, move the saved display pointer from the ari back into the display at position k
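The save/restore protocol above can be simulated in a few lines. The sketch below is illustrative Java with hypothetical names (Frame, Display); an ari is reduced to a bare frame holding only the saved display entry:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative simulation of display maintenance (names hypothetical).
class Frame {
    final String procName;
    Frame savedDisplayEntry;                 // copy of display[k] saved at call time
    Frame(String procName) { this.procName = procName; }
}

class Display {
    private final Frame[] display = new Frame[10];         // display[k] = ari at static depth k
    private final Deque<Frame> stack = new ArrayDeque<>(); // run-time stack of ari's

    void call(String proc, int k) {          // call a procedure whose static_depth is k
        Frame ari = new Frame(proc);
        ari.savedDisplayEntry = display[k];  // (a) save the old display pointer at position k
        display[k] = ari;                    // (b) put the link to the new ari at position k
        stack.push(ari);
    }

    void exit(int k) {                       // exit the procedure at static_depth k
        Frame ari = stack.pop();
        display[k] = ari.savedDisplayEntry;  // move the saved pointer back into the display
    }

    String visibleAt(int depth) {            // whose variables does display_offset = depth reach?
        return display[depth] == null ? null : display[depth].procName;
    }
}
```

When a deep procedure calls one at a shallower static depth k, the old entry at position k is saved in the new ari and restored at the exit, so the referencing environment is correct in all three Qsd/Psd cases.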
To see that this works:
- Let Psd be the static_depth of P, and Qsd be the static_depth of Q
- Assume Q calls P
- There are three possible cases:
1. Qsd = Psd
2. Qsd < Psd
3. Qsd > Psd
Example skeletal program:
program MAIN_3;
  procedure BIGSUB;
    procedure SUB1;
    end; { SUB1 }
    procedure SUB2;
      procedure SUB3;
      end; { SUB3 }
    end; { SUB2 }
  end; { BIGSUB }
end. { MAIN_3 }
MAIN_3 can illustrate all three cases
Refer to the diagram in the written notes.
Comparing the Static Chain and Display Methods
1. References to locals
- Not much difference
2. References to nonlocals
- If it is one level away, they are equal
- If it is farther away, the display is faster
- Display is better for time-critical code, because all nonlocal references cost the same
3. Procedure calls
- For one or two levels of depth, static chain is faster
- Otherwise, the display is faster
4. Procedure returns
- Both have fixed time, but the static chain is slightly faster
Overall: Static chain is better, unless the display can be kept in registers
3.12 Blocks
Two Methods:
1. Treat blocks as parameterless subprograms
- Use activation records
2. Allocate locals on top of the ari of the subprogram
- Must use a different method to access locals
- A little more work for the compiler writer
3.13 Dynamic Scoping
1. Deep Access - nonlocal references are found by searching the activation record instances on
the dynamic chain
- Length of chain cannot be statically determined
- Every activation record instance must have variable names
2. Shallow Access - put locals in a central place
Methods:
a. One stack for each variable name (see p. 403)
b. Central table with an entry for each variable name
Implementing Parameters that are Subprogram Names (deep binding)
1. Static chain
- Compiler simply passes the link to the static parent of the parameter, along with the
parameter
2. Display
- All pointers to static ancestors must be saved, because none are necessarily in the
environment of the parameter
- In many implementations, the whole display is saved for calls that pass subprogram
parameters
UNIT – IV

OBJECT-ORIENTATION, CONCURRENCY, AND EVENT HANDLING


Object-orientation – design issues for OOP languages – implementation of object-oriented
constructs – concurrency – semaphores – monitors – message passing – threads –statement level
concurrency – exception handling – event handling
4.1 Object-Oriented Programming
 Abstract data types
 Inheritance
– Inheritance is the central theme in OOP and languages that support it
 Polymorphism
Inheritance
 Productivity increases can come from reuse
– ADTs are difficult to reuse—always need changes
– All ADTs are independent and at the same level
 Inheritance allows new classes defined in terms of existing ones, i.e., by allowing them to
inherit common parts
 Inheritance addresses both of the above concerns--reuse ADTs after minor changes and
define classes in a hierarchy
Object-Oriented Concepts
 ADTs are usually called classes
 Class instances are called objects
 A class that inherits is a derived class or a subclass
 The class from which another class inherits is a parent class or superclass
 Subprograms that define operations on objects are called methods
 Calls to methods are called messages
 The entire collection of methods of an object is called its message protocol or message
interface
 Messages have two parts--a method name and the destination object
 In the simplest case, a class inherits all of the entities of its parent
 Inheritance can be complicated by access controls to encapsulated entities
 A class can hide entities from its subclasses
 A class can hide entities from its clients
 A class can also hide entities from its clients while allowing its subclasses to
see them
 Besides inheriting methods as is, a class can modify an inherited method
 The new one overrides the inherited one
 The method in the parent is overridden
 There are two kinds of variables in a class:
 Class variables - one/class
 Instance variables - one/object
 There are two kinds of methods in a class:
 Class methods – accept messages to the class
 Instance methods – accept messages to objects
 Single vs. Multiple Inheritance
 One disadvantage of inheritance for reuse:
 Creates interdependencies among classes that complicate maintenance
Dynamic Binding
 A polymorphic variable can be defined in a class that is able to reference (or point to)
objects of the class and objects of any of its descendants
 When a class hierarchy includes classes that override methods and such methods are
called through a polymorphic variable, the binding to the correct method will be dynamic
 Allows software systems to be more easily extended during both development and
maintenance
Dynamic Binding Concepts
 An abstract method is one that does not include a definition (it only defines a protocol)
 An abstract class is one that includes at least one abstract method
 An abstract class cannot be instantiated
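In Java terms these concepts look like the sketch below (class names are hypothetical). Shape defines only a protocol for area and cannot be instantiated; a polymorphic Shape variable is bound dynamically to the overriding method of whatever object it references:

```java
// Abstract class: defines a protocol only; cannot be instantiated.
abstract class Shape {
    abstract double area();              // abstract method: no definition here
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    @Override double area() { return Math.PI * r * r; }   // overrides the protocol
}

class Square extends Shape {
    private final double s;
    Square(double s) { this.s = s; }
    @Override double area() { return s * s; }
}
```

A call such as `sh.area()` through a variable declared as `Shape sh` is dispatched at run time to Circle's or Square's method; `new Shape()` is rejected by the compiler.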
4.2 Design Issues for OOP Languages
 The Exclusivity of Objects
 Are Subclasses Subtypes?
 Type Checking and Polymorphism
 Single and Multiple Inheritance
 Object Allocation and Deallocation
 Dynamic and Static Binding
 Nested Classes
The Exclusivity of Objects
 Everything is an object
– Advantage - elegance and purity
– Disadvantage - slow operations on simple objects
 Add objects to a complete typing system
– Advantage - fast operations on simple objects
– Disadvantage - results in a confusing type system (two kinds of entities)
 Include an imperative-style typing system for primitives but make everything else objects
– Advantage - fast operations on simple objects and a relatively small typing
system
– Disadvantage - still some confusion because of the two type systems
Are Subclasses Subtypes?
 Does an “is-a” relationship hold between a parent class object and an object of the
subclass?
– If a derived class is-a parent class, then objects of the derived class must
behave the same as the parent class object
 A derived class is a subtype if it has an is-a relationship with its parent class
– Subclass can only add variables and methods and override inherited
methods in “compatible” ways
Type Checking and Polymorphism
 Polymorphism may require dynamic type checking of parameters and the return value
– Dynamic type checking is costly and delays error detection
 If overriding methods are restricted to having the same parameter types and return type,
the checking can be static
Single and Multiple Inheritance
 Multiple inheritance allows a new class to inherit from two or more classes
 Disadvantages of multiple inheritance:
– Language and implementation complexity (in part due to name collisions)
– Potential inefficiency - dynamic binding costs more with multiple
inheritance (but not much)
 Advantage:
– Sometimes it is quite convenient and valuable
Allocation and Deallocation of Objects
 From where are objects allocated?
– If they behave like ADTs, they can be allocated from anywhere
 Allocated from the run-time stack
 Explicitly create on the heap (via new)
– If they are all heap-dynamic, references can be uniform through a pointer or
reference variable
 Simplifies assignment - dereferencing can be implicit
– If objects are stack dynamic, there is a problem with regard to subtypes
 Is deallocation explicit or implicit?
Dynamic and Static Binding
 Should all binding of messages to methods be dynamic?
– If none are, you lose the advantages of dynamic binding
– If all are, it is inefficient
 Allow the user to specify
Nested Classes
 If a new class is needed by only one class, there is no reason to define it so it can
be seen by other classes
– Can the new class be nested inside the class that uses it?
– In some cases, the new class is nested inside a subprogram rather than
directly in another class
 Other issues:
– Which facilities of the nesting class should be visible to the nested class
and vice versa
4.3 Implementing OOP Constructs
Two interesting and challenging parts
– Storage structures for instance variables
– Dynamic binding of messages to methods
Instance Data Storage
 Class instance records (CIRs) store the state of an object
– Static (built at compile time)
 If a class has a parent, the subclass instance variables are added to the parent CIR
 Because CIR is static, access to all instance variables is done as it is in records
– Efficient
Dynamic Binding of Methods Calls
 Methods in a class that are statically bound need not be involved in the CIR; methods that
will be dynamically bound must have entries in the CIR
– Calls to dynamically bound methods can be connected to the corresponding
code through a pointer in the CIR
– The storage structure is sometimes called virtual method tables (vtable)
– Method calls can be represented as offsets from the beginning of the vtable
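The vtable idea can be modeled directly: each object carries a table of code pointers, and a call site uses a fixed offset into that table. The Java sketch below is illustrative only (VTableDemo, Method, and the draw/erase names are all hypothetical):

```java
// Illustrative model of a vtable: dynamically bound methods are entries in an
// array; a call site indexes whichever table the object carries.
class VTableDemo {
    interface Method { String invoke(); }   // one vtable entry's code pointer

    static final int DRAW = 0, ERASE = 1;   // method offsets, fixed at compile time

    static class Obj {
        final Method[] vtable;              // each object points at its class's vtable
        Obj(Method[] vtable) { this.vtable = vtable; }
        String call(int offset) { return vtable[offset].invoke(); }
    }

    static Obj makeParent() {
        return new Obj(new Method[] { () -> "Parent.draw", () -> "Parent.erase" });
    }

    static Obj makeChild() {                // child overrides draw: same offset,
        return new Obj(new Method[] {       // different code pointer; erase is inherited
            () -> "Child.draw", () -> "Parent.erase" });
    }
}
```

The call site compiles to the same offset (DRAW) regardless of the object's class; which code runs is decided by the table the object carries, which is exactly how dynamic binding stays constant-time.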
4.4 Concurrency
 Concurrency can occur at four levels:
– Machine instruction level
– High-level language statement level
– Unit level
– Program level
 Because there are no language issues in instruction- and program-level concurrency, they
are not addressed here
Multiprocessor Architectures
 Late 1950s - one general-purpose processor and one or more special purpose processors
for input and output operations
 Early 1960s - multiple complete processors, used for program-level concurrency
 Mid-1960s - multiple partial processors, used for instruction-level concurrency
 Single-Instruction Multiple-Data (SIMD) machines
 Multiple-Instruction Multiple-Data (MIMD) machines
– Independent processors that can be synchronized (unit-level concurrency)
Categories of Concurrency
 A thread of control in a program is the sequence of program points reached as
control flows through the program
 Categories of Concurrency:
– Physical concurrency - Multiple independent processors (multiple threads
of control)
– Logical concurrency - The appearance of physical concurrency is
presented by time-sharing one processor (software can be designed as if
there were multiple threads of control)
 Coroutines (quasi-concurrency) have a single thread of control
Motivations for Studying Concurrency
 Involves a different way of designing software that can be very useful— many real-world
situations involve concurrency
 Multiprocessor computers capable of physical concurrency are now widely used
Subprogram-Level Concurrency
 A task or process is a program unit that can be in concurrent execution with other
program units
 Tasks differ from ordinary subprograms in that:
– A task may be implicitly started
– When a program unit starts the execution of a task, it is not necessarily
suspended
– When a task‘s execution is completed, control may not return to the caller
 Tasks usually work together
Two General Categories of Tasks
 Heavyweight tasks execute in their own address space
 Lightweight tasks all run in the same address space
 A task is disjoint if it does not communicate with or affect the execution of any other task
in the program in any way
Task Synchronization
 A mechanism that controls the order in which tasks execute
 Two kinds of synchronization
– Cooperation synchronization
– Competition synchronization
 Task communication is necessary for synchronization, provided by:
– Shared nonlocal variables
– Parameters
– Message passing
Kinds of synchronization
 Cooperation: Task A must wait for task B to complete some specific activity before task
A can continue its execution, e.g., the producer-consumer problem
 Competition: Two or more tasks must use some resource that cannot be simultaneously
used,
e.g., a shared counter
– Competition is usually provided by mutually exclusive access (approaches are
discussed later)
Need for Competition Synchronization
Scheduler
 Providing synchronization requires a mechanism for delaying task execution
 Task execution control is maintained by a program called the scheduler, which maps task
execution onto available processors
Task Execution States
 New - created but not yet started
 Ready - ready to run but not currently running (no available processor)
 Running
 Blocked - has been running, but cannot now continue (usually waiting for some event to
occur)
 Dead - no longer active in any sense
Liveness and Deadlock
 Liveness is a characteristic that a program unit may or may not have
– In sequential code, it means the unit will eventually complete its execution
 In a concurrent environment, a task can easily lose its liveness
 If all tasks in a concurrent environment lose their liveness, it is called deadlock
Design Issues for Concurrency
 Competition and cooperation synchronization
 Controlling task scheduling
 How and when tasks start and end execution
 How and when are tasks created
Methods of Providing Synchronization
 Semaphores
 Monitors
 Message Passing
4.5 Semaphores
 Dijkstra - 1965
 A semaphore is a data structure consisting of a counter and a queue for storing task
descriptors
 Semaphores can be used to implement guards on the code that accesses shared data
structures
 Semaphores have only two operations, wait and release (originally called P and V by
Dijkstra)
 Semaphores can be used to provide both competition and cooperation synchronization
Cooperation Synchronization with Semaphores
 Example: A shared buffer
 The buffer is implemented as an ADT with the operations DEPOSIT and FETCH as the
only ways to access the buffer
 Use two semaphores for cooperation: emptyspots and fullspots
 The semaphore counters are used to store the numbers of empty spots and full spots in the
buffer
 DEPOSIT must first check emptyspots to see if there is room in the buffer
 If there is room, the counter of emptyspots is decremented and the value is inserted
 If there is no room, the caller is stored in the queue of emptyspots
 When DEPOSIT is finished, it must increment the counter of fullspots
 FETCH must first check fullspots to see if there is a value
– If there is a full spot, the counter of fullspots is decremented and the value
is removed
– If there are no values in the buffer, the caller must be placed in the queue
of fullspots
– When FETCH is finished, it increments the counter of emptyspots
 The operations of FETCH and DEPOSIT on the semaphores are accomplished through
two semaphore operations named wait and release
Semaphores: Wait Operation
wait(aSemaphore)
if aSemaphore‘s counter > 0 then
decrement aSemaphore‘s counter
else
put the caller in aSemaphore‘s queue
attempt to transfer control to a ready task
-- if the task ready queue is empty,
-- deadlock occurs
end
Semaphores: Release Operation
release(aSemaphore)
if aSemaphore‘s queue is empty then
increment aSemaphore‘s counter
else
put the calling task in the task ready queue
transfer control to a task from aSemaphore‘s queue
end
Producer Consumer Code
semaphore fullspots, emptyspots;
fullspots.count = 0;
emptyspots.count = BUFLEN;
task producer;
loop
-- produce VALUE --
wait (emptyspots); {wait for space}
DEPOSIT(VALUE);
release(fullspots); {increase filled}
end loop;
end producer;
task consumer;
loop
wait (fullspots); {wait till not empty}
FETCH(VALUE);
release(emptyspots); {increase empty}
-- consume VALUE --
end loop;
end consumer;
Competition Synchronization with Semaphores
 A third semaphore, named access, is used to control access (competition synchronization)
– The counter of access will only have the values 0 and 1
– Such a semaphore is called a binary semaphore
 Note that wait and release must be atomic!
Producer Consumer Code
semaphore access, fullspots, emptyspots;
access.count = 1;
fullspots.count = 0;
emptyspots.count = BUFLEN;
task producer;
loop
-- produce VALUE --
wait(emptyspots); {wait for space}
wait(access); {wait for access}
DEPOSIT(VALUE);
release(access); {relinquish access}
release(fullspots); {increase filled}
end loop;
end producer;
task consumer;
loop
wait(fullspots);{wait till not empty}
wait(access); {wait for access}
FETCH(VALUE);
release(access); {relinquish access}
release(emptyspots); {increase empty}
-- consume VALUE --
end loop;
end consumer;
Evaluation of Semaphores
 Misuse of semaphores can cause failures in cooperation synchronization,
e.g., the buffer will overflow if the wait of fullspots is left out
 Misuse of semaphores can cause failures in competition synchronization,
e.g., the program will deadlock if the release of access is left out
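The pseudocode above maps directly onto library semaphores. A minimal single-class sketch using java.util.concurrent.Semaphore (Buffer, BUFLEN, and the method names mirror the pseudocode but are otherwise hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Semaphore;

// Producer-consumer with three semaphores, mirroring the pseudocode:
// emptyspots/fullspots give cooperation synchronization,
// access (a binary semaphore, initialized to 1) gives competition synchronization.
class Buffer {
    static final int BUFLEN = 4;
    private final Queue<Integer> buf = new ArrayDeque<>();
    private final Semaphore emptyspots = new Semaphore(BUFLEN);
    private final Semaphore fullspots  = new Semaphore(0);
    private final Semaphore access     = new Semaphore(1);

    void deposit(int value) throws InterruptedException {
        emptyspots.acquire();     // wait for space
        access.acquire();         // wait for exclusive access
        buf.add(value);
        access.release();         // relinquish access
        fullspots.release();      // one more full spot
    }

    int fetch() throws InterruptedException {
        fullspots.acquire();      // wait till not empty
        access.acquire();
        int v = buf.remove();
        access.release();
        emptyspots.release();     // one more empty spot
        return v;
    }
}
```

Note that access starts at 1, not 0; starting it at 0 would block the first acquire forever, which is exactly the kind of deadlock-by-misuse the evaluation above warns about.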
4.6 Monitors
 Ada, Java, C#
 The idea: encapsulate the shared data and its operations to restrict access
 A monitor is an abstract data type for shared data
Competition Synchronization
 Shared data is resident in the monitor (rather than in the client units)
 All access is resident in the monitor
– The monitor implementation guarantees synchronized access by allowing only one access at
a time
– Calls to monitor procedures are implicitly queued if the monitor is busy at the time of
the call
Cooperation Synchronization
 Cooperation between processes is still a programming task
– Programmer must guarantee that a shared buffer does not experience underflow or
overflow
Evaluation of Monitors
 A better way to provide competition synchronization than are semaphores
 Semaphores can be used to implement monitors
 Monitors can be used to implement semaphores
 Support for cooperation synchronization is very similar to that of semaphores, so it has
the same problems
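Java's synchronized methods give a monitor directly: the shared data lives inside the class, and only one thread at a time can be active in it. A minimal sketch (class name hypothetical):

```java
// A monitor: shared data (count) encapsulated with synchronized operations.
// Competition synchronization is implicit; callers are queued while the
// monitor is busy.
class CounterMonitor {
    private int count = 0;

    synchronized void increment() { count++; }   // mutually exclusive
    synchronized int value() { return count; }
}
```

Without the synchronized modifier, two threads incrementing concurrently could lose updates (count++ is a read-modify-write); with it, the monitor serializes every access.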
4.7 Message Passing
 Message passing is a general model for concurrency
– It can model both semaphores and monitors
– It is not just for competition synchronization
 Central idea: task communication is like seeing a doctor--most of the time she waits for
you or you wait for her, but when you are both ready, you get together, or rendezvous
Message Passing Rendezvous
 To support concurrent tasks with message passing, a language needs:
– A mechanism to allow a task to indicate when it is willing to accept
messages
– A way to remember who is waiting to have its message accepted and some
“fair” way of choosing the next message
 When a sender task‘s message is accepted by a receiver task, the actual message
transmission is called a rendezvous
Ada Support for Concurrency
 The Ada 83 Message-Passing Model
– Ada tasks have specification and body parts, like packages; the spec has the interface,
which is the collection of entry points:
task Task_Example is
entry ENTRY_1 (Item : in Integer);
end Task_Example;
Task Body
 The body task describes the action that takes place when a rendezvous occurs
 A task that sends a message is suspended while waiting for the message to be accepted
and during the rendezvous
 Entry points in the spec are described with accept clauses in the body:
accept entry_name (formal parameters) do
...
end entry_name;
Example of a Task Body
task body Task_Example is
begin
loop
accept Entry_1 (Item: in Float) do
...
end Entry_1;
end loop;
end Task_Example;
Ada Message Passing Semantics
 The task executes to the top of the accept clause and waits for a message
 During execution of the accept clause, the sender is suspended
 accept parameters can transmit information in either or both directions
 Every accept clause has an associated queue to store waiting messages
Rendezvous Time Lines
Message Passing: Server/Actor Tasks
 A task that has accept clauses, but no other code is called a server task (the example
above is a server task)
 A task without accept clauses is called an actor task
– An actor task can send messages to other tasks
– Note: A sender must know the entry name of the receiver, but not vice versa
(asymmetric)
Graphical Representation of a Rendezvous
Example: Actor Task
task Water_Monitor; -- specification
task body Water_Monitor is -- body
begin
loop
if Water_Level > Max_Level
then Sound_Alarm;
end if;
delay 1.0; -- No further execution
-- for at least 1 second
end loop;
end Water_Monitor;
Multiple Entry Points
 Tasks can have more than one entry point
– The task specification has an entry clause for each
– The task body has an accept clause for each entry clause, placed in a select clause,
which is in a loop
A Task with Multiple Entries
task body Teller is
loop
select
accept Drive_Up(formal params) do
...
end Drive_Up;
...
or
accept Walk_Up(formal params) do
...
end Walk_Up;
...
end select;
end loop;
end Teller;
Semantics of Tasks with Multiple accept Clauses
 If exactly one entry queue is nonempty, choose a message from it
 If more than one entry queue is nonempty, choose one, nondeterministically, from which
to accept a message
 If all are empty, wait
 The construct is often called a selective wait
 Extended accept clause - code following the clause, but before the next clause
– Executed concurrently with the caller
Cooperation Synchronization with Message Passing
 Provided by Guarded accept clauses
when not Full(Buffer) =>
accept Deposit (New_Value) do
 An accept clause with a when clause is either open or closed
– A clause whose guard is true is called open
– A clause whose guard is false is called closed
– A clause without a guard is always open
Semantics of select with Guarded accept Clauses:
 select first checks the guards on all clauses
 If exactly one is open, its queue is checked for messages
 If more than one are open, non-deterministically choose a queue among them to check for
messages
 If all are closed, it is a runtime error
 A select clause can include an else clause to avoid the error
– When the else clause completes, the loop repeats
Example of a Task with Guarded accept Clauses
Note: The station may be out of gas and there may or may not be a position
available in the garage
task Gas_Station_Attendant is
entry Service_Island (Car : Car_Type);
entry Garage (Car : Car_Type);
end Gas_Station_Attendant;
Example of a Task with Guarded accept Clauses
task body Gas_Station_Attendant is
begin
loop
select
when Gas_Available =>
accept Service_Island (Car : Car_Type) do
Fill_With_Gas (Car);
end Service_Island;
or
when Garage_Available =>
accept Garage (Car : Car_Type) do
Fix (Car);
end Garage;
else
Sleep;
end select;
end loop;
end Gas_Station_Attendant;
Competition Synchronization with Message Passing
 Modeling mutually exclusive access to shared data
 Example--a shared buffer
 Encapsulate the buffer and its operations in a task
 Competition synchronization is implicit in the semantics of accept clauses
– Only one accept clause in a task can be active at any given time
Task Termination
 The execution of a task is completed if control has reached the end of its code body
 If a task has created no dependent tasks and is completed, it is terminated
 If a task has created dependent tasks and is completed, it is not terminated until all its
dependent tasks are terminated
The terminate Clause
 A terminate clause in a select is just a terminate statement
 A terminate clause is selected when no accept clause is open
 When a terminate is selected in a task, the task is terminated only when its master and all
of the dependents of its master are either completed or are waiting at a terminate
 A block or subprogram is not left until all of its dependent tasks are terminated
Message Passing Priorities
 The priority of any task can be set with the priority pragma:
pragma Priority (expression);
 The priority of a task applies to it only when it is in the task ready queue
Binary Semaphores
For situations where the data to which access is to be controlled is NOT encapsulated in a task
task Binary_Semaphore is
entry Wait;
entry Release;
end Binary_Semaphore;
task body Binary_Semaphore is
begin
loop
accept Wait;
accept Release;
end loop;
end Binary_Semaphore;
Concurrency in Ada 95
 Ada 95 includes Ada 83 features for concurrency, plus two new features
– Protected objects: A more efficient way of implementing shared data to allow access to
a shared data structure to be done without rendezvous
– Asynchronous communication
Ada 95: Protected Objects
 A protected object is similar to an abstract data type
 Access to a protected object is either through messages passed to entries, as with a task,
or through protected subprograms
 A protected procedure provides mutually exclusive read-write access to protected objects
 A protected function provides concurrent read-only access to protected objects
Asynchronous Communication
 Provided through asynchronous select structures
 An asynchronous select has two triggering alternatives, an entry clause or a delay
– The entry clause is triggered when sent a message
– The delay clause is triggered when its time limit is reached
Evaluation of the Ada
 Message passing model of concurrency is powerful and general
 Protected objects are a better way to provide synchronized shared data
 In the absence of distributed processors, the choice between monitors and tasks with
message passing is somewhat a matter of taste
 For distributed systems, message passing is a better model for concurrency
4.8 Java Threads
The concurrent units in Java are methods named run
– The code of a run method can be in concurrent execution with other such methods
– The process in which the run methods execute is called a thread
class MyThread extends Thread {
public void run () {…}
}
…
Thread myTh = new MyThread();
myTh.start();
Controlling Thread Execution
 The Thread class has several methods to control the execution of threads
– The yield is a request from the running thread to voluntarily surrender the processor
– The sleep method can be used by the caller of the method to block the thread
– The join method is used to force a method to delay its execution until the run method of
another thread has completed its execution
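The start/join protocol described above, as a runnable sketch (Worker and its result field are hypothetical):

```java
// Creating, starting, and joining a thread: join makes the caller wait
// until the thread's run method has completed.
class Worker extends Thread {
    volatile long result;                 // written by the worker, read after join

    @Override public void run() {
        long sum = 0;
        for (int i = 1; i <= 100; i++) sum += i;   // some concurrent work
        result = sum;
    }
}
```

The caller does `w.start(); w.join();` and only then reads result; reading it before join would race with the worker's write.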
Thread Priorities
 A thread’s default priority is the same as the thread that created it
– If main creates a thread, its default priority is NORM_PRIORITY
 The Thread class defines two other priority constants, MAX_PRIORITY and MIN_PRIORITY
 The priority of a thread can be changed with the method setPriority
Competition Synchronization with Java Threads
 A method that includes the synchronized modifier disallows any other method from
running on the object while it is in execution

public synchronized void deposit( int i) {…}
public synchronized int fetch() {…}

 The above two methods are synchronized which prevents them from interfering with each
other
 If only a part of a method must be run without interference, it can be synchronized
through a synchronized statement
synchronized (expression)
statement
Cooperation Synchronization with Java Threads
 Cooperation synchronization in Java is achieved via wait, notify, and notifyAll methods
– All methods are defined in Object, which is the root class in Java, so all
objects inherit them
 The wait method must be called in a loop
 The notify method is called to tell one waiting thread that the event it was waiting
for has happened
 The notifyAll method awakens all of the threads on the object‘s wait list
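The wait-in-a-loop rule above looks like this in a minimal one-slot buffer (Slot and its method names are hypothetical; a real buffer would hold more than one value):

```java
// Cooperation synchronization with wait/notifyAll: a one-slot buffer.
class Slot {
    private Integer value = null;

    synchronized void put(int v) throws InterruptedException {
        while (value != null)   // wait must be called in a loop:
            wait();             // re-check the condition after every wakeup
        value = v;
        notifyAll();            // wake threads waiting in take()
    }

    synchronized int take() throws InterruptedException {
        while (value == null)
            wait();
        int v = value;
        value = null;
        notifyAll();            // wake threads waiting in put()
        return v;
    }
}
```

The while (not if) guard matters because notifyAll wakes every waiter, and another thread may consume the slot before a wakened thread reacquires the lock.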
Java’s Thread Evaluation
 Java‘s support for concurrency is relatively simple but effective
 Not as powerful as Ada‘s tasks
C# Threads
 Loosely based on Java but there are significant differences
 Basic thread operations
– Any method can run in its own thread
– A thread is created by creating a Thread object
– Creating a thread does not start its concurrent execution; it must be requested through
the Start method
– A thread can be made to wait for another thread to finish with Join
– A thread can be suspended with Sleep
– A thread can be terminated with Abort
Synchronizing Threads
 Three ways to synchronize C# threads
– The Interlocked class
 Used when the only operations that need to be synchronized are incrementing or
decrementing of an integer
– The lock statement
 Used to mark a critical section of code in a thread:
lock (expression) {… }
– The Monitor class
 Provides four methods that can be used to provide more sophisticated synchronization
C#’s Concurrency Evaluation
 An advance over Java threads, e.g., any method can run in its own thread
 Thread termination is cleaner than in Java
 Synchronization is more sophisticated
4.9 Statement-Level Concurrency
 Objective: Provide a mechanism that the programmer can use to inform the compiler of
ways it can map the program onto a multiprocessor architecture
 Minimize communication among processors and the memories of the other processors
High-Performance Fortran
 A collection of extensions that allow the programmer to provide information to the
compiler to help it optimize code for multiprocessor computers
 Specify the number of processors, the distribution of data over the memories of those
processors, and the alignment of data
Primary HPF Specifications
 Number of processors
!HPF$ PROCESSORS procs (n)
 Distribution of data
!HPF$ DISTRIBUTE (kind) ONTO procs :: identifier_list
kind can be BLOCK (distribute data to processors in blocks) or
CYCLIC (distribute data to processors one element at a time)
 Relate the distribution of one array with that of another
ALIGN array1_element WITH array2_element
Statement-Level Concurrency Example
REAL list_1(1000), list_2(1000)
INTEGER list_3(500), list_4(501)
!HPF$ PROCESSORS procs (10)
!HPF$ DISTRIBUTE (BLOCK) ONTO procs ::
list_1, list_2
!HPF$ ALIGN list_1(index) WITH
list_4 (index+1)

list_1 (index) = list_2(index)
list_3(index) = list_4(index+1)
 FORALL statement is used to specify a list of statements that may be executed
concurrently
FORALL (index = 1:1000)
list_1(index) = list_2(index)
 Specifies that all 1,000 RHSs of the assignments can be evaluated before any assignment
takes place
4.11 Exception Handling
 In a language without exception handling
– When an exception occurs, control goes to the operating system, where a
message is displayed and the program is terminated
 In a language with exception handling
– Programs are allowed to trap some exceptions, thereby providing the
possibility of fixing the problem and continuing
Basic Concepts
 Many languages allow programs to trap input/output errors (including EOF)
 An exception is any unusual event, either erroneous or not, detectable by either hardware
or software, that may require special processing
 The special processing that may be required after detection of an exception is called
exception handling
 The exception handling code unit is called an exception handler
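In Java notation (class and method names hypothetical), raising, trapping, and handling an exception looks like this; the handler "fixes" the problem so execution can continue:

```java
// An exception raised during execution is trapped by a handler, which
// supplies a fallback value instead of terminating the program.
class ExceptionDemo {
    static int parse(String s) {
        try {
            return Integer.parseInt(s);       // may raise NumberFormatException
        } catch (NumberFormatException e) {   // the exception handler
            return -1;                        // fix the problem and continue
        }
    }
}
```

Without the handler, the exception would propagate out of parse to its caller and, if never handled, terminate the program with a diagnostic.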
Exception Handling Alternatives
 An exception is raised when its associated event occurs
 A language that does not have exception handling capabilities can still define, detect,
raise, and handle exceptions (user defined, software detected)
 Alternatives:
– Send an auxiliary parameter or use the return value to indicate the return status
of a subprogram
– Pass an exception handling subprogram to all subprograms
Advantages of Built-in Exception Handling
 Error detection code is tedious to write and it clutters the program
 Exception handling encourages programmers to consider many different possible errors
 Exception propagation allows a high level of reuse of exception handling code
Design Issues
 How are user-defined exceptions specified?
 Should there be default exception handlers for programs that do not provide their own?
 Can built-in exceptions be explicitly raised?
 Are hardware-detectable errors treated as exceptions that can be handled?
 Are there any built-in exceptions?
 How can exceptions be disabled, if at all?
 How and where are exception handlers specified and what is their scope?
 How is an exception occurrence bound to an exception handler?
 Can information about the exception be passed to the handler?
 Where does execution continue, if at all, after an exception handler completes its
execution? (continuation vs. resumption)
 Is some form of finalization provided?
Exception Handling in C++
 Added to C++ in 1990
 Design is based on that of CLU, Ada, and ML
C++ Exception Handlers
Exception Handlers Form:
try {
-- code that is expected to raise an exception
}
catch (formal parameter) {
-- handler code
}
...
catch (formal parameter) {
-- handler code
}
The catch Function
 catch is the name of all handlers--it is an overloaded name, so the formal parameter of
each must be unique
 The formal parameter need not have a variable
– It can be simply a type name to distinguish the handler it is in from others
 The formal parameter can be used to transfer information to the handler
 The formal parameter can be an ellipsis, in which case it handles all exceptions not yet
handled
Throwing Exceptions
 Exceptions are all raised explicitly by the statement: throw [expression];
 The brackets are metasymbols
 A throw without an operand can only appear in a handler; when it appears, it simply re-
raises the exception, which is then handled elsewhere
 The type of the expression disambiguates the intended handler
Unhandled Exceptions
 An unhandled exception is propagated to the caller of the function in which it is raised
 This propagation continues to the main function
Continuation
 After a handler completes its execution, control flows to the first statement after the last
handler in the sequence of handlers of which it is an element
 Other design choices
– All exceptions are user-defined
– Exceptions are neither specified nor declared
– Functions can list the exceptions they may raise
– Without a specification, a function can raise any exception (the throw clause)
Evaluation
 It is odd that exceptions are not named and that hardware- and system software-
detectable exceptions cannot be handled
 Binding exceptions to handlers through the type of the parameter certainly does
not promote readability
Exception Handling in Java
 Based on that of C++, but more in line with OOP philosophy
 All exceptions are objects of classes that are descendants of the Throwable class
Classes of Exceptions
The Java library includes two subclasses of Throwable:
– Error
o Thrown by the Java interpreter for events such as heap overflow
o Never handled by user programs
– Exception
o User-defined exceptions are usually subclasses of this
o Has two predefined subclasses, IOException and RuntimeException
e.g., ArrayIndexOutOfBoundsException and NullPointerException
Java Exception Handlers
 Like those of C++, except every catch requires a named parameter and all parameters
must be descendants of Throwable
 Syntax of try clause is exactly that of C++
 Exceptions are thrown with throw, as in C++, but often the throw includes the new
operator to create the object, as in: throw new MyException();
Binding Exceptions to Handlers
 Binding an exception to a handler is simpler in Java than it is in C++
– An exception is bound to the first handler with a parameter that is of the same
class as the thrown object or an ancestor of it
 An exception can be handled and rethrown by including a throw in the handler (a handler
could also throw a different exception)
Continuation
 If no handler is found in the method, the exception is propagated to the method's caller
 If no handler is found (all the way to main), the program is terminated
 To ensure that all exceptions are caught, a handler can be included in any try construct
that catches all exceptions
– Simply use an Exception class parameter
– Of course, it must be the last in the try construct
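Java's binding rule can be sketched in Python, whose except clauses follow the same first-match, class-or-ancestor discipline; the exception classes below are Python built-ins chosen purely for illustration:

```python
def classify(exc):
    """Return the name of the first handler that matches, Java-style."""
    try:
        raise exc
    except ZeroDivisionError:
        return "zero-division handler"
    except ArithmeticError:
        # matches any descendant of ArithmeticError not caught above
        return "arithmetic handler"
    except Exception:
        # catch-all: must come last, like a handler with an Exception parameter
        return "catch-all handler"

print(classify(ZeroDivisionError()))   # first matching clause wins
print(classify(OverflowError()))       # caught by the ancestor-class handler
print(classify(ValueError()))          # falls through to the catch-all
```

As in Java, placing the catch-all clause first would make the more specific handlers unreachable.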
Checked and Unchecked Exceptions
 The Java throws clause is quite different from the throw clause of C++
 Exceptions of class Error and RuntimeException and all of their descendants are called
unchecked exceptions; all other exceptions are called checked exceptions
 Checked exceptions that may be thrown by a method must be either:
– Listed in the throws clause, or
– Handled in the method
Other Design Choices
 A method cannot declare more exceptions in its throws clause than the method it
overrides
 A method that calls a method that lists a particular checked exception in its throws clause
has three alternatives for dealing with that exception:
– Catch and handle the exception
– Catch the exception and throw an exception that is listed in its own throws clause
– Declare it in its throws clause and do not handle it
The finally Clause
 Can appear at the end of a try construct
 Form:
finally {..}
 Purpose: To specify code that is to be executed, regardless of what happens in the try
construct
Example
 A try construct with a finally clause can be used outside exception handling
try {
  for (index = 0; index < 100; index++) {
    …
    if (…) {
      return;
    } //** end of if
  } //** end of for loop
} //** end of try clause
finally {
  …
} //** end of try construct
Assertions
 Statements in the program declaring a boolean expression regarding the current state of
the computation
 When evaluated to true nothing happens
 When evaluated to false an AssertionError exception is thrown
 Can be disabled during runtime without program modification or recompilation
 Two forms
– assert condition;
– assert condition: expression;
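Python's assert mirrors both Java forms (a bare condition, and a condition with an explanatory expression), so the behavior described above can be sketched directly; note that, unlike Java (assertions off by default, enabled with -ea), Python enables them by default and disables them with -O:

```python
def average(values):
    # form 1: assert condition;
    assert isinstance(values, list)
    # form 2: assert condition: expression;  (Python spells it: assert condition, expression)
    assert len(values) > 0, "empty input"
    return sum(values) / len(values)

print(average([2, 4, 6]))          # both assertions hold: nothing happens

try:
    average([])                    # second assertion fails
except AssertionError as e:        # Java throws AssertionError here, too
    print("AssertionError:", e)
```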
Evaluation
 The types of exceptions make more sense than in the case of C++
 The throws clause is better than that of C++ (The throw clause in C++ says little to the
programmer)
 The finally clause is often useful
 The Java interpreter throws a variety of exceptions that can be handled by user programs
UNIT – V
FUNCTIONAL AND LOGIC PROGRAMMING LANGUAGES

Introduction to lambda calculus – fundamentals of functional programming languages –
Programming with Scheme – Programming with ML – Introduction to logic and logic
programming – Programming with Prolog – multi-paradigm languages

5.1 Introduction to Lambda Calculus


 The design of the imperative languages is based directly on the von Neumann
architecture
– Efficiency is the primary concern, rather than the suitability of the
language for software development
 The design of the functional languages is based on mathematical functions
– A solid theoretical basis that is also closer to the user, but relatively unconcerned with
the architecture of the machines on which programs will run
Mathematical Functions
 A mathematical function is a mapping of members of one set, called the domain set, to
another set, called the range set
 A lambda expression specifies the parameter(s) and the mapping of a function in the
following form
λ(x) x * x * x
for the function cube(x) = x * x * x
Lambda Expressions
 Lambda expressions describe nameless functions
 Lambda expressions are applied to parameter(s) by placing the parameter(s) after the
expression
e.g., (λ(x) x * x * x)(2)
which evaluates to 8
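The same nameless function and its application can be written directly in Python, whose lambda keyword comes from exactly this notation:

```python
# a nameless function, applied directly to a parameter
print((lambda x: x * x * x)(2))   # 8

# the nameless function can also be bound to a name afterwards
cube = lambda x: x * x * x
print(cube(3))                    # 27
```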
Functional Forms
 A higher-order function, or functional form, is one that either takes functions as
parameters or yields a function as its result, or both
Function Composition
 A functional form that takes two functions as parameters and yields a function
whose value is the first actual parameter function applied to the application of
the second
Form: h ≡ f ° g
which means h(x) ≡ f(g(x))
For f(x) ≡ x + 2 and g(x) ≡ 3 * x,
h ≡ f ° g yields (3 * x) + 2
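A sketch of composition as a higher-order function, here in Python (the name compose stands in for the ° operator):

```python
def compose(f, g):
    """h = f ° g, i.e. h(x) = f(g(x)): takes two functions, yields a function."""
    return lambda x: f(g(x))

f = lambda x: x + 2
g = lambda x: 3 * x
h = compose(f, g)
print(h(4))   # f(g(4)) = (3 * 4) + 2 = 14
```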
Apply-to-all
 A functional form that takes a single function as a parameter and yields a list of values
obtained by applying the given function to each element of a list of parameters
Form: α
For h(x) ≡ x * x
α(h, (2, 3, 4)) yields (4, 9, 16)
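Apply-to-all is essentially what Python's map (or a list comprehension) does; a minimal sketch, where the name apply_to_all stands in for α:

```python
def apply_to_all(h, values):
    # α(h, values): apply h to each element, collect the results in a list
    return [h(x) for x in values]

square = lambda x: x * x
print(apply_to_all(square, [2, 3, 4]))   # [4, 9, 16]
```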
5.2 Fundamentals of Functional Programming Languages
 The objective of the design of a FPL is to mimic mathematical functions to the greatest
extent possible
 The basic process of computation is fundamentally different in a FPL than in an
imperative language
– In an imperative language, operations are done and the results are stored in variables for
later use
– Management of variables is a constant concern and source of complexity for imperative
programming
 In an FPL, variables are not necessary, as is the case in mathematics
Referential Transparency
 In an FPL, the evaluation of a function always produces the same result given the same
parameters
The First Functional Programming Language : LISP
LISP Data Types and Structures
 Data object types: originally only atoms and lists
 List form: parenthesized collections of sublists and/or atoms
e.g., (A B (C D) E)
 Originally, LISP was a typeless language
 LISP lists are stored internally as single-linked lists
LISP Interpretation
 Lambda notation is used to specify functions and function definitions.
Function applications and data have the same form.
e.g., If the list (A B C) is interpreted as data it is a simple list of three atoms, A, B and C
If it is interpreted as a function application, it means that the function named A is applied
to the two parameters, B and C
 The first LISP interpreter appeared only as a demonstration of the universality of the
computational capabilities of the notation
5.3 Programming With Scheme
- A mid-1970s dialect of LISP, designed to be cleaner, more modern, and simpler version than
the contemporary dialects of LISP
- Uses only static scoping
- Functions are first-class entities
- They can be the values of expressions and elements of lists
- They can be assigned to variables and passed as parameters
Primitive Functions
1. Arithmetic: +, -,*, /, ABS, SQRT
e.g., (+ 5 2) yields 7
2. QUOTE -takes one parameter; returns the parameter without evaluation
- QUOTE is required because the Scheme interpreter, named EVAL, always evaluates
parameters to function applications before applying the function.
QUOTE is used to avoid parameter evaluation when it is not appropriate
- QUOTE can be abbreviated with the apostrophe prefix operator
e.g., '(A B) is equivalent to (QUOTE (A B))
3. CAR takes a list parameter; returns the first element of that list
e.g., (CAR '(A B C)) yields A
(CAR '((A B) C D)) yields (A B)
4. CDR takes a list parameter; returns the list after removing its first element
e.g., (CDR '(A B C)) yields (B C)
(CDR '((A B) C D)) yields (C D)
5. CONS takes two parameters, the first of which can be either an atom or a list and the second
of which is a list; returns a new list that includes the first parameter as its first element and the
second parameter as the remainder of its result
e.g., (CONS 'A '(B C)) returns (A B C)
6. LIST - takes any number of parameters; returns a list with the parameters as elements
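For readers without a Scheme interpreter at hand, the list primitives can be modeled in Python; this is a rough sketch only, since real Scheme lists are single-linked cells rather than arrays:

```python
def car(lst):
    return lst[0]            # first element of the list

def cdr(lst):
    return lst[1:]           # the list after removing its first element

def cons(x, lst):
    return [x] + lst         # new list with x as first element, lst as remainder

print(car(['A', 'B', 'C']))          # 'A'
print(car([['A', 'B'], 'C', 'D']))   # ['A', 'B']
print(cdr(['A', 'B', 'C']))          # ['B', 'C']
print(cons('A', ['B', 'C']))         # ['A', 'B', 'C']
```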
Predicate Functions: (#T and () are true and false)
1. EQ? takes two symbolic parameters; it returns #T if both parameters are atoms and the two
are the same
e.g., (EQ? 'A 'A) yields #T
(EQ? 'A '(A B)) yields ()
Note that if EQ? is called with list parameters, the result is not reliable. Also, EQ? does not
work for numeric atoms
2. LIST? takes one parameter; it returns #T if the parameter is a list; otherwise ()
3. NULL? takes one parameter; it returns #T if the parameter is the empty list; otherwise ()
Note that NULL? returns #T if the parameter is ()
4. Numeric Predicate Functions
=, <>, >, <, >=, <=, EVEN?, ODD?, ZERO?
5. Output Utility Functions:
(DISPLAY expression)
(NEWLINE)
- Lambda Expressions
- Form is based on λ notation
e.g.,
(LAMBDA (L) (CAR (CAR L)))
L is called a bound variable
- Lambda expressions can be applied

e.g.,
((LAMBDA (L) (CAR (CAR L))) '((A B) C D))
- A Function for Constructing Functions
DEFINE - Two forms:
1. To bind a symbol to an expression
e.g.,
(DEFINE pi 3.141593)
(DEFINE two_pi (* 2 pi))
2. To bind names to lambda expressions
e.g.,
(DEFINE (cube x) (* x x x))
- Example use:
(cube 4)
Evaluation process (for normal functions):
1. Parameters are evaluated, in no particular order
2. The values of the parameters are substituted into the function body
3. The function body is evaluated
4. The value of the last expression in the body is the value of the function
(Special forms use a different evaluation process)
Control Flow
- 1. Selection- the special form, IF
(IF predicate then_exp else_exp)
e.g.,
(IF (<> count 0)
(/ sum count)
0
)
- 2. Multiple Selection - the special form, COND
- General form:
(COND
(predicate_1 expr {expr})
(predicate_2 expr {expr})
...
(predicate_n expr {expr})
(ELSE expr {expr})
)
Returns the value of the last expr in the first pair whose predicate evaluates to true
Example Scheme Functions
1. member - takes an atom and a list; returns #T if the atom is in the list; () otherwise
(DEFINE (member atm lis)
(COND
((NULL? lis) '())
((EQ? atm (CAR lis)) #T)
(ELSE (member atm (CDR lis)))
))
- 2. equalsimp - takes two simple lists as parameters; returns #T if the two simple lists
are equal; () otherwise
(DEFINE (equalsimp lis1 lis2)
(COND
((NULL? lis1) (NULL? lis2))
((NULL? lis2) '())
((EQ? (CAR lis1) (CAR lis2))
(equalsimp (CDR lis1) (CDR lis2)))
(ELSE '())
))
3. equal - takes two lists as parameters; returns #T if the two general lists are equal; () otherwise
(DEFINE (equal lis1 lis2)
(COND
((NOT (LIST? lis1)) (EQ? lis1 lis2))
((NOT (LIST? lis2)) '())
((NULL? lis1) (NULL? lis2))
((NULL? lis2) '())
((equal (CAR lis1) (CAR lis2))
(equal (CDR lis1) (CDR lis2)))
(ELSE '())
))
4. append - takes two lists as parameters; returns the first parameter list with the elements of the
second parameter list appended at the end
(DEFINE (append lis1 lis2)
(COND
((NULL? lis1) lis2)
(ELSE (CONS (CAR lis1)
(append (CDR lis1) lis2)))
))
Functional Forms
1. Composition
- The previous examples have used it
2. Apply to All - one form in Scheme is mapcar
- Applies the given function to all elements of the given list; result is a list of the results
(DEFINE (mapcar fun lis)
(COND
((NULL? lis) '())
(ELSE (CONS (fun (CAR lis))
(mapcar fun (CDR lis))))
))
- It is possible in Scheme to define a function that builds Scheme code and requests its
interpretation
- This is possible because the interpreter is a user-available function, EVAL
e.g., suppose we have a list of numbers that must be added together
(DEFINE (adder lis)
(COND
((NULL? lis) 0)
(ELSE (EVAL (CONS '+ lis)))
))
The parameter is a list of numbers to be added; adder inserts a + operator and interprets the
resulting list
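The same build-code-then-evaluate trick works in Python, whose eval plays the role of Scheme's EVAL; here the expression is assembled as text rather than as a list, but the idea — the program constructs code and asks the interpreter to run it — is the same:

```python
def adder(lis):
    # build the textual expression "2 + 3 + 4" and evaluate it,
    # mirroring (EVAL (CONS '+ lis)) in Scheme
    if not lis:
        return 0
    return eval(" + ".join(str(n) for n in lis))

print(adder([2, 3, 4]))   # 9
print(adder([]))          # 0
```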
Scheme includes some imperative features:
1. SET! binds or rebinds a value to a name
2. SET-CAR! replaces the car of a list
3. SET-CDR! replaces the cdr part of a list

5.4 Programming with ML


 A static-scoped functional language with syntax that is closer to Pascal than to LISP
 Uses type declarations, but also does type inferencing to determine the types of
undeclared variables
 It is strongly typed (whereas Scheme is essentially typeless) and has no type coercions
 Includes exception handling and a module facility for implementing abstract data types
 Includes lists and list operations

ML Specifics
 Function declaration form:
fun name (parameters) = body;
e.g., fun cube (x : int) = x * x * x;
– The type could be attached to return value, as in
fun cube (x) : int = x * x * x;
– With no type specified, it would default to
int (the default for numeric values)
– User-defined overloaded functions are not allowed, so if we wanted a cube function for
real parameters, it would need to have a different name
– There are no type coercions in ML
ML selection
if expression then then_expression
else else_expression
where the first expression must evaluate to a Boolean value
 Pattern matching is used to allow a function to operate on different parameter
forms
fun fact(0) = 1
| fact(n : int) : int = n * fact(n - 1)
Lists
Literal lists are specified in brackets
[3, 5, 7]
[] is the empty list
CONS is the binary infix operator, ::
4 :: [3, 5, 7], which evaluates to [4, 3, 5, 7]
CAR is the unary operator hd
CDR is the unary operator tl
fun length([]) = 0
| length(h :: t) = 1 + length(t);
fun append([], lis2) = lis2
| append(h :: t, lis2) = h :: append(t, lis2);
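The same recursive definitions can be transcribed into Python, with the h :: t pattern emulated by sequence unpacking; this is an illustrative sketch, not idiomatic Python:

```python
def length(lst):
    # length([]) = 0 | length(h :: t) = 1 + length(t)
    if not lst:
        return 0
    h, *t = lst
    return 1 + length(t)

def append(lis1, lis2):
    # append([], lis2) = lis2 | append(h :: t, lis2) = h :: append(t, lis2)
    if not lis1:
        return lis2
    h, *t = lis1
    return [h] + append(t, lis2)

print(length([3, 5, 7]))        # 3
print(append([4], [3, 5, 7]))   # [4, 3, 5, 7]
```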
 The val statement binds a name to a value (similar to DEFINE in Scheme)
val distance = time * speed;
– As is the case with DEFINE, val is nothing like an assignment statement in an
imperative language
5.5 Logic Programming Introduction
 Logic programming languages, sometimes called declarative programming languages
 Express programs in a form of symbolic logic
 Use a logical inferencing process to produce results
 Declarative rather than procedural:
– Only the specification of results is stated (not detailed procedures for producing them)
Proposition
 A logical statement that may or may not be true
– Consists of objects and relationships of objects to each other
Symbolic Logic
 Logic which can be used for the basic needs of formal logic:
– Express propositions
– Express relationships between propositions
– Describe how new propositions can be inferred from other propositions
 Particular form of symbolic logic used for logic programming called predicate calculus
Object Representation
 Objects in propositions are represented by simple terms: either constants or variables
 Constant: a symbol that represents an object
 Variable: a symbol that can represent different objects at different times
– Different from variables in imperative languages
Compound Terms
 Atomic propositions consist of compound terms
 Compound term: one element of a mathematical relation, written like a mathematical
function
– Mathematical function is a mapping
– Can be written as a table
Parts of a Compound Term
 Compound term composed of two parts
– Functor: function symbol that names the relationship
– Ordered list of parameters (tuple)
Examples:
student(jon)
like(seth, OSX)
like(nick, windows)
like(jim, linux)
Forms of a Proposition
 Propositions can be stated in two forms:
– Fact: proposition is assumed to be true
– Query: truth of proposition is to be determined
 Compound proposition:
– Have two or more atomic propositions
– Propositions are connected by operators
Logical Operators
Kindly refer table in Written Notes
Quantifiers
Kindly refer table in Written Notes

Clausal Form
 Too many ways to state the same thing
 Use a standard form for propositions
 Clausal form:
– B1 ∨ B2 ∨ … ∨ Bn ⊂ A1 ∧ A2 ∧ … ∧ Am
– means if all the As are true, then at least one B is true
 Antecedent: right side
 Consequent: left side
Predicate Calculus and Proving Theorems
 A use of propositions is to discover new theorems that can be inferred from known
axioms and theorems
 Resolution: an inference principle that allows inferred propositions to be computed from
given propositions
Resolution
 Unification: finding values for variables in propositions that allows matching process to
succeed
 Instantiation: assigning temporary values to variables to allow unification to succeed
 After instantiating a variable with a value, if matching fails, may need to backtrack and
instantiate with a different value
Theorem Proving
 Basis for logic programming
 When propositions used for resolution, only restricted form can be used
 Horn clause - can have only two forms
– Headed: single atomic proposition on left side
– Headless: empty left side (used to state facts)
 Most propositions can be stated as Horn clauses
5.6 Programming with ProLog
 Edinburgh Syntax
 Term: a constant, variable, or structure
 Constant: an atom or an integer
 Atom: symbolic value of Prolog
 Atom consists of either:
– a string of letters, digits, and underscores beginning with a lowercase letter
– a string of printable ASCII characters delimited by apostrophes
Terms: Variables and Structures
 Variable: any string of letters, digits, and underscores beginning with an uppercase letter
 Instantiation: binding of a variable to a value
– Lasts only as long as it takes to satisfy one complete goal
 Structure: represents atomic proposition
functor(parameter list)
Fact Statements
 Used for the hypotheses
 Headless Horn clauses
female(shelley).
male(bill).
father(bill, jake).
Rule Statements
 Used for the hypotheses
 Headed Horn clause
 Right side: antecedent (if part)
– May be single term or conjunction
 Left side: consequent (then part)
– Must be single term
 Conjunction: multiple terms separated by logical AND operations (implied)
Example Rules
ancestor(mary,shelley):- mother(mary,shelley).
Can use variables (universal objects) to generalize meaning:
parent(X,Y):- mother(X,Y).
parent(X,Y):- father(X,Y).
grandparent(X,Z):- parent(X,Y), parent(Y,Z).
sibling(X,Y):- mother(M,X), mother(M,Y),
father(F,X), father(F,Y).
Goal Statements
 For theorem proving, theorem is in form of proposition that we want system to prove or
disprove – goal statement
 Same format as headless Horn
man(fred)
 Conjunctive propositions and propositions with variables also legal goals
father(X,mike)
Inferencing Process of Prolog
 Queries are called goals
 If a goal is a compound proposition, each of the facts is a subgoal
 To prove a goal is true, must find a chain of inference rules and/or facts.
For goal Q:
B :- A
C :- B
…
Q :- P
 Process of proving a subgoal called matching, satisfying, or resolution
Approaches
 Bottom-up resolution, forward chaining
– Begin with facts and rules of database and attempt to find sequence that
leads to goal
– Works well with a large set of possibly correct answers
 Top-down resolution, backward chaining
– Begin with goal and attempt to find sequence that leads to set of facts in
database
– Works well with a small set of possibly correct answers
 Prolog implementations use backward chaining
Subgoal Strategies
 When goal has more than one subgoal, can use either
– Depth-first search: find a complete proof for the first subgoal before working on others
– Breadth-first search: work on all subgoals in parallel
 Prolog uses depth-first search
– Can be done with fewer computer resources
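Top-down, depth-first resolution over propositional Horn clauses (no variables, so no unification or backtracking over bindings) fits in a few lines of Python; this is an illustrative toy, not how a real Prolog engine is built:

```python
# Rules map a head to a list of alternative bodies; each body is a list of
# subgoals. A fact is a rule with an empty body.
rules = {
    "q": [["p"]],          # q :- p.
    "p": [["a", "b"]],     # p :- a, b.
    "a": [[]],             # a.   (fact)
    "b": [[]],             # b.   (fact)
    "r": [["missing"]],    # r :- missing.   (cannot be proved)
}

def prove(goal):
    # try each rule for the goal in order; depth-first, prove every
    # subgoal of a body before moving on (Prolog's strategy)
    for body in rules.get(goal, []):
        if all(prove(sub) for sub in body):
            return True
    return False           # no rule's body could be satisfied: the goal fails

print(prove("q"))   # True: q needs p, p needs facts a and b
print(prove("r"))   # False: subgoal "missing" matches no rule
```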
Backtracking
 With a goal with multiple subgoals, if fail to show truth of one of subgoals, reconsider
previous subgoal to find an alternative solution: backtracking
 Begin search where previous search left off
 Can take lots of time and space because may find all possible proofs to every subgoal
Simple Arithmetic
 Prolog supports integer variables and integer arithmetic
 is operator: takes an arithmetic expression as right operand and variable as left operand
A is B / 17 + C
 Not the same as an assignment statement!
Example
speed(ford,100).
speed(chevy,105).
speed(dodge,95).
speed(volvo,80).
time(ford,20).
time(chevy,21).
time(dodge,24).
time(volvo,24).
distance(X,Y) :- speed(X,Speed),
time(X,Time),
Y is Speed * Time.
Trace
• Built-in structure that displays instantiations at each step
• Tracing model of execution - four events:
– Call (beginning of attempt to satisfy goal)
– Exit (when a goal has been satisfied)
– Redo (when backtrack occurs)
– Fail (when goal fails)
Example
likes(jake,chocolate).
likes(jake,apricots).
likes(darcie,licorice).
likes(darcie,apricots).
trace.
likes(jake,X),
likes(darcie,X).

List Structures
 Other basic data structure (besides atomic propositions we have already seen): list
 List is a sequence of any number of elements
 Elements can be atoms, atomic propositions, or other terms (including other lists)
[apple, prune, grape, kumquat]
[] (empty list)
[X | Y] (head X and tail Y)
Append Example
append([], List, List).
append([Head | List_1], List_2, [Head | List_3]) :-
append (List_1, List_2, List_3).
Reverse Example
reverse([], []).
reverse([Head | Tail], List) :-
reverse (Tail, Result),
append (Result, [Head], List).
Deficiencies of Prolog
 Resolution order control
 The closed-world assumption
 The negation problem
 Intrinsic limitations
