
Madan Mohan Malaviya University of Technology

Gorakhpur

Software Engineering
(Question Bank)
CS (VIth Sem)

UNIT- III
Topics Covered:

Software Testing: Testing Objectives, Unit Testing, Integration Testing,
Acceptance Testing, Regression Testing, Testing for Functionality and
Testing for Performance, Top-Down and Bottom-Up Testing Strategies:
Test Drivers and Test Stubs, Structural Testing (White Box Testing),
Functional Testing (Black Box Testing), Test Data Suite Preparation, Alpha
and Beta Testing of Products. Static Testing Strategies: Formal Technical
Reviews (Peer Reviews), Walk Through, Code Inspection, Compliance
with Design and Coding Standards.

1) Write short notes on the following:


a) Software Quality Assurance
b) Cyclomatic complexity measures
c) IEEE standards for SRS
Answer:
a) Software Quality Assurance
Software quality assurance is an umbrella activity that is applied throughout the software process.
SQA encompasses:
(1) a quality management approach
(2) effective software engineering technology
(3) formal technical reviews
(4) a multi-tiered testing strategy
(5) document change control
(6) software development standard and its control procedure
(7) measurement and reporting mechanism
Quality refers to the measurable characteristics of a software item. These characteristics can be compared
against a given standard.
Two types of quality control:
• Quality of design - the characteristics that designers specify for an item. This
includes requirements, specifications, and the design of the system.
• Quality of conformance - the degree to which the design specifications are followed. It focuses on
implementation based on the design.

Goal: to achieve high-quality software product


Quality definition:
“Conformance to explicitly stated functional and performance
requirements, explicitly documented development standards,
and implicit characteristics that are expected of all professionally
developed software.”
Three important points for quality measurement:
- Use requirements as the foundation
- Use specified standards as the criteria
- Consider implicit requirements

b) Cyclomatic complexity measures

Cyclomatic complexity is a software metric used to indicate the complexity of a program. It is a
quantitative measure of the logical complexity of a program: it directly measures the number of linearly independent
paths through a program's source code. Cyclomatic complexity is computed using the control flow graph of the
program: the nodes of the graph correspond to indivisible groups of commands of a program, and
a directed edge connects two nodes if the second command might be executed immediately after the first
command. Cyclomatic complexity may also be applied to
individual functions, modules, methods or classes within a program.
There are three different ways to compute the cyclomatic complexity.
Method 1:
Given a control flow graph G of a program, the cyclomatic complexity V(G) can be computed as:
V(G) = E – N + 2
where N is the number of nodes of the control flow graph and E is the number of edges in the control flow
graph.
Method 2:
An alternative way of computing the cyclomatic complexity of a program from an inspection of its control flow
graph is as follows:
V(G) = Total number of bounded areas + 1
In the program’s control flow graph G, any region enclosed by nodes and edges is called a
bounded area. This is an easy way to determine McCabe’s cyclomatic complexity. But this method fails if the graph G
is not planar, i.e. if, however you draw the graph, two of its edges must cross.
Method 3:
The cyclomatic complexity of a program can also be easily computed by counting the number of decision
statements in the program. If N is the number of decision statements of a program, then McCabe’s metric is
equal to N + 1.
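Method 1 is easy to automate. The following is a minimal Python sketch (not part of the original answer) that computes V(G) = E - N + 2 from an edge list representing a control flow graph; the example graph, a single if-else construct, is hypothetical.

def cyclomatic_complexity(edges):
    # V(G) = E - N + 2, where E = number of edges, N = number of nodes
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Hypothetical CFG of a single if-else: entry -> cond -> (then | else) -> exit
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges))  # 5 - 5 + 2 = 2 (one decision, so N + 1 = 2)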

c) IEEE standards for SRS


Answer:
IEEE standard 830 offers a useful tutorial for creating a software requirements specification (SRS).
It prescribes the following template for an SRS:
1. Introduction
1.1 Purpose
1.2 Document Conventions
1.3 Intended Audience and Reading Suggestions
1.4 Product Scope
1.5 References
2. Overall Description
2.1 Product Perspective
2.2 Product Functions
2.3 User Classes and Characteristics
2.4 Operating Environment
2.5 Design and Implementation Constraints
2.6 User Documentation
2.7 Assumptions and Dependencies
3. External Interface Requirements
3.1 User Interfaces
3.2 Hardware Interfaces
3.3 Software Interfaces
3.4 Communications Interfaces
4. System Features
4.1 System Feature 1
4.2 System Feature 2 (and so on)
5. Other Nonfunctional Requirements
5.1 Performance Requirements
5.2 Safety Requirements
5.3 Security Requirements
5.4 Software Quality Attributes
5.5 Business Rules
6. Other Requirements

2) What is a Formal Technical Review? Explain the importance of FTR in software development.

OR
Write short notes on Formal Technical review.
Answer:
A formal technical review (FTR) is a software quality assurance activity performed by software engineers
(and others). The objectives of the FTR are
(1) to uncover errors in function, logic, or implementation for any representation of the software;
(2) to verify that the software under review meets its requirements;
(3) to ensure that the software has been represented according to predefined standards;
(4) to achieve software that is developed in a uniform manner; and
(5) to make projects more manageable.

In addition, the FTR serves as a training ground, enabling junior engineers to observe different approaches to
software analysis, design, and implementation. The FTR also serves to promote backup and continuity because
a number of people become familiar with parts of the software that they may not have otherwise seen.

The FTR is actually a class of reviews that includes walkthroughs, inspections, round-robin reviews and other
small group technical assessments of software. Each FTR is conducted as a meeting and will be successful only
if it is properly planned, controlled, and attended. In the sections that follow, guidelines similar to those for a
walkthrough are presented as a representative formal technical review.

Steps in FTR
1. The review meeting

• Every review meeting should be conducted under the following constraints:

1. Involvement of people
Between three and five people should be involved in the review.
2. Advance preparation
Advance preparation should occur, but it should be brief: at most two hours of work per person.
3. Short duration
The duration of the review meeting should be less than two hours.

• Rather than attempting to review the entire design, walkthroughs are conducted for individual modules or for
small groups of modules.
• The focus of the FTR is on a work product (a software component to be reviewed). The review meeting is
attended by the review leader, all reviewers and the producer.
• The review leader is responsible for evaluating the product for readiness. Copies of the product material are
then distributed to the reviewers. The producer then “walks through” the product, explaining the material,
while the reviewers raise issues based on their advance preparation.
• One of the reviewers becomes the recorder, who notes all the important issues raised during the review,
including each error that is discovered.
• At the end of the review, the attendees decide whether to accept the product or not, with or without
modification.
2. Review reporting and record keeping

• During the FTR, the recorder actively records all the issues that have been raised.
• At the end of the meeting, all the raised issues are consolidated and a review issue list is prepared.
• Finally, a formal technical review summary report is produced.
3. Review guidelines

• Guidelines for conducting a formal technical review must be established in advance, distributed to all
reviewers, agreed upon, and then followed.
• For example, guidelines for a review may include the following:

1. Concentrate on the work product only. That means review the product, not the producer.
2. Set an agenda for the review and maintain it.
3. When certain issues are raised, limit debate and rebuttal; a review should not end in hard feelings.
4. Find problem areas, but don’t attempt to solve every problem noted.
5. Take written notes (for record-keeping purposes).
6. Limit the number of participants and insist upon advance preparation.
7. Develop a checklist for each product that is likely to be reviewed.
8. Allocate resources and time for FTRs in order to keep to the schedule.
9. Conduct meaningful training for all reviewers in order to make reviews effective.
10. Review earlier reviews, which serve as the basis for the current review being conducted.
3) Consider a program that inputs two integers having values in the range (10, 250) and classifies them as
even or odd. For this program, generate test cases using boundary value analysis.

There are 9 test cases using boundary value analysis:


(10,125)
(11,125)
(250,125)
(249,125)
(125,249)
(125,250)
(125,10)
(125,11)
(125,125)
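These cases follow the standard single-fault boundary value analysis pattern: hold one variable at a nominal value (here 125) while the other takes the minimum, just above minimum, just below maximum, and maximum values, giving 4n + 1 cases for n variables. A minimal Python sketch that generates them (the function name is illustrative):

def bva_test_cases(lo, hi, nominal):
    # Generate the 4n + 1 boundary value analysis cases for two inputs in [lo, hi].
    boundaries = [lo, lo + 1, hi - 1, hi]
    cases = [(b, nominal) for b in boundaries]    # vary the first input
    cases += [(nominal, b) for b in boundaries]   # vary the second input
    cases.append((nominal, nominal))              # both inputs nominal
    return cases

for a, b in bva_test_cases(10, 250, 125):
    print((a, b), "even" if a % 2 == 0 else "odd", "even" if b % 2 == 0 else "odd")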

4) What is the difference between reengineering and reverse engineering? Explain the different steps of
reengineering.

To reverse engineer a product is to examine and probe it in order to reconstruct a plan from which it
could be built, and to understand the way it works. For instance, if I took my clock apart, measured all the gears, and
developed a plan for a clock, understanding how the gears meshed together, this would be reverse engineering.
Reverse engineering is often used by companies to copy and understand parts of a competitor's product (which is
illegal), or to find out how their own products work in the event that the original plans were lost, in order to
repair or alter them. Reverse engineering products is illegal under the laws of many countries; however, it does
happen. There have been celebrated cases of reverse engineering in the third world.
Re-engineering is the adjustment, alteration, or partial replacement of a product in order to change its
function, adapting it to meet a new need.
For instance, welding a dozer blade onto the frame of my Ford Fiesta car is an example of re-engineering,
in order to clear snow, or drive through my neighbor's kitchen.
Re-engineering is often used by companies to adapt generic products for a specific environment (e.g. add
suspension for a rally car, change the shape of a conveyor belt to fit a factory layout, alter the frequencies of a radio
transmitter to fit a new country's laws).
There are following 5 steps in Re-engineering-
1. Determining the Need for Change and Setting the Vision
The first, and perhaps most important, step is to get very clear on why the company needs to reengineer and
where you need to be in the future. Getting people to accept the idea that their work lives will undergo radical
change is no easy task. It requires a selling job that begins when you recognize that reengineering is essential to
the future success of the company and doesn’t wind down until your redesigned processes are in place.
2. Putting Together the Reengineering Team
Companies don’t reengineer: people do. The people you choose to lead your reengineering effort will ultimately
determine its success or failure. Hammer and Champy have identified five roles that, either distinctly or in
various combinations, are critical to implementing the reengineering process.
• The Leader: A senior executive who authorizes and motivates the overall reengineering effort.
• The Process Owner: A manager with responsibility for a specific process and the reengineering effort
focused on it.
• The Reengineering Team: A group of individuals dedicated to the reengineering of a particular process
who diagnose the existing process and oversee its redesign and implementation.
• The Steering Committee: A policy-making body of senior managers who develop the organization’s
overall reengineering strategy and monitor its progress.
• The Reengineering Czar: An individual responsible for developing reengineering techniques and tools within
the organization and for achieving synergy across its separate reengineering projects.
3. Identifying the Processes to Be Reengineered
You now have the vision, the compelling argument and the right people in place. Next comes the burning
question: what is going to get reengineered? No company can reengineer all its high-level processes at once. So
it’s important to choose the right process, or processes, to begin with.
4. Understanding the Process
Before you can successfully reengineer a process, you must thoroughly understand it. You need to know what it
does, how well or poorly it performs, and the critical issues that govern its performance. But the goal here is not
to analyze the process in intimate detail. Rather, you are looking for a high-level view that will provide team
members with the intuition and insight to create a totally new and superior design.
5. Redesigning the Process and Integrating the New Information Technology.
A downside to redesigning a work process is that it isn’t algorithmic or routine. There are no set procedures that
will mechanically produce a radical new design process. On the other hand, you don’t have to start with an
entirely blank slate.

5) Does fault necessarily lead to failures ? Justify your answer with an example.

No, a fault need not always lead to a failure. We can make a system fault tolerant so that it keeps working even in
the presence of some faults and does not fail.
Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of
(or one or more faults within) some of its components. If its operating quality decreases at all, the decrease is
proportional to the severity of the failure, as compared to a naively designed system in which even a small
failure can cause total breakdown. Fault tolerance is particularly sought after in high-availability or life-critical
systems. The ability of maintaining functionality when portions of a system break down is referred to
as graceful degradation.
Example:- Hardware fault tolerance sometimes requires that broken parts be taken out and replaced with new
parts while the system is still operational (in computing known as hot swapping). Such a system implemented
with a single backup is known as single point tolerant, and represents the vast majority of fault-tolerant
systems. In such systems the mean time between failures should be long enough for the operators to have time
to fix the broken devices (mean time to repair) before the backup also fails. It helps if the time between failures
is as long as possible, but this is not specifically required in a fault-tolerant system.
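A minimal software sketch of single-point tolerance, assuming hypothetical primary_service and backup_service functions: the fault in the primary is masked by the backup, so it never becomes a user-visible failure.

def primary_service(x):
    raise RuntimeError("primary has a fault")  # simulated fault

def backup_service(x):
    return x * 2  # hypothetical correct computation

def fault_tolerant_call(x):
    # Try the primary; on a fault, degrade gracefully to the single backup.
    try:
        return primary_service(x)
    except RuntimeError:
        return backup_service(x)

print(fault_tolerant_call(21))  # 42: the fault did not lead to a failure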

6) What are the attributes of a good software test ? Why is regression testing an important part of any
integration testing procedure?
Here is a list of qualities of a good software tester:
• Has “test to break” attitude
• An ability to take the point of view of the customer
• A strong desire for quality
• Should be tactful and diplomatic, and should be able to maintain a cooperative relationship with
developers – people skills.
• An ability to communicate with both technical and non-technical people is useful.
An understanding of the software development process is helpful. Without knowing development, testers will not be
able to understand the kinds of bugs that come into an application and the likeliest places to find them. A tester
who doesn't know programming will always be restricted to the use of ad-hoc techniques and the most
simplistic tools.
• There's never enough time to test "completely", so all software testing is a compromise between
available resources and thoroughness. The tester must optimize scarce resources and that means
focusing on where the bugs are likely to be. If you don't know programming, you're unlikely to have
useful intuition about where to look.
• A tester must have deep insight into how users will use the program, i.e. domain knowledge.
Regression means retesting the unchanged parts of the application. Test cases are re-executed in order to
check whether previous functionality of application is working fine and new changes have not introduced any
new bugs. This test can be performed on a new build when there is significant change in original functionality
or even a single bug fix.
This is a method of verification: verifying that the bugs are fixed and that the newly added features have not
created any problem in the previously working version of the software.
Regression testing is an important part of any integration testing procedure because testers perform
functional testing when a new build is available for verification. The intent of this test is to verify the changes
made in the existing functionality and the newly added functionality. When this test is done, the tester should verify
that the existing functionality is working as expected and that new changes have not introduced any defect in
functionality that was working before the change. Regression testing should be part of the release cycle and must
be considered in test estimation. Regression testing is usually performed after verification of changes or new
functionality, but this is not always the case. For a release taking months to complete, regression tests must be
incorporated into the daily test cycle. For weekly releases, regression tests can be performed when functional
testing is over for the changes.
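As a minimal illustration (the discount function and its values are hypothetical), a regression suite is simply a set of previously passing checks that is re-executed on every new build:

def discount(amount):
    # Hypothetical business rule: 10% off orders of 100 or more.
    return amount * 0.9 if amount >= 100 else amount

def run_regression_suite():
    # Re-executed after every change, even an unrelated bug fix,
    # to confirm that previously working behaviour still holds.
    assert discount(50) == 50
    assert discount(100) == 90.0   # boundary case must still hold
    assert discount(200) == 180.0
    print("regression suite passed")

run_regression_suite()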

7) Write short notes


i) What is the difference between coding standards & coding guidelines?

ii) Verification and validation

i) Coding standard: A rule about how you are to write your code so that it will be consistent, easily understood
and robust, and so that it will be acceptable to the entity that is paying you to write the code.
The following are some representative coding standards:
Rules for limiting the use of globals: These rules list what types of data can be declared global and what
cannot.
Contents of the headers preceding codes for different modules: The information contained in the headers of
different modules should be standard for an organization. The exact format in which the header information is
organized in the header can also be specified. The following are some standard header data:
• Name of the module.
• Date on which the module was created.
• Author’s name.
• Modification history.
• Synopsis of the module.
• Different functions supported, along with their input/output parameters.
• Global variables accessed/modified by the module.

Naming conventions for global variables, local variables, and constant


identifiers: A possible naming convention can be that global variable names always start with a capital letter,
local variable names are made of small letters, and constant names are always in capital letters.
Error return conventions and exception handling mechanisms: The way error conditions reported by
different functions in a program are handled should be standard within an organization. For example, different
functions encountering an error condition should all consistently return either a 0 or a 1.
Coding convention(guideline):
A rule about how you are to write your code so that it will be consistent, easily understood and robust.
Following are some representative coding guidelines recommended by many software development
organizations :-
Do not use a coding style that is too clever or too difficult to understand: Code should be easy to
understand. Many inexperienced engineers actually take pride in writing cryptic and incomprehensible code.
Clever coding can obscure meaning of the code and hamper understanding. It also makes maintenance difficult.
Avoid obscure side effects: The side effects of a function call include modification of parameters passed by
reference, modification of global variables and I/O operations. An obscure side effect is one that is not obvious
from a casual examination of the code.
Do not use an identifier for multiple purposes: Programmers often use the same identifier to denote several
temporary entities. For example, some programmers use a temporary loop variable both for computing and for
storing the final result, as shown in the sketch below.
The code should be well-documented: As a rule of thumb, there should be at least one comment line on
average for every three source lines.
The length of any function should not exceed 10 source lines: A function that is very lengthy is usually very
difficult to understand, as it probably carries out many different functions. For the same reason, lengthy
functions are likely to have a disproportionately larger number of bugs.
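As a brief illustration of the “do not use an identifier for multiple purposes” guideline, a sketch contrasting the two styles (function names are illustrative):

# Poor style: 'n' is used both as an accumulator and to hold the final result.
def average_obscure(values):
    n = 0
    for v in values:
        n = n + v          # 'n' first accumulates the sum...
    n = n / len(values)    # ...then is reused to store the average
    return n

# Better style: each identifier denotes exactly one entity.
def average(values):
    total = 0
    for value in values:
        total += value
    return total / len(values)

print(average([2, 4, 6]))  # 4.0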

ii) Verification and validation


• Verification is the process of evaluating the intermediary work products of a software development
lifecycle to check whether we are on the right track to creating the final product.
• It evaluates the intermediary products to check whether they meet the specific requirements of the particular
phase.

• Validation is the process of evaluating the final product to check whether the software meets the
business needs. In simple words, the test execution which we do in our day-to-day work is actually the validation
activity, which includes smoke testing, functional testing, regression testing, system testing etc.
• It evaluates the final product to check whether it meets the business needs.

8) Explain the decision table. Discuss the difference between decision table and decision tree?

Decision Table
A decision table is a good way to deal with different combinations of inputs and their associated outputs; it is
also called a cause-effect table. The reason it is called a cause-effect table is that an associated logical
diagramming technique called cause-effect graphing is basically used to derive the decision table.

Decision table testing is a black box test design technique used to determine the test scenarios for complex
business logic. We can apply Equivalence Partitioning and Boundary Value Analysis techniques only to
specific conditions or inputs. However, if we have dissimilar inputs that result in different actions being taken,
or we have a business rule to test in which different combinations of inputs result in different actions, we use a
decision table to test these kinds of rules or logic.

Example: Suppose a company has different discount schemes with some conditions, which can be given by the
following decision table:
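The original table is not reproduced in this document. The following is a hypothetical reconstruction of such a discount scheme, showing how a decision table pairs condition (cause) rows with action (effect) rows, one rule per column:

                              Rule 1   Rule 2   Rule 3   Rule 4
Conditions
Order value >= Rs. 10,000       N        N        Y        Y
Regular customer                N        Y        N        Y
Actions
No discount                     X
5% discount                              X        X
10% discount                                               X

One test case is then derived from each rule (column).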

Decision tree
A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible
consequences, including chance event outcomes, resource costs, and utility. It is one way to display
an algorithm.
Decision trees are commonly used in operations research, specifically in decision analysis, to help identify the
strategy most likely to reach a goal, but they are also a popular tool in machine learning.
The key difference is one of representation: a decision table lists every combination of conditions as a column
(one rule per column), which suits testing combinations of inputs, whereas a decision tree represents conditions
as branching paths from a root, which suits sequential decisions.

9) What are Stubs and Drivers?

The concept of Stubs and Drivers are mostly used in the case of component testing. Component testing may be
done in isolation with the rest of the system depending upon the context of the development cycle.

Stubs and drivers are used to replace the missing software and simulate the interface between the software
components in a simple manner.

Suppose you have a function (Function A) that calculates the total marks obtained by a student in a particular
academic year, and this function derives its values from another function (Function B) which calculates the
marks obtained in a particular subject.
You have finished working on Function A and want to test it. But the problem you face here is that you can't
run Function A without input from Function B; Function B is still under development. In this case,
you create a dummy function to act in place of Function B to test your function. This dummy function, which
gets called by the function under test, is called a stub.
To understand what a driver is, suppose you have finished Function B and are waiting for Function A
to be developed. In this case you create a dummy to call Function B. This dummy is called the driver.
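A minimal Python sketch of this scenario (all names are hypothetical): function_a is tested using a stub in place of the unfinished Function B, and later the finished function_b is exercised by a driver before Function A exists.

def function_b_stub(subject):
    # Stub: dummy stand-in for the unfinished marks-per-subject function.
    return 75  # hard-coded plausible value

def function_a(subjects, get_marks=function_b_stub):
    # Function A: total marks for the academic year; calls Function B per subject.
    return sum(get_marks(s) for s in subjects)

# Testing Function A in isolation, with the stub called in place of Function B:
assert function_a(["maths", "physics"]) == 150

def function_b(subject):
    # The real Function B, completed later (hypothetical marks table).
    return {"maths": 80, "physics": 70}[subject]

def driver():
    # Driver: dummy caller that exercises Function B before Function A is ready.
    for subject in ("maths", "physics"):
        print(subject, function_b(subject))

driver()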

10) Which approach is better for testing Bottom -up or Top –down?

The top-down approach is generally considered better for testing, because bottom-up has one serious weakness:
we need to use a lot of intuition to decide exactly what functionality a module should provide.
Top-down testing: In this approach testing is conducted from the main module to the sub-modules. If a sub-module
is not yet developed, a temporary program called a STUB is used to simulate the sub-module.
Bottom-up testing: In this approach testing is conducted from the sub-modules to the main module. If the main
module is not yet developed, a temporary program called a DRIVER is used to simulate the main module.

11) Design a black-box test suite for a program that computes the intersection point of two straight lines
and displays the result as “Parallel lines”/ “Intersecting lines”/ “Coincident lines”. It reads two integer
pairs (m1, c1) and (m2, c2) defining the two straight lines of the form y=mx + c. The lines are Parallel if
m1=m2, c1≠c2; Intersecting if m1≠m2; and Coincident if m1=m2, c1=c2.

The equivalence classes are the following:


• Parallel lines (m1=m2, c1≠c2)
• Intersecting lines (m1≠m2)
• Coincident lines (m1=m2, c1=c2)
Now, selecting one representative pair of lines from each equivalence class, the following test suite is obtained:
• Parallel: (2, 2) and (2, 5)
• Intersecting: (5, 5) and (7, 7)
• Coincident: (10, 10) and (10, 10)
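A minimal sketch of the program under test together with the three test cases (the classify function is an assumed implementation for illustration):

def classify(m1, c1, m2, c2):
    # Lines y = m1*x + c1 and y = m2*x + c2
    if m1 != m2:
        return "Intersecting lines"
    return "Coincident lines" if c1 == c2 else "Parallel lines"

assert classify(2, 2, 2, 5) == "Parallel lines"        # m1 = m2, c1 != c2
assert classify(5, 5, 7, 7) == "Intersecting lines"    # m1 != m2
assert classify(10, 10, 10, 10) == "Coincident lines"  # m1 = m2, c1 = c2
print("black-box suite passed")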

12) Discuss walkthrough and inspection as software review techniques.


Inspection: Inspection is used to verify the compliance of the product with specified standards and requirements.
It is done by examining and comparing the product with the designs, code, artefacts and any other documentation
available. It needs proper planning, and overviews are done on the planning to ensure that inspections are held
properly. A lot of preparation is needed; meetings are held to conduct the inspections, and then, on the basis of
the feedback of the inspection, rework is done.
Inspection is a worthwhile method for an organization that is concerned about the quality of its product. The
process is carried out by the quality control department. Inspection is a disciplined practice
for correcting defects in software artifacts.

Walkthroughs: The author presents their developed artifact to an audience of peers. Peers question and comment
on the artifact to identify as many defects as possible. A walkthrough involves no prior preparation by the
audience, and usually involves minimal documentation of either the process or any arising issues. Defect tracking
in walkthroughs is inconsistent. A walkthrough is thus an informal evaluation meeting which does not require
preparation.

13) Why are the three different levels of testing, unit testing, integration testing and system testing, necessary?
Discuss the main purpose of each of these levels.

Software testing is the process of executing a program or application with the intent of finding software bugs.
It can also be stated as the process of validating and verifying that a software program, application or product
meets the business and technical requirements that guided its design and development. The main levels of
testing are as follows:
Unit Testing:-A level of the software testing process where individual units/components of a software/system
are tested. The purpose is to validate that each unit of the software performs as designed.
Integration testing:- A level of the software testing process where individual units are combined and tested as
a group. The purpose of this level of testing is to expose faults in the interaction between integrated units.
System testing:-A level of the software testing process where a complete, integrated system/software is tested.
The purpose of this test is to evaluate the system’s compliance with the specified requirements.
Acceptance testing:- A level of the software testing process where a system is tested for acceptability. The
purpose of this test is to evaluate the system’s compliance with the business requirements and assess whether it
is acceptable for delivery.
14) What categories of errors are traceable using black-box testing ? Explain the black-box testing in
detail.
Black box testing attempts to find errors in the following categories: incorrect or missing functions; interface
errors; errors in data structures or external database access; behavior or performance errors; and initialization
and termination errors.
Black box testing treats the system as a “black box”, so it doesn’t explicitly use knowledge of the internal
structure or code. In other words, the test engineer need not know the internal workings of the “black box” or
application.

The main focus in black box testing is on the functionality of the system as a whole. The term ‘behavioral testing’
is also used for black box testing, and white box testing is also sometimes called ‘structural testing’. Behavioral
test design is slightly different from black-box test design because the use of internal knowledge isn’t strictly
forbidden, but it’s still discouraged.

Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using
only black box or only white box techniques. The majority of applications are tested by the black box testing
method. We need to cover the majority of test cases so that most of the bugs are discovered by black box testing.
Black box testing occurs throughout the software development and testing life cycle, i.e. in the unit, integration,
system, acceptance and regression testing stages.

Tools used for Black Box testing:


Black box testing tools are mainly record-and-playback tools. These tools are used for regression testing, to
check whether a new build has created any bugs in previously working application functionality. These record-and-
playback tools record test cases in the form of scripts such as TSL, VBScript, JavaScript or Perl.
Advantages of Black Box Testing
– Tester can be non-technical.
– Used to verify contradictions in actual system and the specifications.
– Test cases can be designed as soon as the functional specifications are complete
Disadvantages of Black Box Testing
– The test inputs need to be drawn from a large sample space.
– It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
– There are chances of having unidentified paths during this testing.

15)Describe the white-box testing in detail. Discuss the cyclomatic complexity with suitable example.

White-box testing (also known as clear box testing, glass box testing, transparent box testing, and
structural testing) is a method of testing software that tests internal structures or workings of an application, as
opposed to its functionality (i.e. black-box testing). In white-box testing an internal perspective of the system, as
well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the
code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing
(ICT). White-box testing can be applied at the unit, integration and system levels of the software testing
process. Although traditional testers tended to think of white-box testing as being done at the unit level, it is
used for integration and system testing more frequently today. It can test paths within a unit, paths between
units during integration, and between subsystems during a system–level test. Though this method of test design
can uncover many errors or problems, it has the potential to miss unimplemented parts of the specification or
missing requirements.
White-box test design techniques include the following code coverage criteria:
• Control flow testing
• Data flow testing
• Branch testing
• Statement coverage
• Decision coverage
• Modified condition/decision coverage
• Prime path testing
• Path testing
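As a brief sketch of the difference between two of these criteria, statement coverage and branch coverage (the function is hypothetical):

def absolute(x):
    result = x
    if x < 0:
        result = -x
    return result

# Statement coverage: the single input -5 executes every statement.
absolute(-5)

# Branch coverage additionally requires the 'if' to evaluate False,
# so a second input is needed to exercise the fall-through branch.
absolute(3)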
White-box testing is one of the two biggest testing methodologies used today. It has several major advantages:
• Knowledge of the source code is beneficial to thorough testing.
• It enables optimization of code by revealing hidden errors and making it possible to remove these defects.
• It gives the programmer introspection, because developers carefully describe any new implementation.
• It provides traceability of tests from the source, allowing future changes to the software to be easily
captured in changes to the tests.
• White box tests are easy to automate.
• White box testing gives clear, engineering-based rules for when to stop testing.
Cyclomatic complexity is a source code complexity measurement that correlates with the number of coding
errors. It is calculated by developing a control flow graph of the code and measuring the number of linearly
independent paths through a program module.

Example (the fragment below is a runnable Python version of the original pseudocode, with the missing
increment of j added):

i = 0
n = 4  # n is the number of elements in the list A
while i < n - 1:
    j = i + 1
    while j < n:
        if A[i] < A[j]:
            A[i], A[j] = A[j], A[i]  # swap
        j = j + 1
    i = i + 1

With E = 9 edges and N = 7 nodes in the control flow graph:
V(G) = 9 - 7 + 2 = 4
Equivalently, the fragment contains three decision statements (two while loops and one if), so V(G) = 3 + 1 = 4.

16) Write short notes on debugging approaches.


Debugging is the process of finding and resolving defects that prevent the correct operation of computer software
or a system.
1) Brute Force Method: This method is the most common and least efficient way of isolating the cause of a software
error. We apply this method when all else fails. In this method, a printout of all registers and relevant memory
locations is obtained and studied. All dumps should be well documented and retained for possible use on
subsequent problems.
2) Backtracking Method: This is a quite popular approach to debugging which is used effectively in the case of
small applications. The process starts from the site where a particular symptom is detected; from there,
backward tracing is done across the source code until we are able to lay our hands on the site of the
cause. Unfortunately, as the number of source lines increases, the number of potential backward paths may
become unmanageably large.
3) Cause Elimination: The third approach to debugging, cause elimination, is manifested by induction or
deduction and introduces the concept of binary partitioning. This approach is also called induction and
deduction.
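A minimal sketch of binary partitioning as used in cause elimination: repeatedly halve the suspect input, keeping the half that still reproduces the failure, until the triggering element is isolated (the failure condition here is hypothetical, and the sketch assumes the failure is reproducible from one half alone):

def fails(data):
    # Hypothetical failure: the program misbehaves when a negative value is present.
    return any(v < 0 for v in data)

def isolate_cause(data):
    # Keep halving the input, retaining whichever half still fails.
    while len(data) > 1:
        mid = len(data) // 2
        left, right = data[:mid], data[mid:]
        data = left if fails(left) else right
    return data[0]

print(isolate_cause([4, 7, 1, -3, 9, 2]))  # -3, the fault-triggering input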

UNIT- IV

Topics Covered:

Software Maintenance and Software Project Management


Software as an Evolutionary Entity, Need for Maintenance, Categories of
Maintenance: Preventive, Corrective and Perfective Maintenance, Cost of
Maintenance, Software Re-Engineering, Reverse Engineering. Software
Configuration Management Activities, Change Control Process, Software
Version Control, An Overview of CASE Tools. Estimation of Various
Parameters such as Cost, Efforts, Schedule/Duration, Constructive Cost
Models (COCOMO), Resource Allocation Models, Software Risk Analysis
and Management.

1) What are CASE tools? Explain in brief.


Answer:
CASE (Computer aided software engineering):
• CASE tool is a generic term used to denote any form of automated support for software engineering
• CASE tool can mean any tool used to automate some activity associated with software
development
• Some CASE tools are used for phase-related tasks such as structured analysis, design, coding, testing,
etc.
• Others are related to non-phase activities such as project management and configuration
management.
• Primary objectives of deploying case tools are:-
– To increase productivity
– To produce better quality software at low cost
Benefits of CASE
• Cost savings through all development phases. CASE tools put the effort reduction at between 30% and 40%.
• Use of CASE tools leads to considerable improvement in quality.
• CASE tools help to produce high quality and consistent documents.
• Use of a CASE environment has an impact on the style of working of a company, and makes it conscious of
a structured and orderly approach.

Fig: CASE Environment

Code generation and CASE tools


As far as code generation is concerned, the general expectation from a CASE tool is quite low. A reasonable
requirement is traceability from source file to design data. The more pragmatic support expected from a CASE
tool during the code generation phase is the following:
• The CASE tool should support generation of module skeletons or templates in one or more popular
languages. It should be possible to include copyright message, brief description of the module, author
name and the date of creation in some selectable format.
• The tool should generate records, structures and class definitions automatically from the contents of the data
dictionary in one or more popular languages.
• It should generate database tables for relational database management systems.
• The tool should generate code for user interface from prototype definition for X window and MS
window based applications.

2) Why is software maintenance required? Describe various categories of software maintenance. Which
category consumes maximum effort and why?

Answer:
Software maintenance is a very broad activity that includes error corrections, enhancements of capabilities,
deletion of obsolete capabilities, and optimization.
Need for Software Maintenance:
• Software maintenance is needed to correct errors, enhance features, port the software to new platforms,
etc.
• 40 to 70 percent of the cost of software is devoted to maintenance.
Categories of Maintenance
Corrective maintenance: This refers to modifications initiated by defects in the software.
Adaptive maintenance: This includes modifying the software to match changes in the ever-changing
environment.
Perfective maintenance: This means improving processing efficiency or performance, or restructuring the
software to improve changeability. It consumes the maximum effort, as it may include enhancement of existing
system functionality, improvement in computational efficiency, etc.
Preventive maintenance: There are long-term effects of corrective, adaptive and perfective changes. These lead
to an increase in the complexity of the software, which reflects deteriorating structure. Work is required to
maintain the software or to reduce this complexity, if possible. This work may be termed preventive maintenance.

Fig: Distribution of Maintenance Effort

Software Maintenance Models:


The IEEE standard organizes the maintenance process in seven phases, as demonstrated in the following figure.
Figure: IEEE-1219 Software Maintenance Models

In addition to identifying the phases and their order of execution, for each phase the standard indicates input and
output deliverables, the activities grouped, related and supporting processes, the control, and a set of metrics.

Problem/modification identification, classification, and prioritization. This is the phase in which the request
for change (MR – modification request) issued by a user, a customer, a programmer, or a manager is assigned a
maintenance category, a priority and a unique identifier. The phase also includes activities to determine whether
to accept or reject the request and to assign it to a batch of modifications scheduled for implementation.

Analysis. This phase devises a preliminary plan for design, implementation, test, and delivery. Analysis is
conducted at two levels: feasibility analysis and detailed analysis. Feasibility analysis identifies alternative
solutions and assesses their impacts and costs, whereas detailed analysis defines the requirements for the
modification, devises a test strategy, and develops an implementation plan.

Design: The modification to the system is actually designed in this phase. This entails using all current system
and project documentation, existing software and databases, and the output of the analysis phase. Activities
include the identification of affected software modules, the modification of software module documentation, the
creation of test cases for the new design,
and the identification of regression tests.

Implementation: This phase includes the activities of coding and unit testing, integration of the modified code,
integration and regression testing, risk analysis, and review. The phase also includes a test-readiness review to
assess preparedness for system and regression testing.

Regression/system testing: This is the phase in which the entire system is tested to ensure compliance to the
original requirements plus the modifications. In addition to functional and interface testing, the phase includes
regression testing to validate that no new faults have been added. Finally, this phase is responsible for verifying
preparedness for acceptance testing.

Acceptance testing: This level of testing is concerned with the fully integrated system and involves users,
customers, or a third party designated by the customer. Acceptance testing comprises functional tests,
interoperability tests, and regression tests.
Delivery: This is the phase in which the modified system is released for installation and operation. It includes
the activities of notifying the user community, performing installation and training, and preparing an archival
version for backup.

Maintenance Process:
Process implementation: This activity includes the tasks for developing plans and
procedures for software maintenance, creating procedures for receiving, recording, and tracking maintenance
requests, and establishing an organizational interface with the configuration management process. Process
implementation begins early in the system life cycle; Maintenance plans should be prepared in parallel with the
development plans. The activity entails the definition of the scope of maintenance and the identification and
analysis of alternatives, including offloading to a third party; it also comprises organizing and staffing the
maintenance team and assigning responsibilities and resources.

Problem and modification analysis: The first task of this activity is concerned with the analysis of the
maintenance request, either a problem report or a modification request, to classify it, to determine its scope in
term of size, costs, and time required, and to assess its criticality. It is recommended that the maintenance
organization replicates the problem or verifies the request. The other tasks concern the development and the
documentation of alternatives for change implementation and the approval of the selected option as specified in
the contract.
Modification implementation: This activity entails the identification of the items that need to be modified and
the invocation of the development process to actually implement the changes. Additional requirements of the
development process are concerned with testing procedures to ensure that the new/modified requirements are
completely and correctly implemented and the original unmodified requirements are not affected.
Maintenance review/acceptance: The tasks of this activity are devoted to assessing the integrity of the
modified system and end when the maintenance organization obtains the approval for the satisfactory
completion of the maintenance request. Several supporting processes may be invoked, including the quality
assurance process, the verification process, the validation process, and the joint review process.
Migration: This activity happens when software systems are moved from one environment to another. It is
required that migration plans be developed and the users/customers of the system be given visibility of them,
the reasons why the old environment is no longer supported, and a description of the new environment and its
date of availability. Other tasks are concerned with the parallel operations of the new and old environment and
the post-operation review to assess the impact of moving to the new environment.
Software retirement: The last maintenance activity consists of retiring a software system and requires the
development of a retirement plan and its notification to users.

3) What is Reverse engineering?


Reverse engineering, also called back engineering, is the process of
extracting knowledge or design information from anything man-made and reproducing it or reproducing
anything based on the extracted information. The process often involves disassembling something
(a mechanical device, electronic component, computer program, or biological, chemical, or organic matter) and
analyzing its components and workings in detail.
The reasons and goals for obtaining such information vary widely from everyday or socially beneficial actions,
to criminal actions, depending upon the situation. Often no intellectual property rights are breached, such as
when a person or business cannot recollect how something was done, or what something does, and needs to
reverse engineer it to work it out for themselves. Reverse engineering is also beneficial in crime prevention,
where suspected malware is reverse engineered to understand what it does, and how to detect and remove it, and
to allow computers and devices to work together ("interoperate") and to allow saved files on obsolete systems to
be used in newer systems. By contrast, reverse engineering can also be used to "crack" software and media to
remove their copy protection,[1]:5 or to create a (possibly improved) copy or even a knockoff; this is usually the
goal of a competitor.

4) What is testability?
We can define the testability of a system or component as:
• The ease with which a system or component can be tested.
• The extent to which testing gives us confidence in its correctness.

5) Justify the statement “maintenance is unavoidable in a software system”.

Software maintenance is a very broad activity that includes error corrections, enhancement of capabilities,
deletion of obsolete capabilities and optimization. Any work done to change the software after it is in operation
is considered to be maintenance work. The purpose is to preserve the value of the software over time. Since
requirements, environments and defects inevitably change or surface after delivery, maintenance is unavoidable
in a software system.

6) Describe the relevance of CASE tools in software engineering. In which phases of the SDLC can you take the
help of CASE tools? Name a few CASE tools used in the SDLC.

CASE stands for Computer Aided Software Engineering. It means, development and maintenance of software
projects with help of various automated software tools.
CASE Tools
CASE tools are sets of software application programs which are used to automate SDLC activities. CASE tools
are used by software project managers, analysts and engineers to develop software systems.
There are a number of CASE tools available to simplify various stages of the Software Development Life Cycle,
such as analysis tools, design tools, project management tools, database management tools and documentation
tools, to name a few.
Use of CASE tools accelerates the development of a project to produce the desired result and helps to uncover
flaws before moving ahead to the next stage of software development.
CASE tools can be broadly divided into the following parts based on their use at particular SDLC stages:
Central Repository - CASE tools require a central repository, which can serve as a source of common,
integrated and consistent information. Central repository is a central place of storage where product
specifications, requirement documents, related reports and diagrams, other useful information regarding
management is stored. Central repository also serves as data dictionary.
Upper CASE Tools - Upper CASE tools are used in the planning, analysis and design stages of the SDLC.
Lower CASE Tools - Lower CASE tools are used in the implementation, testing and maintenance stages.
Integrated CASE Tools - Integrated CASE tools are helpful in all the stages of the SDLC, from requirement
gathering to testing and documentation.
Examples of CASE tools include diagram tools, documentation tools, process modeling tools, analysis and
design tools, system software tools, project management tools, design tools, prototyping tools, configuration
management tools, programming tools, Web development tools, testing tools, maintenance tools, quality
assurance tools, database management tools and re-engineering tools.

Upper CASE tools support the analysis and design phase of a software system and include tools such as report
generators and analysis tools. Examples of lower CASE tools are code designers and program editors, and these
tools support the coding, testing and debugging phase. Integrated CASE tools support the analysis, design and
coding phase.
Examples of CASE Tools:
1) Software Requirement Tools
2) Software Design Tools
3) Software Construction tools

7) What are legacy systems? Why do they require re-engineering? Describe briefly the steps required for
re-engineering a software product?

In computing, a legacy system is an old method, technology, computer system, or application program, "of,
relating to, or being a previous or outdated computer system." Often a pejorative term, referencing a system as
"legacy" means that it paved the way for the standards that would follow it.
Re-engineering is the adjustment, alteration, or partial replacement of a product in order to change its function,
adapting it to meet a new need.
For instance, welding a dozer blade onto the frame of my Ford Fiesta car is an example of re-engineering, in
order to clear snow, or drive through my neighbor's kitchen.
Re-engineering is often used by companies to adapt generic products for a specific environment (e.g. add
suspension for a rally car, change the shape of a conveyor belt to fit a factory layout, alter the frequencies of a
radio transmitter to fit a new country's laws).
There are following 5 steps in Re-engineering-
1. Determining the Need for Change and Setting the Vision
The first, and perhaps most important, step is to get very clear on why the company needs to reengineer and
where you need to be in the future. Getting people to accept the idea that their work lives will undergo radical
change is no easy task. It requires a selling job that begins when you recognize that reengineering is essential to
the future success of the company and doesn’t wind down until your redesigned processes are in place.
2. Putting Together the Reengineering Team
Companies don’t reengineer: people do. The people you choose to lead your reengineering effort will ultimately
determine its success or failure. Hammer and Champy have identified five roles that, either distinctly or in
various combinations, are critical to implementing the reengineering process.
• The Leader: A senior executive who authorizes and motivates the overall reengineering effort.
• The Process Owner: A manager with responsibility for a specific process and the reengineering effort
focused on it.
• The Reengineering Team: A group of individuals dedicated to the reengineering of a particular process
who diagnose the existing process and oversee its redesign and implementation.
• The Steering Committee: A policy-making body of senior managers who develop the organization’s
overall reengineering strategy and monitor its progress.
• The Reengineering Czar: An individual responsible for developing reengineering techniques and tools within
the organization and for achieving synergy across its separate reengineering projects.
3. Identifying the Processes to Be Reengineered
You now have the vision, the compelling argument and the right people in place. Next comes the burning
question: what is going to get reengineered? No company can reengineer all its high-level processes at once. So
it’s important to choose the right process, or processes, to begin with.
4. Understanding the Process
Before you can successfully reengineer a process, you must thoroughly understand it. You need to know what it
does, how well or poorly it performs, and the critical issues that govern its performance. But the goal here is not
to analyze the process in intimate detail. Rather, you are looking for a high-level view that will provide team
members with the intuition and insight to create a totally new and superior design.
5. Redesigning the Process and Integrating the New Information Technology.
A downside to redesigning a work process is that it isn’t algorithmic or routine. There are no set procedures that
will mechanically produce a radical new design process. On the other hand, you don’t have to start with an
entirely blank slate.

8) Discuss the various categories of software development projects according to the COCOMO model.

Answer:
COCOMO is one of the most widely used software estimation models in the world. This model was
developed in 1981 by Barry Boehm to give an estimate of the number of man-months it will take to develop a
software product. COCOMO predicts the effort and schedule of a software product based on the size of the software.
COCOMO has three different models that reflect complexity:
• Basic Model
• Intermediate Model
• Detailed Model
Similarly, there are three classes of software projects.
Organic mode: In this mode, relatively simple, small software projects with a small team are handled. Such
teams should have good application experience and work to less rigid requirements.
Semi-detached mode: In this class, intermediate projects in which teams with mixed experience levels work
together are handled. Such projects may have a mix of rigid and less-than-rigid requirements.
Embedded mode: In this class, projects with tight hardware, software and operational constraints are handled.

1. Basic Model
The basic COCOMO model estimates the software development effort using only lines of code.
The equations in this model are:
E = a × (KLOC)^b
D = c × (E)^d
where E is the effort applied in person-months,
D is the development time in chronological months, and
KLOC is the estimated number of delivered kilo-lines of code for the project.
2. Intermediate Model
This is an extension of the basic COCOMO model.
This estimation model makes use of a set of “cost driver attributes” to compute the cost of the software.
I. Product attributes
a. required software reliability
b. size of application data base
c. complexity of the product
II. Hardware attributes
a. run-time performance constraints
b. memory constraints
c. volatility of the virtual machine environment
d. required turnaround time
III. Personnel attributes
a. analyst capability
b. software engineer capability
c. applications experience
d. virtual machine experience
e. programming language experience
IV. Project attributes
a. use of software tools
b. application of software engineering methods
c. required development schedule
Each of the 15 attributes is rated on a 6 point scale that ranges from "very low" to "extra high" (in importance or
value).
The intermediate COCOMO model takes the form:
E = a × (KLOC)^b × EAF
D = c × (E)^d
where E is the effort applied in person-months,
KLOC is the estimated number of delivered kilo-lines of code for the project, and
EAF is the effort adjustment factor (the product of the ratings of the 15 cost driver attributes).
3. Detailed COCOMO Model
The detailed model uses the same equations for estimation as the intermediate model,
but it can estimate the effort (E), duration (D), and personnel (P) for each of the development phases,
subsystems and modules.
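A minimal sketch of the basic model, using the commonly cited Boehm coefficients (a, b, c, d) for the three project classes; the 32 KLOC product is a hypothetical example:

# (a, b, c, d) coefficients for basic COCOMO
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b        # E, in person-months
    duration = c * effort ** d    # D, in chronological months
    return effort, duration

e, t = basic_cocomo(32, "organic")
print(f"effort = {e:.1f} person-months, duration = {t:.1f} months")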

9) What do you understand by the term CASE tools? Discuss the benefits of using CASE tools.

Computer-aided software engineering (CASE) is the domain of software tools used to design and implement
applications. CASE tools are similar to and were partly inspired by computer-aided design (CAD) tools used for
designing hardware products. CASE tools are used for developing high-quality, defect-free, and maintainable
software. CASE software is often associated with methods for the development of information systems
together with automated tools that can be used in the software development process.
1. Increased Speed
CASE tools provide automation and reduce the time needed to complete many tasks, especially those involving
diagramming and associated specifications. Estimates of improvements in productivity after application range
from 35% to more than 200%.
2. Increased Accuracy
CASE tools can provide ongoing debugging and error checking, which is vital for early defect removal,
something that has played a major role in shaping modern software.
3. Reduced Lifetime Maintenance
As a result of better design, better analysis, automatic code generation, and automatic testing and debugging,
overall system quality improves. There is better documentation as well. Thus, the net effort and cost involved
in maintenance is reduced (Brathwaite). Also, more resources can be devoted to new systems development.
4. Better Documentation
By using CASE tools, vast amounts of documentation are produced along the way. Most tools have provisions
for comments and notes on systems development and maintenance.
5. Programming in the Hands of Non-programmers
With the increased movement towards object-oriented technology and client-server bases, programming can
also be done by people who don't have a complete programming background.
6. Intangible Benefits
CASE tools can be used to allow for greater user participation, which can lead to better acceptance of the new
system. This can reduce the initial learning curve.

23) Write short notes on Software Risk Analysis.


Risk analysis in software testing is an approach to software testing where software risk is analyzed and
measured. Traditional software testing normally looks at relatively straightforward function testing (e.g. 2 + 2
= 4). A software risk analysis looks at code violations that present a threat to the stability, security, or
performance of the code.
Software risk is measured during testing by using code analyzers that can assess the code for both risks within
the code itself and between units that must interact inside the application. The greatest software risk presents
itself in these interactions. Complex applications using multiple frameworks and languages can present flaws
that are extremely difficult to find and tend to cause the largest software disruptions.
10) Describe any two of the following with examples
i) Software Risk Management
ii) Software Configuration Management activities
i) Software Risk Management
Risk is an expectation of loss, a potential problem that may or may not occur in the future. It is generally caused
by lack of information, control or time. A possibility of suffering a loss in the software development process
is called a software risk. The loss can be anything: an increase in production cost, development of poor quality
software, or not being able to complete the project on time. Software risk exists because the future is uncertain
and there are many known and unknown things that cannot be incorporated in the project plan. A software risk
can be of two types:
(a) internal risks that are within the control of the project manager, and
(b) external risks that are beyond the control of the project manager. Risk management is carried out to:
• Identify the risk
• Reduce the impact of risk
• Reduce the probability or likelihood of risk
• Risk monitoring
Risk Management comprises the following processes:
• Software Risk Identification
• Software Risk Analysis
• Software Risk Planning
• Software Risk Monitoring
Example: To ensure that risks remain at the forefront of project management activities, it’s best to keep the
risk management plan as simple as possible. For both conventional and agile software project management
methodologies, a risk register is a proven tool for organizing and referring to known project risks. A
comprehensive risk register would consist of the following attributes:
Description of risk — Summary description of the risk, easy to understand.
Recognition Date — Date on which stakeholders identify and acknowledge the risk.
Probability of occurrence — Estimate of the probability that this risk will materialize (%).
Severity — The intensity of the undesirable impact on the project if the risk materializes.
Owner — This person monitors the risk and takes action if necessary.
Action — The contingent response if the risk materializes.
Status — Current team view of the risk: potential, monitoring, occurring, or eliminated.
Loss Size — Given in hours or days, this is a measure of the negative impact on the project.
Risk Exposure — Given in hours or days, this is the product of probability and loss size.
Priority (optional) — This is either an independent ranking, or the product of probability and severity.
Typically, a higher-severity risk with high probability has higher relative priority.
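A minimal sketch of a risk register as a data structure, with risk exposure computed as the product of probability and loss size (the example risks are hypothetical):

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: float   # estimated chance of occurrence, 0.0 to 1.0
    loss_size: float     # negative impact in person-days if it materializes
    owner: str
    status: str = "potential"

    @property
    def exposure(self) -> float:
        # Risk exposure = probability x loss size
        return self.probability * self.loss_size

register = [
    Risk("Key developer may leave mid-project", 0.3, 40, "Project manager"),
    Risk("Third-party API may change", 0.1, 15, "Tech lead"),
]
for r in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"{r.exposure:5.1f}  {r.description}")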
ii) Software Configuration Management activities
In software engineering, software configuration management (SCM or S/W CM) is the task of tracking and
controlling changes in the software; it is part of the larger cross-disciplinary field of configuration
management. SCM practices include revision control and the establishment of baselines. If something goes
wrong, SCM can determine what was changed and who changed it. If a configuration is working well, SCM can
determine how to replicate it across many hosts.
SCM can be considered as having three major components:
• Software configuration identification
• Change control
• Status accounting and auditing
Configuration identification:
The first requirement for any change management is to have a clearly agreed-on basis for change. That is, when a
change is made, it should be clear what the change has been applied to. This requires baselines to be established.
A baseline change is a change to an established baseline, and it is controlled by SCM.
After a baseline change, the state of the software is defined by the most recent baseline and the changes that were
made to it. Some of the common baselines are the functional or requirements baseline, the design baseline, and
the product or system baseline. The functional or requirements baseline is generally the requirements document
that specifies the functional requirements for the software. The design baseline consists of the different
components in the software and their designs. The product or system baseline represents the developed system.
Change control:
Most of the decisions regarding a change are generally taken by the configuration control board (CCB), which
is a group of people responsible for configuration management, headed by the configuration manager. For
smaller projects, the CCB might consist of just one person. A change is initiated by a change request (CR).
The reason for a change can be anything; however, the most common reasons are requirement changes, changes
due to bugs, platform changes, and enhancement changes. The CR generally consists of three parts. The first
part describes the change, the reason for the change, the SCIs (software configuration items) that are affected,
the priority of the change, etc.
Status accounting and auditing:
For status accounting, the main source of information is the CRs and FRs (fault reports) themselves. Generally,
a field is added to the CR/FR that specifies its current status. The status could be active, complete, or not
scheduled. Information about dates and effort can also be added to the CR. The information from the CRs/FRs
can be used to prepare a summary, which can be used by the project manager and the CCB to track all the changes.
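A minimal sketch of status accounting over change requests, assuming the status values named above (active, complete, not scheduled) and hypothetical CR identifiers:

change_requests = [
    {"id": "CR-101", "status": "active",        "effort_days": 3},
    {"id": "CR-102", "status": "complete",      "effort_days": 1},
    {"id": "CR-103", "status": "not scheduled", "effort_days": 0},
]

def status_summary(crs):
    # The summary the project manager and CCB use to track all changes.
    summary = {}
    for cr in crs:
        summary.setdefault(cr["status"], []).append(cr["id"])
    return summary

print(status_summary(change_requests))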
