Software Engineering
Gorakhpur
(Question Bank)
CS (VIth Sem)
UNIT- III
Topics Covered:
Write short notes on Formal Technical review.
Answer: A formal technical review is a software quality assurance activity performed by software engineers
(and others). The objectives of the FTR are
(1) to uncover errors in function, logic, or implementation for any representation of the software;
(2) to verify that the software under review meets its requirements;
(3) to ensure that the software has been represented according to predefined standards;
(4) to achieve software that is developed in a uniform manner; and
(5) to make projects more manageable.
In addition, the FTR serves as a training ground, enabling junior engineers to observe different approaches to
software analysis, design, and implementation. The FTR also serves to promote backup and continuity because
a number of people become familiar with parts of the software that they may not have otherwise seen.
The FTR is actually a class of reviews that includes walkthroughs, inspections, round-robin reviews and other
small group technical assessments of software. Each FTR is conducted as a meeting and will be successful only
if it is properly planned, controlled, and attended. In the sections that follow, guidelines similar to those for a
walkthrough are presented as a representative formal technical review.
Steps in FTR
1. The review meeting
• Rather than attempting to review the entire design in one session, walkthroughs are conducted for individual
modules or for small groups of modules.
• The focus of the FTR is on a work product (a software component to be reviewed). The review meeting is
attended by the review leader, all reviewers, and the producer.
• The review leader is responsible for evaluating the product for readiness. Copies of the product
material are then distributed to the reviewers. The producer then organises a “walkthrough” of the product,
explaining the material, while the reviewers raise issues based on their advance preparation.
• One of the reviewers becomes the recorder, who records all the important issues raised during the review.
When errors are discovered, the recorder notes each one.
• At the end of the review, the attendees decide whether to accept the product or not, with or without
modification.
2. Review reporting and record keeping
• During the FTR, the recorder actively records all the issues that have been raised.
• At the end of the meeting, all the raised issues are consolidated and a review issue list is prepared.
• Finally, a formal technical review summary report is produced.
3. Review guidelines
• Guidelines for conducting a formal technical review must be established in advance. These
guidelines must be distributed to all reviewers, agreed upon, and then followed.
• For example, guidelines for a review may include the following:
1. Concentrate on the work product only; that is, review the product, not the producer.
2. Set an agenda for the review and maintain it.
3. When certain issues are raised, limit debate and argument. Reviews should not
ultimately result in hard feelings.
4. Find out problem areas, but don’t attempt to solve every problem noted.
5. Take written notes (for record purposes).
6. Limit the number of participants and insist upon advance preparation.
7. Develop a checklist for each product that is likely to be reviewed.
8. Allocate resources and a time schedule for FTRs in order to maintain the schedule.
9. Conduct meaningful training for all reviewers in order to make reviews effective.
10. Review earlier reviews, which serve as the base for the current review being conducted.
3) Consider a program that inputs two integers having values in the range (10, 250) and classifies them as
even or odd. For this program, generate test cases using boundary value analysis.
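One way to derive the boundary value test cases is to exercise the values at and just around each end of the valid range, plus a nominal value. The sketch below assumes the range is inclusive and that out-of-range inputs are rejected; the function name `classify` is chosen for illustration.

```python
def classify(x):
    """Classify an integer in the range [10, 250] as 'even' or 'odd'."""
    if not 10 <= x <= 250:
        raise ValueError("input out of range [10, 250]")
    return "even" if x % 2 == 0 else "odd"

# Boundary value analysis picks values at and just around each boundary,
# plus one nominal value; 9 and 251 are the invalid neighbours.
boundary_cases = [9, 10, 11, 130, 249, 250, 251]

for x in boundary_cases:
    try:
        print(x, "->", classify(x))
    except ValueError:
        print(x, "-> rejected (out of range)")
```

The same table of values would be prepared once per input variable when the program reads two integers.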
4) What is the difference between reengineering and reverse engineering? Explain the different steps of
reengineering.
To reverse engineer a product is to examine and probe it in order to reconstruct a plan from which it
could be built, and to understand the way it works. For instance, if I took my clock apart, measured all the gears, and
developed a plan for a clock, understanding how the gears meshed together, this would be reverse engineering.
Reverse engineering is often used by companies to copy and understand parts of a competitor’s product (which is
illegal), or to find out how their own products work when the original plans were lost, in order to
repair or alter them. Reverse engineering products is illegal under the laws of many countries; however, it does
happen. There have been celebrated cases of reverse engineering in the third world.
Re-engineering is the adjustment, alteration, or partial replacement of a product in order to change its
function, adapting it to meet a new need.
For instance, welding a dozer blade onto the frame of my Ford Fiesta, in order to clear snow or to drive
through my neighbor’s kitchen, is an example of re-engineering.
Re-engineering is often used by companies to adapt generic products for a specific environment (e.g. add
suspension for a rally car, change the shape of a conveyor belt to fit a factory layout, alter the frequencies of a radio
transmitter to comply with a new country’s laws).
There are the following five steps in re-engineering:
1. Determining the Need for Change and Setting the Vision
The first, and perhaps most important, step is to get very clear on why the company needs to reengineer and
where you need to be in the future. Getting people to accept the idea that their work lives will undergo radical
change is no easy task. It requires a selling job that begins when you recognize that reengineering is essential to
the future success of the company and doesn’t wind down until your redesigned processes are in place.
2. Putting Together the Reengineering Team
Companies don’t reengineer: people do. The people you choose to lead your reengineering effort will ultimately
determine its success or failure. Hammer and Champy have identified five roles that, either distinctly or in
various combinations, are critical to implementing the reengineering process.
• The Leader: A senior executive who authorizes and motivates the overall reengineering effort.
• The Process Owner: A manager with responsibility for a specific process and the reengineering effort
focused on it.
• The Reengineering Team: A group of individuals dedicated to the reengineering of a particular process
who diagnose the existing process and oversee its redesign and implementation.
• The Steering Committee: A policy-making body of senior managers who develop the organization’s
overall reengineering strategy and monitor its progress.
• The Reengineering Czar: An individual responsible for developing reengineering techniques and tools
within the company and for achieving synergy across the company’s separate reengineering projects.
3. Identifying the Processes to Be Reengineered
You now have the vision, the compelling argument, and the right people in place. Next comes the burning
question: what is going to get reengineered? No company can reengineer all its high-level processes at once, so
it’s important to choose the right process, or processes, to begin with.
4. Understanding the Process
Before you can successfully reengineer a process, you must thoroughly understand it. You need to know what it
does, how well or poorly it performs, and the critical issues that govern its performance. But the goal here is not
to analyze the process in intimate detail. Rather, you are looking for a high-level view that will provide team
members with the intuition and insight to create a totally new and superior design.
5. Redesigning the Process and Integrating the New Information Technology.
A downside to redesigning a work process is that it isn’t algorithmic or routine. There are no set procedures that
will mechanically produce a radical new process design. On the other hand, you don’t have to start with an
entirely blank slate.
5) Does a fault necessarily lead to a failure? Justify your answer with an example.
No, a fault need not always lead to a failure. We can make our system fault tolerant, so that it continues to work
even with some faults present, and those faults do not lead to failure.
Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of
(or one or more faults within) some of its components. If its operating quality decreases at all, the decrease is
proportional to the severity of the failure, as compared to a naively designed system in which even a small
failure can cause total breakdown. Fault tolerance is particularly sought after in high-availability or life-critical
systems. The ability to maintain functionality when portions of a system break down is referred to
as graceful degradation.
Example:- Hardware fault tolerance sometimes requires that broken parts be taken out and replaced with new
parts while the system is still operational (in computing known as hot swapping). Such a system implemented
with a single backup is known as single point tolerant, and represents the vast majority of fault-tolerant
systems. In such systems the mean time between failures should be long enough for the operators to have time
to fix the broken devices (mean time to repair) before the backup also fails. It helps if the time between failures
is as long as possible, but this is not specifically required in a fault-tolerant system.
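The idea of a fault being masked by redundancy can be sketched in a few lines. This is a minimal illustration, not any particular system's design; the sensor functions are invented for the example.

```python
def faulty_sensor():
    raise IOError("sensor fault")      # this component contains a fault

def backup_sensor():
    return 42.0                        # the redundant spare still works

def read_with_tolerance(sensors):
    """Try each sensor in turn; a fault becomes a failure only if all fault."""
    last_error = None
    for sensor in sensors:
        try:
            return sensor()
        except Exception as exc:
            last_error = exc           # fault masked; fall through to the spare
    raise RuntimeError("system failure: all components faulted") from last_error

print(read_with_tolerance([faulty_sensor, backup_sensor]))  # the fault is masked
```

Here the fault in `faulty_sensor` never becomes a system failure, because the backup provides the value; only if every replica faults does the fault propagate into a failure.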
6) What are the attributes of a good software test? Why is regression testing an important part of any
integration testing procedure?
Here is a list of qualities of a good software tester:
• Has a “test to break” attitude.
• An ability to take the point of view of the customer.
• A strong desire for quality.
• Should be tactful and diplomatic, and able to maintain a cooperative relationship with
developers – people skills.
• An ability to communicate with both technical and non-technical people is useful.
An understanding of the software development process is helpful. Without knowing development, testers will not be
able to understand the kinds of bugs that come into an application and the likeliest places to find them. A tester
who doesn't know programming will always be restricted to the use of ad-hoc techniques and the most
simplistic tools.
• There's never enough time to test "completely", so all software testing is a compromise between
available resources and thoroughness. The tester must optimize scarce resources and that means
focusing on where the bugs are likely to be. If you don't know programming, you're unlikely to have
useful intuition about where to look.
• A tester must have deep insight into how the users will exploit the program, i.e. domain knowledge.
Regression means retesting the unchanged parts of the application. Test cases are re-executed in order to
check whether the previous functionality of the application is working fine and the new changes have not introduced any
new bugs. This test can be performed on a new build when there is a significant change in the original functionality,
or even after a single bug fix.
This is a method of verification: verifying that the bugs are fixed and that the newly added features have not
created problems in the previously working version of the software.
Regression testing is an important part of any integration testing procedure because testers perform
functional testing when a new build is available for verification. The intent of this test is to verify the changes
made to the existing functionality and the newly added functionality. When this test is done, the tester should verify
that the existing functionality is working as expected and that the new changes have not introduced any defect into
functionality that was working before the change. Regression tests should be part of the release cycle and must
be considered in test estimation. Regression testing is usually performed after verification of changes or new
functionality, but this is not always the case. For a release taking months to complete, regression tests must be
incorporated into the daily test cycle. For weekly releases, regression tests can be performed when functional
testing is over for the changes.
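A regression suite in miniature might look like the following sketch. The functions are invented for illustration: one represents existing behaviour, the other a newly added feature; the suite re-checks both after every change.

```python
def total_marks(marks):
    """Existing functionality: total of subject marks."""
    return sum(marks)

def percentage(marks, max_per_subject=100):
    """Newly added functionality, built on top of total_marks()."""
    return 100.0 * total_marks(marks) / (max_per_subject * len(marks))

def run_regression_suite():
    # Old behaviour must still hold after the change...
    assert total_marks([70, 80, 90]) == 240
    # ...and the new behaviour must work too.
    assert percentage([70, 80, 90]) == 80.0
    return "regression suite passed"
```

Re-running `run_regression_suite()` on every build is exactly the "verify old functionality plus new changes" cycle described above.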
A coding standard is a rule about how you are to write your code so that it will be consistent, easily understood, and robust,
and so that it will be acceptable to the entity that is paying you to write the code.
The following are some representative coding standards :-
Rules for limiting the use of globals: These rules list what types of data can be declared global and what
cannot.
Contents of the headers preceding codes for different modules: The information contained in the headers of
different modules should be standard for an organization. The exact format in which the header information is
organized in the header can also be specified. The following are some standard header data:
• Name of the module.
• Date on which the module was created.
• Author’s name.
• Modification history.
• Synopsis of the module.
• Different functions supported, along with their input/output parameters.
• Global variables accessed/modified by the module.
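A module header carrying the standard data listed above might look like the following sketch. Every name, date, and function in it is hypothetical, invented only to show the layout.

```python
"""payroll_calc - hypothetical example of a standard module header.

Module name   : payroll_calc
Created on    : 2014-03-01
Author        : A. Sharma
Modification history:
    2014-04-10  fixed rounding in net_pay()
Synopsis      : Computes gross and net pay for an employee record.
Functions     : gross_pay(hours, rate) -> float
                net_pay(gross, tax_rate) -> float
Globals       : DEFAULT_TAX_RATE (read by net_pay)
"""

DEFAULT_TAX_RATE = 0.2  # global accessed (read-only) by net_pay()

def gross_pay(hours, rate):
    return hours * rate

def net_pay(gross, tax_rate=DEFAULT_TAX_RATE):
    return gross * (1.0 - tax_rate)
```

The exact layout of such a header would be fixed organization-wide, as the standard requires.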
• Validation is the process of evaluating the final product to check whether the software meets the
business needs. In simple words, the test execution which we do in our day-to-day life is actually the validation
activity, which includes smoke testing, functional testing, regression testing, system testing, etc.
• It evaluates the final product to check whether it meets the business needs.
8) Explain the decision table. Discuss the difference between decision table and decision tree?
Decision Table
A decision table is a good way to deal with different combinations of inputs and their associated outputs; it is also
called a cause-effect table. It gets this name from an associated logical diagramming technique called
cause-effect graphing, which is used to derive the decision table.
Decision table testing is a black-box test design technique used to determine test scenarios for complex
business logic. Equivalence partitioning and boundary value analysis can be applied only to
specific conditions or inputs. However, when we have different inputs that result in different actions being taken,
or a business rule under which different combinations of inputs result in
different actions, we use a decision table to test these kinds of rules or logic.
Example: Suppose a company offers different discount schemes, each subject to certain conditions; the
combinations of conditions and the resulting discounts can be captured in a decision table.
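As a hypothetical illustration (the membership and order-value rules below are invented for the example), such a discount decision table can be written directly in code: each rule is one combination of condition outcomes mapped to an action.

```python
# Decision table: conditions are (is_member, order_value >= 1000);
# the action is the discount percentage applied.
DECISION_TABLE = {
    # (member, large order): discount %
    (True,  True):  15,   # Rule 1
    (True,  False): 10,   # Rule 2
    (False, True):   5,   # Rule 3
    (False, False):  0,   # Rule 4
}

def discount(is_member, order_value):
    """Look up the action for this combination of condition outcomes."""
    return DECISION_TABLE[(is_member, order_value >= 1000)]
```

Decision table testing then means exercising every rule, i.e. every key of the table, at least once.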
Decision tree
A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible
consequences, including chance event outcomes, resource costs, and utility. It is one way to display
an algorithm.
Decision trees are commonly used in operations research, specifically in decision analysis, to help identify a
strategy most likely to reach a goal, but they are also a popular tool in machine learning.
The concept of Stubs and Drivers are mostly used in the case of component testing. Component testing may be
done in isolation with the rest of the system depending upon the context of the development cycle.
Stubs and drivers are used to replace the missing software and simulate the interface between the software
components in a simple manner.
Suppose you have a function (Function A) that calculates the total marks obtained by a student in a particular
academic year. Suppose this function derives its values from another function (Function B) which calculates the
marks obtained in a particular subject.
You have finished working on Function A and want to test it. But the problem you face here is that you can't
run Function A without input from Function B; Function B is still under development. In this case,
you create a dummy function to act in place of Function B to test your function. This dummy function gets
called by the function under test. Such a dummy is called a stub.
To understand what a driver is, suppose you have finished Function B and are waiting for Function A
to be developed. In this case you create a dummy to call Function B. This dummy is called the driver.
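The scenario above can be sketched in code. The marks and function names are invented; the point is only where the dummy sits relative to the code under test: the stub is called *by* it, the driver *calls* it.

```python
def subject_marks_stub(subject):
    """Stub for Function B: returns canned marks, no real logic yet."""
    return {"maths": 90, "physics": 80}.get(subject, 0)

def total_marks(subjects, get_marks=subject_marks_stub):
    """Function A under test: normally depends on Function B for its input."""
    return sum(get_marks(s) for s in subjects)

def driver():
    """Driver: a dummy caller used to exercise a finished lower-level
    function while its real caller is still under development."""
    return total_marks(["maths", "physics"])
```

With the stub in place, Function A can be tested before Function B exists; with the driver, Function B (here played by the stub) can be exercised before Function A exists.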
10) Which approach is better for testing: bottom-up or top-down?
The top-down approach is better for testing because bottom-up testing has one serious weakness: a lot
of intuition is needed to decide exactly what functionality a module should provide.
Top-down testing: In this approach, testing is conducted from the main module down to the submodules. If a submodule is
not yet developed, a temporary program called a stub is used to simulate the submodule.
Bottom-up testing: In this approach, testing is conducted from the submodules up to the main module. If the main module
is not yet developed, a temporary program called a driver is used to simulate the main module.
11) Design a black-box test suite for a program that computes the intersection point of two straight lines
and displays the result as “Parallel lines”/ “Intersecting lines”/ “Coincident lines”. It reads two integer
pairs (m1, c1) and (m2, c2) defining the two straight lines of the form y=mx + c. The lines are Parallel if
m1=m2, c1≠c2; Intersecting if m1≠m2; and Coincident if m1=m2, c1=c2.
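One possible answer sketch: implement the specification and cover each equivalence class of the spec with at least one test pair. The function name `classify_lines` is chosen for illustration; a fuller suite would also add boundary-value pairs for the integer inputs.

```python
def classify_lines(m1, c1, m2, c2):
    """Classify lines y = m1*x + c1 and y = m2*x + c2 per the spec."""
    if m1 != m2:
        return "Intersecting lines"
    if c1 == c2:
        return "Coincident lines"
    return "Parallel lines"

# One representative test per equivalence class of the specification.
suite = [
    ((2, 3, 2, 5), "Parallel lines"),      # m1 == m2, c1 != c2
    ((2, 3, 4, 3), "Intersecting lines"),  # m1 != m2
    ((2, 3, 2, 3), "Coincident lines"),    # m1 == m2, c1 == c2
]
for args, expected in suite:
    assert classify_lines(*args) == expected
```

Since the test design is black-box, the suite is derived entirely from the stated rules on (m1, c1) and (m2, c2), not from the implementation.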
Walkthroughs: The author presents their developed artifact to an audience of peers. Peers question and comment
on the artifact to identify as many defects as possible. A walkthrough involves no prior preparation by the audience
and usually involves minimal documentation of either the process or any arising issues; defect tracking in walkthroughs is
inconsistent. A walkthrough is thus an evaluation process in the form of an informal meeting which does not require
preparation.
13) Why are three different levels of testing (unit testing, integration testing, and system testing) necessary?
Discuss the main purpose of each of these levels of testing.
Software testing is a process of executing a program or application with the intent of finding software bugs.
It can also be stated as the process of validating and verifying that a software program, application, or product
meets the business and technical requirements that guided its design and development. The main levels
of testing are the following:
Unit Testing:-A level of the software testing process where individual units/components of a software/system
are tested. The purpose is to validate that each unit of the software performs as designed.
Integration testing:- A level of the software testing process where individual units are combined and tested as
a group. The purpose of this level of testing is to expose faults in the interaction between integrated units.
System testing:-A level of the software testing process where a complete, integrated system/software is tested.
The purpose of this test is to evaluate the system’s compliance with the specified requirements.
Acceptance testing:- A level of the software testing process where a system is tested for acceptability. The
purpose of this test is to evaluate the system’s compliance with the business requirements and assess whether it
is acceptable for delivery.
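The difference between the first two levels can be shown with a toy example. The two functions are invented for illustration: a unit test checks one of them in isolation, while an integration test checks that they cooperate correctly across their interface.

```python
def tokenize(text):
    """Unit 1: split text into words."""
    return text.split()

def word_count(text):
    """Unit 2: built on top of tokenize()."""
    return len(tokenize(text))

# Unit test: one component checked in isolation.
assert tokenize("a b c") == ["a", "b", "c"]

# Integration test: the two units working together across their interface.
assert word_count("hello brave world") == 3
```

A system test would then exercise the whole program through its external interface, and an acceptance test would do so against the business requirements.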
14) What categories of errors are traceable using black-box testing? Explain black-box testing in
detail.
Black-box testing treats the system as a “black box”, so it doesn’t explicitly use knowledge of the internal
structure or code. In other words, the test engineer need not know the internal working of the “black box” or
application. Black-box testing attempts to find errors in the following categories: incorrect or missing functions;
interface errors; errors in data structures or external database access; behavior or performance errors; and
initialization and termination errors.
The main focus in black-box testing is on the functionality of the system as a whole. The term ‘behavioral testing’ is
also used for black-box testing, and white-box testing is also sometimes called ‘structural testing’. Behavioral
test design is slightly different from black-box test design because the use of internal knowledge isn’t strictly
forbidden, but it’s still discouraged.
Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using
only black-box or only white-box testing. The majority of applications are tested by the black-box testing method. We need
to cover the majority of test cases so that most of the bugs will be discovered by black-box testing.
Black box testing occurs throughout the software development and Testing life cycle i.e in Unit, Integration,
System, Acceptance and regression testing stages.
15)Describe the white-box testing in detail. Discuss the cyclomatic complexity with suitable example.
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and
structural testing) is a method of testing software that tests internal structures or workings of an application, as
opposed to its functionality (i.e. black-box testing). In white-box testing an internal perspective of the system, as
well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the
code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing
(ICT). White-box testing can be applied at the unit, integration and system levels of the software testing
process. Although traditional testers tended to think of white-box testing as being done at the unit level, it is
used for integration and system testing more frequently today. It can test paths within a unit, paths between
units during integration, and between subsystems during a system–level test. Though this method of test design
can uncover many errors or problems, it has the potential to miss unimplemented parts of the specification or
missing requirements.
White-box test design techniques include the following code coverage criteria:
• Control flow testing
• Data flow testing
• Branch testing
• Statement coverage
• Decision coverage
• Modified condition/decision coverage
• Prime path testing
• Path testing
White-box testing is one of the two biggest testing methodologies used today. It has several major advantages:
• Knowledge of the source code is beneficial to thorough testing.
• It enables optimization of code by revealing hidden errors and allowing these possible defects to be removed.
• It gives the programmer introspection, because developers must carefully describe any new implementation.
• It provides traceability of tests from the source, allowing future changes to the software to be easily
captured in changes to the tests.
• White-box tests are easy to automate.
• White-box testing gives clear, engineering-based rules for when to stop testing.
Cyclomatic complexity is a source code complexity measurement that correlates with the number of coding
errors. It is calculated by developing a control flow graph of the code and measuring the number of linearly
independent paths through a program module.
Example: Consider the following fragment of a simple exchange sort (n is the number of array elements):
i = 0;
while (i < n-1) do
    j = i + 1;
    while (j < n) do
        if (A[i] < A[j]) then
            swap(A[i], A[j]);
        end if;
        j = j + 1;
    end do;
    i = i + 1;
end do;
The fragment has three predicate nodes (the two while tests and the if test), so
V(G) = P + 1 = 3 + 1 = 4;
computing V(G) = E - N + 2 from the edges and nodes of its control flow graph gives the same value, 4.
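The "predicate nodes plus one" rule can be sketched as a crude keyword count. This is only a heuristic for illustration, not a real control-flow-graph analysis; a proper tool builds the graph.

```python
def approx_cyclomatic(source):
    """Rough estimate: V(G) = decision points + 1.

    Counts 'if', 'while' and 'for' keywords in the text; a real tool
    would build the control flow graph instead (and this naive count
    would, for example, double-count 'elif').
    """
    decisions = sum(source.count(k) for k in ("if ", "while ", "for "))
    return decisions + 1

fragment = "while i < n-1: while j < n: if A[i] < A[j]: swap"
print(approx_cyclomatic(fragment))  # two whiles + one if -> V(G) = 4
```

For the sorting fragment above, the two while tests and one if test give V(G) = 4, matching the graph-based calculation.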
UNIT- IV
Topics Covered:
2) Why is software maintenance required? Describe various categories of software maintenance. Which
category consumes maximum effort and why?
Answer:
Software Maintenance is a very broad activity that includes error corrections, enhancements of capabilities,
deletion of obsolete capabilities, and optimization.
Need for Software Maintenance:
• Software maintenance is needed to correct errors, enhance features, port the software to new platforms,
etc.
• 40-70 percent of the cost of software is devoted to maintenance.
Categories of Maintenance
Corrective maintenance: This refers to modifications initiated by defects in the software.
Adaptive maintenance: It includes modifying the software to match changes in the ever-changing
environment.
Perfective maintenance: It means improving processing efficiency or performance, or restructuring the
software to improve changeability. It requires maximum effort as this may include enhancement of existing
system functionality, improvement in computational efficiency etc.
Preventive maintenance: Corrective, adaptive, and perfective changes have long-term effects: they lead
to an increase in the complexity of the software, which reflects a deteriorating structure. Work is required to
maintain the software or to reduce this complexity, if possible. This work may be termed preventive maintenance.
In addition to identifying the phases and their order of execution, for each phase the standard indicates the input and
output deliverables, the grouped activities, the related and supporting processes, the controls, and a set of metrics.
Problem/modification identification, classification, and prioritization. This is the phase in which the request
for change (MR – modification request) issued by a user, a customer, a programmer, or a manager is assigned a
maintenance category, a priority and a unique identifier. The phase also includes activities to determine whether
to accept or reject the request and to assign it to a batch of modifications scheduled for implementation.
Analysis. This phase devises a preliminary plan for design, implementation, test, and delivery. Analysis is
conducted at two levels: feasibility analysis and detailed analysis. Feasibility analysis identifies alternative
solutions and assesses their impacts and costs, whereas detailed analysis defines the requirements for the
modification, devises a test strategy, and develops an implementation plan.
Design: The modification to the system is actually designed in this phase. This entails using all current system
and project documentation, existing software and databases, and the output of the analysis phase. Activities
include the identification of affected software modules, the modification of software module documentation, the
creation of test cases for the new design,
and the identification of regression tests.
Implementation: This phase includes the activities of coding and unit testing, integration of the modified code,
integration and regression testing, risk analysis, and review. The phase also includes a test-readiness review to
assess preparedness for system and regression testing.
Regression/system testing: This is the phase in which the entire system is tested to ensure compliance to the
original requirements plus the modifications. In addition to functional and interface testing, the phase includes
regression testing to validate that no new faults have been added. Finally, this phase is responsible for verifying
preparedness for acceptance testing.
Acceptance testing: This level of testing is concerned with the fully integrated system and involves users,
customers, or a third party designated by the customer. Acceptance testing comprises functional tests,
interoperability tests, and regression tests.
Delivery: This is the phase in which the modified system is released for installation and operation. It includes
the activities of notifying the user community, performing installation and training, and preparing an archival
version for backup.
Maintenance Process:- Process implementation: This activity includes the tasks for developing plans and
procedures for software maintenance, creating procedures for receiving, recording, and tracking maintenance
requests, and establishing an organizational interface with the configuration management process. Process
implementation begins early in the system life cycle; Maintenance plans should be prepared in parallel with the
development plans. The activity entails the definition of the scope of maintenance and the identification and
analysis of alternatives, including offloading to a third party; it also comprises organizing and staffing the
maintenance team and assigning responsibilities and resources.
Problem and modification analysis: The first task of this activity is concerned with the analysis of the
maintenance request, either a problem report or a modification request, to classify it, to determine its scope in
terms of size, costs, and time required, and to assess its criticality. It is recommended that the maintenance
organization replicates the problem or verifies the request. The other tasks regard the development and the
documentation of alternatives for change implementation and the approval of the selected option as specified in
the contract.
Modification implementation: This activity entails the identification of the items that need to be modified and
the invocation of the development process to actually implement the changes. Additional requirements of the
development process are concerned with testing procedures to ensure that the new/modified requirements are
completely and correctly implemented and the original unmodified requirements are not affected.
Maintenance review/acceptance: The tasks of this activity are devoted to assessing the integrity of the
modified system and end when the maintenance organization obtains the approval for the satisfactory
completion of the maintenance request. Several supporting processes may be invoked, including the quality
assurance process, the verification process, the validation process, and the joint review process.
Migration: This activity happens when software systems are moved from one environment to another. It is
required that migration plans be developed and the users/customers of the system be given visibility of them,
the reasons why the old environment is no longer supported, and a description of the new environment and its
date of availability. Other tasks are concerned with the parallel operations of the new and old environment and
the post-operation review to assess the impact of moving to the new environment.
Software retirement: The last maintenance activity consists of retiring a software system and requires the
development of a retirement plan and its notification to users.
4) What is testability?
We can define the testability of a system or component as:
• the ease with which a system or component can be tested;
• the extent to which testing gives us confidence about its correctness.
Software maintenance is a very broad activity that includes error corrections, enhancement of capabilities,
deletion of obsolete capabilities, and optimization. Any work done to change the software after it is in operation
is considered to be maintenance work. The purpose is to preserve the value of the software over time. Hence,
maintenance is unavoidable in a software system.
6) Describe the relevance of CASE tools in software engineering? Which phase of SDLC you can take
help of CASE tools? Name few CASE tools used in SDLC?
CASE stands for Computer Aided Software Engineering. It means the development and maintenance of software
projects with the help of various automated software tools.
CASE Tools
CASE tools are sets of software application programs which are used to automate SDLC activities. CASE tools
are used by software project managers, analysts, and engineers to develop software systems.
There are a number of CASE tools available to simplify various stages of the Software Development Life Cycle:
analysis tools, design tools, project management tools, database management tools, and documentation tools,
to name a few.
The use of CASE tools accelerates the development of a project to produce the desired result and helps to uncover flaws
before moving ahead to the next stage of software development.
CASE tools can be broadly divided into the following parts, based on their use at a particular SDLC stage:
Central Repository - CASE tools require a central repository, which can serve as a source of common,
integrated and consistent information. Central repository is a central place of storage where product
specifications, requirement documents, related reports and diagrams, other useful information regarding
management is stored. Central repository also serves as data dictionary.
Upper CASE Tools - Upper CASE tools are used in the planning, analysis, and design stages of the SDLC.
Lower CASE Tools - Lower CASE tools are used in implementation, testing, and maintenance.
Integrated CASE Tools - Integrated CASE tools are helpful in all stages of the SDLC, from requirement
gathering to testing and documentation.
Examples of CASE tools include diagram tools, documentation tools, process modeling tools, analysis and
design tools, system software tools, project management tools, design tools, prototyping tools, configuration
management tools, programming tools, Web development tools, testing tools, maintenance tools, quality assurance
tools, database management tools, and re-engineering tools.
Upper CASE tools support the analysis and design phase of a software system and include tools such as report
generators and analysis tools. Examples of lower CASE tools are code designers and program editors, and these
tools support the coding, testing and debugging phase. Integrated CASE tools support the analysis, design and
coding phase.
Examples of CASE Tools:
1) Software Requirement Tools
2) Software Design Tools
3) Software Construction tools
7) What are legacy systems? Why do they require re-engineering? Briefly describe the steps required for
re-engineering a software product.
In computing, a legacy system is an old method, technology, computer system, or application program, "of,
relating to, or being a previous or outdated computer system." Often a pejorative term, referencing a system as
"legacy" means that it paved the way for the standards that would follow it.
Re-engineering is the adjustment, alteration, or partial replacement of a product in order to change its function,
adapting it to meet a new need.
For instance, welding a dozer blade onto the frame of a Ford Fiesta in order to clear snow, or to drive
through a neighbor's kitchen, is an example of re-engineering.
Re-engineering is often used by companies to adapt generic products for a specific environment (e.g. adding
suspension for a rally car, changing the shape of a conveyor belt to fit a factory layout, or altering the
frequencies of a radio transmitter to comply with a new country's laws).
Re-engineering involves the following five steps:
1. Determining the Need for Change and Setting the Vision
The first, and perhaps most important, step is to get very clear on why the company needs to reengineer and
where you need to be in the future. Getting people to accept the idea that their work lives will undergo radical
change is no easy task. It requires a selling job that begins when you recognize that reengineering is essential to
the future success of the company and doesn't wind down until your redesigned processes are in place.
2. Putting Together the Reengineering Team
Companies don’t reengineer: people do. The people you choose to lead your reengineering effort will ultimately
determine its success or failure. Hammer and Champy have identified five roles that, either distinctly or in
various combinations, are critical to implementing the reengineering process.
• The Leader: A senior executive who authorizes and motivates the overall reengineering effort.
• The Process Owner: A manager with responsibility for a specific process and the reengineering effort
focused on it.
• The Reengineering Team: A group of individuals dedicated to the reengineering of a particular process
who diagnose the existing process and oversee its redesign and implementation.
• The Steering Committee: A policy-making body of senior managers who develop the organization's
overall reengineering strategy and monitor its progress.
• The Reengineering Czar: An individual responsible for developing reengineering techniques and tools
within the company and for achieving synergy across the company's separate reengineering projects.
3. Identifying the Processes to Be Reengineered
You now have the vision, the compelling argument, and the right people in place. Next comes the burning
question: what is going to get reengineered? No company can reengineer all its high-level processes at once, so
it is important to choose the right process, or processes, to begin with.
4. Understanding the Process
Before you can successfully reengineer a process, you must thoroughly understand it. You need to know what it
does, how well or poorly it performs, and the critical issues that govern its performance. But the goal here is not
to analyze the process in intimate detail. Rather, you are looking for a high-level view that will provide team
members with the intuition and insight to create a totally new and superior design.
5. Redesigning the Process and Integrating the New Information Technology.
The difficulty with redesigning a work process is that it isn't algorithmic or routine. There are no set procedures
that will mechanically produce a radical new process design. On the other hand, you don't have to start with an
entirely blank slate.
8) Discuss the various categories of software development projects according to the COCOMO model.
Answer:
COCOMO is one of the most widely used software estimation models in the world. The model was
developed in 1981 by Barry Boehm to estimate the number of person-months it will take to develop a
software product. COCOMO predicts the effort and schedule of a software product based on the size of the software.
COCOMO has three different models that reflect complexity:
• Basic Model
• Intermediate Model
• Detailed Model
Similarly, there are three classes of software projects.
Organic mode In this mode, relatively simple, small software projects with a small team are handled. The
team should have good application experience, and the requirements are less than rigid.
Semi-detached projects In this class, intermediate projects handled by a team with mixed experience levels
are covered. Such projects may have a mix of rigid and less-than-rigid requirements.
Embedded projects In this class, projects with tight hardware, software, and operational constraints are handled.
1. Basic Model
The basic COCOMO model estimates the software development effort using only lines of code.
The equations in this model are:
E = a(KLOC)^b
D = c(E)^d
where E is the effort applied in person-months,
D is the development time in chronological months, and
KLOC is the estimated number of delivered lines of code (in thousands) for the project.
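The basic-model equations above can be sketched in Python. The coefficient values (a, b, c, d) used below are Boehm's published 1981 constants for the three project classes, included here for illustration:

```python
# Basic COCOMO: effort and duration from size alone.
# Boehm's 1981 coefficients per project class: (a, b, c, d)
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b        # E = a(KLOC)^b, in person-months
    duration = c * effort ** d    # D = c(E)^d, in chronological months
    return effort, duration

e, d = basic_cocomo(32, "organic")
print(f"Effort: {e:.1f} PM, Duration: {d:.1f} months")
# → Effort: 91.3 PM, Duration: 13.9 months
```

Note how the exponent b grows with project rigidity: an embedded project of the same size yields substantially more effort than an organic one.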
2. Intermediate Model
This is an extension of the basic COCOMO model.
This estimation model makes use of a set of "cost driver attributes" to compute the cost of the software.
I. Product attributes
a. required software reliability
b. size of application data base
c. complexity of the product
II. Hardware attributes
a. run-time performance constraints
b. memory constraints
c. volatility of the virtual machine environment
d. required turnaround time
III. Personnel attributes
a. analyst capability
b. software engineer capability
c. applications experience
d. virtual machine experience
e. programming language experience
IV. Project attributes
a. use of software tools
b. application of software engineering methods
c. required development schedule
Each of the 15 attributes is rated on a 6 point scale that ranges from "very low" to "extra high" (in importance or
value).
The intermediate COCOMO model takes the form
E = a(KLOC)^b * EAF
D = c(E)^d
where E is the effort applied in person-months,
KLOC is the estimated number of delivered lines of code (in thousands) for the project, and
EAF is the effort adjustment factor.
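As a sketch of the intermediate model, the function below folds the cost-driver ratings into a single EAF by multiplying them together. The two multiplier values in the example are illustrative assumptions, not the full official rating tables:

```python
# Intermediate COCOMO: basic effort scaled by an Effort Adjustment
# Factor (EAF), the product of the cost-driver multipliers.
def intermediate_cocomo(kloc, a, b, cost_drivers):
    eaf = 1.0
    for multiplier in cost_drivers.values():
        eaf *= multiplier
    return a * kloc ** b * eaf    # E = a(KLOC)^b * EAF

# Example: two drivers deviate from nominal (illustrative values).
drivers = {
    "required_reliability": 1.15,  # rated high: raises effort
    "analyst_capability":   0.86,  # rated high: lowers effort
}
effort = intermediate_cocomo(32, 3.0, 1.12, drivers)
print(f"Adjusted effort: {effort:.1f} person-months")
```

A driver rated nominal contributes a multiplier of 1.0, so a project with all-nominal ratings reduces to the unadjusted estimate.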
3. Detailed COCOMO Model
The detailed model uses the same equations for estimation as the intermediate model,
but it can estimate the effort (E), duration (D), and personnel (P) for each development phase,
subsystem, and module.
9) What do you understand by the term CASE tools? Discuss the benefits of using CASE tools.
Computer-aided software engineering (CASE) is the domain of software tools used to design and implement
applications. CASE tools are similar to and were partly inspired by computer-aided design (CAD) tools used for
designing hardware products. CASE tools are used for developing high-quality, defect-free, and maintainable
software. CASE software is often associated with methods for the development of information systems
together with automated tools that can be used in the software development process.
1. Increased Speed.
CASE tools provide automation and reduce the time needed to complete many tasks, especially those involving
diagramming and the associated specifications. Estimates of productivity improvements after adoption range
from 35% to more than 200%.
2. Increased Accuracy.
CASE tools can provide ongoing debugging and error checking, which is vital for early defect removal.
3. Reduced Lifetime Maintenance
As a result of better design and analysis, automatic code generation, and automatic testing and debugging,
overall system quality improves. There is also better documentation. Thus, the net effort and cost involved
in maintenance are reduced (Brathwaite). Also, more resources can be devoted to new systems development.
4. Better Documentation
By using CASE tools, vast amounts of documentation are produced along the way. Most tools have provisions
for comments and notes on systems development and maintenance.
5. Programming in the hands of non programmers
With the increased movement towards object-oriented technology and client-server bases, programming can
also be done by people who don't have a complete programming background.
6. Intangible Benefits
CASE tools can be used to allow for greater user participation, which can lead to better acceptance of the new
system and can reduce the initial learning curve.