Software Engineering
What is Software Engineering?
• Software Engineering is the process of designing, developing, testing,
and maintaining software. It is a systematic and disciplined approach to
software development that aims to create high-quality, reliable, and
maintainable software.
• Software engineering includes a variety of techniques, tools, and
methodologies, including requirements analysis, design, testing, and
maintenance.
• The main goal of Software Engineering is to develop software applications that improve quality while staying within budget and schedule.
Key Principles of Software Engineering
Modularity: Breaking the software into smaller, reusable components that
can be developed and tested independently.
Abstraction: Hiding the implementation details of a component and
exposing only the necessary functionality to other parts of the software.
Encapsulation: Wrapping up the data and functions of an object into a
single unit, and protecting the internal state of an object from external
modifications.
Reusability: Creating components that can be used in multiple projects,
which can save time and resources.
Maintenance: Regularly updating and improving the software to fix bugs,
add new features, and address security vulnerabilities.
Testing: Verifying that the software meets its requirements and is free of
bugs.
Design Patterns: Solving recurring problems in software design by
providing templates for solving them.
Agile methodologies: Using iterative and incremental development
processes that focus on customer satisfaction, rapid delivery, and flexibility.
Continuous Integration & Deployment: Continuously integrating the
code changes and deploying them into the production environment.
Main Attributes of Software Engineering
Software Engineering is a systematic, disciplined, quantifiable study and
approach to the design, development, operation, and maintenance of a
software system. There are four main Attributes of Software Engineering.
Efficiency: It provides a measure of the resource requirement of a software
product in an efficient way.
Reliability: It provides the assurance that the product will deliver the same results when used in similar working environments.
Reusability: This attribute makes sure that the module can be used in
multiple applications.
Maintainability: It is the ability of the software to be modified, repaired, or
enhanced easily with changing requirements.
Dual Role of Software
There is a dual role of software in the industry. The first one is as a product and
the other one is as a vehicle for delivering the product.
As a Product
• It delivers computing potential across networks of hardware.
• It enables the hardware to deliver the expected functionality.
• It acts as an information transformer because it produces, manages, acquires, modifies, displays, or transmits information.
As a Vehicle for Delivering a Product
• It provides system functionality (e.g., payroll system).
• It controls other software (e.g., an operating system).
• It helps build other software (e.g., software tools).
Program
A program is a set of instructions that are given to a computer in order to achieve a specific task.
A program is one of the stages involved in the development of the software.
Software Product
A software product is a program that is made available for commercial use and is properly documented along with its licensing.
Software Product = Program + Documentation + Licensing.
Software Development usually follows a life cycle, which involves the feasibility
study of the project, requirement gathering, development of a prototype,
system design, coding, and testing.
Advantages of Software Engineering
Improved Quality
Increased Productivity
Better Maintainability
Reduced Costs
Increased Customer Satisfaction
Better Team Collaboration
Better Security
Disadvantages of Software Engineering
High upfront costs
Limited flexibility
Complexity
High learning curve
High dependence on tools
High maintenance
What is Software Re-Engineering?
Software Re-Engineering is the process of restructuring and updating existing software in order to maintain and improve the quality of the system.
Software Reengineering is preferable for software products having high failure
rates, poor design, and/or poor code structure.
Software Reverse Engineering is the process of analyzing software with the
objective of recovering its design and requirement specification.
A software configuration management tool helps in maintaining different
versions of the configurable items.
Software Development Life Cycle (SDLC)
SDLC, or Software Development Life Cycle, is a systematic process for
planning, creating, testing, deploying, and maintaining software. It provides a
framework for developers to produce high-quality software that meets user
expectations and project requirements.
Planning -> Defining/analysis -> Designing -> Coding/implementation ->
Testing -> Deployment/Maintenance
The goal of the SDLC life cycle model is to deliver high-quality, maintainable
software that meets the user’s requirements.
SDLC Models | Software Development Models
SDLC Models or Software Development Life Cycle (SDLC) models are
frameworks that guide the development process of software applications from
initiation to deployment. Various SDLC models in software engineering exist,
each with its approach to the phases of development.
Some important models:
Waterfall Model
Spiral Model
Prototype Model
RAD Model
V-Model
Incremental Model
Agile Model
Iterative Model
Waterfall Model:
The Waterfall model is one of the oldest and most straightforward approaches to software development.
The Waterfall model follows a linear and sequential approach to software development. Each phase in the development process must be completed before moving on to the next one, resembling the downward flow of a waterfall.
The model is highly structured, making it easy to understand and use.
Phases of the Waterfall Model:
Requirement analysis -> Software design -> Implementation -> Testing ->
Deployment -> Maintenance
Requirement analysis:- In this initial phase, the project team works with
stakeholders to gather and document the software requirements. This phase
defines what the software is expected to do and sets the foundation for the
entire project.
Software Design:- During this phase, the software architecture and system
design are developed based on the gathered requirements. It includes creating
detailed specifications for the software's components and how they will interact
with each other.
Implementation:- In this phase, the actual coding and programming of the
software take place. Developers write the code based on the design
specification and build the software.
Testing:- Once the coding is completed, the software is thoroughly tested to
identify and rectify defects and issues. Testing may include unit testing,
integration testing, system testing, and user acceptance testing.
Deployment:- After successful testing, the software is deployed and released to
the end-users or customers. This phase involves installation, training and
providing support for users.
Maintenance:- In the final phase, ongoing maintenance and support are
provided for the software. Any issues, bugs, or necessary updates are
addressed in this phase
Advantages:-
• Simplicity
• clear documentation
• stable requirements
• The start and end points for each phase are fixed, which makes it easy to track progress.
• The release date for the complete product, as well as its final cost, can be
determined before development.
Disadvantages:-
• Rigidity: The model is highly inflexible once a phase is completed, making it challenging to accommodate changes.
• Late Testing: Testing is performed after the implementation phase,
which means that defects might not be discovered until late in the
process.
• Limited Client Involvement: Clients are involved mainly in the
initial phase, and significant changes cannot be easily accommodated
later in the development process.
• No Prototyping: The model lacks the provision for creating
prototypes, which could be a disadvantage in projects where user
feedback is crucial.
When to use Waterfall Model?
• When the requirements are constant and not changed regularly.
• When the tools and technology used are consistent and not changing.
• Generally not suitable for real-world projects whose requirements evolve.
• Development Approach:- Sequential and Linear
• Phases:-Requirements, Design, Implementation, Testing, Deployment
• Limited User Involvement until the Testing Phase
RAD (Rapid Application Development) Model:-
If the requirements are well understood and described, and the project scope is constrained, the RAD process enables a development team to create a fully functional system within a very short time period.
RAD is designed to address the need for quicker software development and the
ability to adapt to changing requirements. It is often used for projects where time-to-
market and flexibility are critical.
Some key features of RAD:
• Iterative and prototyping approach
• Close customer involvement
• Rapid development
• Reusable components
• Emphasis on software quality
• Dynamic requirements
Prototype Model
This model is used when the customers do not know the exact project
requirements beforehand. In this model, a prototype of the end product is first
developed, tested, and refined as per customer feedback repeatedly till a final
acceptable prototype is achieved which forms the basis for developing the final
product.
Phases of Prototype Model:
Requirements gathering -> Quick design -> Build prototype -> Customer evaluation -> Refine prototype -> Engineer final product
Advantages of Prototype Model
• Reduces the risk of incorrect user requirements.
• Good where requirements are changing or uncommitted.
• Supports early product marketing.
• Reduces maintenance cost.
• Errors can be detected much earlier, as the prototype is built and evaluated side by side.
When to use Prototype Model?
• When the customer is not clear about the idea.
• When user requirements are not clear.
• For throwaway prototyping.
• Good for managing technical and requirements risks.
• Note that it increases the cost of development.
• High user involvement.
• Reusability.
Spiral Model
The spiral model is a systems development lifecycle (SDLC) method used for
risk management that combines the iterative development process model with
elements of the Waterfall model. The spiral model is used by software
engineers and is favored for large, expensive and complicated projects.
Phases:-
Planning, Risk Analysis, Engineering, Testing (cyclical)
Advantages
• High amount of risk analysis
• Useful for large and mission-critical projects.
Disadvantages
• Can be a costly model to use.
• Risk analysis requires highly specific expertise.
• Doesn't work well for smaller projects.
When to use Spiral Model?
• When frequent releases are required.
• When the project is large.
• When requirements are unclear and complex.
• When changes may be required at any time.
• Large and high-budget projects.
• Continuous risk assessment, proactive mitigation.
• Flexibility: High
V-Model
The V-Model is also referred to as the Verification and Validation Model. In it, each phase of the SDLC must be completed before the next phase starts. It follows a sequential design process, like the waterfall model, but testing is planned in parallel with the corresponding stage of development.
It is used to emphasize the relationship between each phase of development and its corresponding test phase. The V-Model is often represented in the shape of a "V," which illustrates the parallel corresponding phases of development and testing. It is particularly useful for projects where a high degree of reliability and predictability is required.
The key characteristic of the V-Model is its systematic approach to verification and validation, which ensures that the software is thoroughly tested and validated at each stage of development.
Main components and phases of the V-Model:
Requirements Phase:- In this phase, the system and software requirements
are gathered and documented. These requirements serve as the foundation for
the entire development process.
System Design:-
Unit Testing:-
Integration Testing:-
System Testing:-
Acceptance Testing:-
Incremental Model
The Incremental Model is a process of software development where requirements are divided into multiple standalone modules of the software development cycle. In this model, each module goes through the requirements, design, implementation and testing phases. Every subsequent release of the module adds function to the previous release. The process continues until the complete system is achieved.
Benefits of the Incremental Model:
• When the requirements are clearly understood up front.
• When a project has a lengthy development schedule.
• When the customer demands a quick release of the product.
• Prioritized requirements can be developed first.
• Errors are easy to recognize.
• Easier to test and debug.
• More flexible.
• Simple to manage risk, because it is handled during each iteration.
• The client gets important functionality early.
Agile Model
Agile methods break tasks into smaller iterations and do not directly involve long-term planning.
The project scope and requirements are laid down at the beginning of the development process.
Plans regarding the number of iterations, and the duration and scope of each iteration, are clearly defined in advance.
Phases of Agile Model:
• Requirements gathering
• Design the requirements
• Construction/ iteration
• Testing/ Quality assurance
• Deployment
• Feedback
Highlights of the Agile Model
• When a highly qualified and experienced team is available.
• When the customer is ready to meet with the software team at any time.
• When the project size is small.
• Frequent delivery.
• Face-to-face communication with clients.
• Efficient design that fulfils the business requirements.
• Changes are acceptable at any time.
• It reduces total development time.
What is the Waterfall model, and when is it used?
The Waterfall model is a linear and sequential approach to software development, where each phase must be completed before moving to the next. It is used when project requirements are well-defined and unlikely to change.
When is the Iterative model appropriate?
The Iterative model is appropriate when a project requires flexibility, and requirements may evolve. It involves repeating cycles of development, testing, and feedback until the software meets the desired level of quality.
What is the Agile model, and why is it popular?
Agile is an iterative and incremental approach to software development that prioritizes flexibility, collaboration, and customer satisfaction. It is popular for its adaptability to changing requirements, frequent releases, and continuous customer involvement.
When is the Spiral model used?
The Spiral model is used when a project has high uncertainty and complexity. It incorporates risk analysis and management into the development process and allows for iterative enhancements based on feedback.
What is the V-Model, and how does it differ from Waterfall?
The V-Model is an extension of the Waterfall model, where each development phase has a corresponding testing phase. It emphasizes verification and validation activities in parallel, resulting in a V-shaped development and testing process.
When is the Incremental model suitable?
The Incremental model is suitable when a project can be divided into increments, each providing a portion of the functionality. It allows for partial implementation and testing, leading to faster time-to-market.
Can I use multiple SDLC models in a single project?
Yes, organizations often adopt hybrid approaches, combining elements from different SDLC models based on project requirements. This flexibility allows for a tailored development process.
How do I choose the right SDLC model for my project?
The choice of an SDLC model depends on factors such as project size, complexity, requirements stability, and team dynamics. Consider the unique characteristics of your project and select a model that aligns with its specific needs.
Software Metrics
A software metric is a measure of software characteristics which are
measurable or countable. Software metrics are valuable for many reasons,
including measuring software performance, planning work items, measuring
productivity, and many other uses.
Within the software development process, there are many metrics that are all connected. Software metrics are similar to the four functions of management: Planning, Organization, Control, and Improvement.
Classification of Software Metrics
Software metrics can be classified into two types as follows:
1. Product Metrics: These are the measures of various characteristics of the software product. The two important software characteristics are:
Size and complexity of software.
Quality and reliability of software.
These metrics can be computed for different stages of SDLC.
2. Process Metrics: These are the measures of various characteristics of the
software development process. For example, the efficiency of fault detection.
They are used to measure the characteristics of methods, techniques, and tools
that are used for developing software.
Types of Metrics
Internal metrics: Internal metrics are the metrics used for measuring
properties that are viewed to be of greater importance to a software developer.
For example, Lines of Code (LOC) measure.
External metrics: External metrics are the metrics used for measuring
properties that are viewed to be of greater importance to the user, e.g.,
portability, reliability, functionality, usability, etc.
Hybrid metrics: Hybrid metrics are the metrics that combine product,
process, and resource metrics. For example, cost per FP where FP stands for
Function Point Metric.
Project metrics: Project metrics are the metrics used by the project manager
to check the project's progress. Data from the past projects are used to collect
various metrics, like time and cost.
Size Oriented Metrics
LOC Metrics
It is one of the earliest and simplest metrics for calculating the size of a computer program. It is generally used in calculating and comparing the productivity of programmers. These metrics are derived by normalizing quality and productivity measures with respect to the size of the product.
Following are the points regarding LOC measures:
• In size-oriented metrics, LOC is considered to be the normalization value.
• It is an older method that was developed when FORTRAN and COBOL programming were very popular.
• Productivity is defined as KLOC / EFFORT, where effort is measured in person-months.
• Size-oriented metrics depend on the programming language used.
• Because productivity depends on KLOC, assembly-language code appears to have higher productivity.
• The LOC measure requires a level of detail which may not be practically achievable.
• The more expressive the programming language, the lower the apparent productivity.
• The LOC method of measurement does not apply to projects that deal with visual (GUI-based) programming, since GUIs are built largely from forms rather than hand-written lines of code; the LOC metric is not applicable there.
• It requires that all organizations use the same method for counting LOC. Some organizations count only executable statements, some include comments, and some do not; thus, a standard needs to be established.
• These metrics are not universally accepted.
Advantages of LOC
• Simple to measure
Disadvantage of LOC
• It is defined only on code; it cannot, for example, measure the size of the specification.
• It characterizes only one specific view of size, namely length; it takes no account of functionality or complexity.
• Bad software design may cause an excessive number of lines of code.
• It is language dependent.
• Users cannot easily understand it.
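As a small illustration of the size-oriented measures above, the Python sketch below computes productivity (KLOC per person-month) and cost per KLOC; all project figures are assumed for the example:

    # Size-oriented metrics from assumed (illustrative) project figures.
    kloc = 33.2        # program size in thousands of lines of code (assumed)
    effort_pm = 24.0   # effort in person-months (assumed)
    cost = 150_000.0   # total project cost in currency units (assumed)

    productivity = kloc / effort_pm   # KLOC per person-month
    cost_per_kloc = cost / kloc       # cost per thousand lines of code

    print(f"Productivity: {productivity:.2f} KLOC/person-month")
    print(f"Cost per KLOC: {cost_per_kloc:.2f}")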
Halstead’s Software Metrics
Halstead’s Software metrics are a set of measures proposed by Maurice
Halstead to evaluate the complexity of a software program. These metrics are
based on the number of distinct operators and operands in the program and
are used to estimate the effort required to develop and maintain the program.
Field of Halstead Metrics
• Program length (N): This is the total number of operator and
operand occurrences in the program.
• Vocabulary size (n): This is the total number of distinct operators
and operands in the program.
• Program volume (V): This is the product of program length (N) and
the logarithm of vocabulary size (n), i.e., V = N*log2(n).
• Program level (L): This is the inverse of program difficulty, L = 1/D. It indicates how concisely the program expresses its algorithm.
• Program difficulty (D): This is computed from the operator and operand counts as D = (n1/2) * (N2/n2), where n1 is the number of distinct operators, n2 is the number of distinct operands, and N2 is the total number of operand occurrences.
• Program effort (E): This is the product of program volume (V) and program difficulty (D), i.e., E = V*D.
• Time to implement (T): This is the estimated time required to implement the program, T = E/S, where S is Stroud's number (commonly taken as 18).
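The following minimal Python sketch applies the formulas above; the operator/operand counts (n1, n2, N1, N2) are assumed example values, not measurements of a real program:

    import math

    # Assumed counts for an illustrative program:
    n1, n2 = 10, 7    # distinct operators, distinct operands
    N1, N2 = 30, 20   # total operator occurrences, total operand occurrences

    N = N1 + N2               # program length
    n = n1 + n2               # vocabulary size
    V = N * math.log2(n)      # program volume
    D = (n1 / 2) * (N2 / n2)  # program difficulty
    L = 1 / D                 # program level (inverse of difficulty)
    E = V * D                 # program effort
    T = E / 18                # time in seconds, taking Stroud's number S = 18

    print(f"N={N}, n={n}, V={V:.1f}, D={D:.2f}, L={L:.3f}, E={E:.1f}, T={T:.1f}s")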
Advantages of Halstead Metrics
• It is simple to calculate.
• It measures the overall quality of the programs.
• It predicts the rate of error.
• It predicts maintenance effort.
• It does not require a full analysis of the programming structure.
• It is useful in scheduling and reporting projects.
• It can be used for any programming language.
• Easy to use: The metrics are simple and easy to understand and can
be calculated quickly using automated tools.
• Quantitative measure: The metrics provide a quantitative measure of
the complexity and effort required to develop and maintain a software
program, which can be useful for project planning and estimation.
• Language independent: The metrics can be used for different
programming languages and development environments.
• Standardization: The metrics provide a standardized way to compare
and evaluate different software programs.
Disadvantages of Halstead Metrics
• It depends on the complete code.
• It has no use as a predictive estimating model.
• Limited scope: The metrics focus only on the complexity and effort required to
develop and maintain a software program, and do not take into account other
important factors such as reliability, maintainability, and usability.
• Limited applicability: The metrics may not be applicable to all types of
software programs, such as those with a high degree of interactivity or real-
time requirements.
• Limited accuracy: The metrics are based on a number of assumptions and
simplifications, which may limit their accuracy in certain situations.
Function Point Analysis (FPA):
Function Point Analysis was initially developed by Allan J. Albrecht in 1979 at IBM and has been further modified by the International Function Point Users Group (IFPUG). Allan J. Albrecht gave the initial definition.
In this method, the number and type of functions supported by the software are utilized to find the FPC (function point count). The steps in function point analysis are:
• Count the number of functions of each proposed type.
• Compute the Unadjusted Function Points (UFP).
• Find the Total Degree of Influence (TDI).
• Compute the Value Adjustment Factor (VAF).
• Find the Function Point Count (FPC).
Advantages:
• It can be easily used in the early stages of project planning.
• It is independent of the programming language.
• It can be used to compare different projects even if they use different
technologies(database, language, etc).
Disadvantages:
• It is not good for real-time systems and embedded systems.
• Many cost estimation models like COCOMO use LOC and hence FPC
must be converted to LOC.
What do you mean by Function Point?
A function point basically determines the size of the application system on the basis of the functionality of the system.
How do you find the Function Point?
The function point is calculated from the total (unadjusted) count and the value adjustment factor, using the formula FP = TC * [0.65 + 0.01 * Σ(Xi)], where TC is the total count and the Xi are the ratings of the 14 general system characteristics.
List the five components of the Function Point.
The five components of the function point are listed below:
• Internal Logical Files (ILF)
• External Interface Files (EIF)
• External Inputs (EI)
• External Outputs (EO)
• External Enquiries (EQ)
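As a worked sketch of the formula above, the Python fragment below computes UFP from the five component types using their standard average complexity weights; the component counts and the 14 general-system-characteristic ratings are assumed example values:

    # Function point count with standard average weights and assumed counts.
    avg_weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
    counts      = {"EI": 24, "EO": 46, "EQ": 8, "ILF": 4, "EIF": 2}  # assumed

    ufp = sum(counts[k] * avg_weights[k] for k in counts)  # unadjusted FP

    gsc = [3, 2, 4, 3, 5, 1, 0, 2, 3, 4, 2, 3, 1, 2]  # 14 ratings, 0-5 (assumed)
    tdi = sum(gsc)                  # total degree of influence
    vaf = 0.65 + 0.01 * tdi         # value adjustment factor
    fpc = ufp * vaf                 # adjusted function point count

    print(f"UFP={ufp}, TDI={tdi}, VAF={vaf:.2f}, FPC={fpc:.1f}")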
Differentiate between FP and LOC
• FP is specification based; LOC is analogy based.
• FP is language independent; LOC is language dependent.
• FP is user-oriented; LOC is design-oriented.
• FP is extendible to LOC; LOC is convertible to FP (backfiring).
Software Project Management (SPM)
SPM is an art and science of planning and leading software projects.
Main goal is to enable a group of developers to work effectively towards
successful completion of project.
Project manager is an administrative leader of the team.
Various factors make this job very complex (e.g., changeability, complexity, uniqueness, possibility of multiple solutions, etc.).
Job responsibilities of project manager:-
• Planning
• Organizing
• Staffing
• Directing
• Monitoring
• Controlling
• Innovating
• Representing
Skills required for project manager
• Managerial Skills
• Technical Skills
• Problem Solving Skills
• Coping Skills
• Conceptual Skills
• Leadership Skills
• Communication Skills
Project Planning:-
• Estimation(Cost, Duration, Effort)
• Staffing(Staff organization, Staff plans)
• Scheduling manpower & other resources
• Risk Management
• Miscellaneous Plans (quality assurance plan, configuration & installation plans)
Estimation:-
Estimation of various project parameters is an important project planning activity. The different parameters of a project that need to be estimated include:
• Project size
• Effort required to complete the project
• Project duration
• Cost
Cost Estimation
Estimation Techniques can be classified as:-
• Empirical (based on past experience),
• Heuristic (using educated guesses)
• Analytical (using mathematical models like COCOMO).
Empirical Estimation Techniques:
• Empirical estimation techniques are based on making an educated guess
of the project parameters and common sense.
• This technique is based on prior experience of development of similar
products and projects.
• An educated guess based on past experience.
• Two popular empirical estimation techniques are:
Expert Judgment Technique
Delphi Cost Estimation
• Expert Judgment Technique -
• In this an expert makes an educated guess of the problem size after
analyzing the problem thoroughly.
• The expert estimates the cost of the different components of the system:
e.g. GUI, database module, communication module, billing module, etc.
• Combines them to arrive at the overall estimate.
• Delphi Cost Estimation -
• It is carried out by a team comprising a group of experts and a coordinator.
• The coordinator provides each estimator with a copy of the SRS document and a form for recording his cost estimate.
• Estimators complete their individual estimates anonymously and submit them to the coordinator.
Heuristic Techniques:
• In heuristic techniques, the relationships that exist among the different project parameters are modeled using suitable mathematical expressions.
• Once the independent parameters are known, the dependent parameters can be easily determined by substituting the values of the independent parameters into the corresponding mathematical expressions.
• These techniques assume that the characteristics to be estimated can be expressed in terms of some mathematical expression. They can be classified as:
• Single-variable models
• Multivariable models
• The most popular heuristic technique is the Constructive Cost Model (COCOMO).
COCOMO (Constructive Cost Model)
• Was first proposed by Dr. Barry Boehm in 1981.
• It is a heuristic estimation technique: it assumes that the relationships among different parameters can be modeled using mathematical expressions.
• This approach implies that size is the primary factor for cost; other factors have a lesser effect.
• "Constructive" implies that the model is open, so the way the estimate is constructed from the project's characteristics can be understood.
COCOMO prescribes a three-stage process for project estimation.
• An initial estimate is obtained, and over the next two stages the initial estimate is refined to arrive at a more accurate estimate.
• The projects used to calibrate this model had the following attributes:
1. ranging in size from 2,000 to 100,000 lines of code,
2. programming languages ranging from assembly to PL/I,
3. based on the waterfall model of software development.
Boehm stated that any software development project can be classified into
three categories :-
1. Organic:
• The project deals with developing a well-understood application program.
• The size of the development team is reasonably small.
• The team members are experienced in developing similar kinds of projects.
• Project size: 2-50 KLOC
• Development environment: familiar and in-house.
2. Semi-detached:-
• Medium-size project, medium-size team.
• Average previous experience on similar projects (e.g., compiler, database system).
• Project size: 50-300 KLOC
3. Embedded:-
• Large project, real-time systems, complex interfaces.
• Very little previous experience (e.g., ATMs, air traffic control).
• Project size: over 300 KLOC
BASIC COCOMO MODEL
The Basic COCOMO model computes software development effort, time and cost as a function of program size. Program size is expressed in estimated thousands of source lines of code (SLOC, KLOC).
E = a(KLOC)^b
Time = c(Effort)^d
Persons required = Effort / Time
(These formulas are used for the cost estimation of the Basic COCOMO model and are also used in the subsequent models. The standard constant values a, b, c, and d for the Basic Model for the different categories of system are:
Organic: a = 2.4, b = 1.05, c = 2.5, d = 0.38
Semi-detached: a = 3.0, b = 1.12, c = 2.5, d = 0.35
Embedded: a = 3.6, b = 1.20, c = 2.5, d = 0.32)
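A minimal Python sketch of the Basic COCOMO formulas, using the classic constants above; the 40 KLOC project size is an assumed example:

    # Basic COCOMO estimator (constants from Boehm, 1981).
    CONSTANTS = {
        "organic":      (2.4, 1.05, 2.5, 0.38),
        "semidetached": (3.0, 1.12, 2.5, 0.35),
        "embedded":     (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, category):
        a, b, c, d = CONSTANTS[category]
        effort = a * kloc ** b    # effort in person-months
        time = c * effort ** d    # development time in months
        persons = effort / time   # average staff size
        return effort, time, persons

    effort, time, persons = basic_cocomo(40, "organic")  # assumed 40 KLOC
    print(f"Effort={effort:.1f} PM, Time={time:.1f} months, Staff={persons:.1f}")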
Delphi Method
Delphi Method is a structured communication technique, originally developed
as a systematic, interactive forecasting method which relies on a panel of
experts. The experts answer questionnaires in two or more rounds. After each
round, a facilitator provides an anonymous summary of the experts’ forecasts
from the previous round with the reasons for their judgments. Experts are then
encouraged to revise their earlier answers in light of the replies of other
members of the panel.
Project Size Estimation
Estimating the size of the Software
Estimation of the size of the software is an essential part of Software Project
Management. It helps the project manager to further predict the effort and time
that will be needed to build the project. Various measures are used in project
size estimation. Some of these are:
• Lines of Code (LOC)
• Number of entities in the ER diagram
• Total number of processes in the detailed data flow diagram
• Function points (FP)
Lines of Code (LOC):
LOC counts the total number of lines of source code in a project. The units of
LOC are:
• KLOC - thousand lines of code
• NLOC - non-comment lines of code
• KDSI - thousands of delivered source instructions
Features of Lines of Code (LOC):
• Change tracking
• Limited representation of complexity
• Ease of computation
• Easy to understand
Advantages
• Universally accepted and used in many models like COCOMO.
• Estimation is closer to the developer's perspective.
• People throughout the world utilize and accept it.
• At project completion, LOC is easily quantified.
• It has a specific connection to the result.
• Simple to use.
Disadvantages:
• Different programming languages contain a different number of lines.
• No proper industry standard exists for this technique.
• It is difficult to estimate the size using this technique in the early stages of the project.
• When platforms and languages are different, LOC cannot be used to normalize.
Putnam Resource Allocation Model
The Lawrence Putnam model describes the time and effort required to finish a software project of a specified size.
Software Requirements:
• It is the description of features and functionalities of the target system.
• It is the description of what the system should do.
• Requirements engineering (RE) refers to the process of defining, documenting,
and maintaining requirements in the engineering design process.
• It is a four-step process, which includes -
▪ Feasibility Study
▪ Requirement Gathering/Elicitation
▪ Software Requirement Specification
▪ Software Requirement Validation
Tool support for Requirements Engineering
• Observation reports (user observation)
• Questionnaires (interviews, surveys and polls),
• Use cases
• User stories
• Requirement workshops
• Mind mapping, Role-playing, Prototyping
Software Requirements Specification(SRS)
• SRS is a description of a software system to be developed.
• It lays out functional and non-functional requirements of the software to be
developed.
• It may include a set of use cases that describe user interactions that the
software must provide to the user for perfect interaction.
SRS Structure
1. Introduction
Purpose, Intended Audience, Scope, Definitions, References
2. Overall Description
User Interfaces, System Interfaces,
Constraints, assumptions and dependencies
User Characteristics
3. System Features and Requirements
• Functional Requirements,
• Use Cases
• External Interface Requirements
• Logical database requirement
• Nonfunctional Requirements
4. Deliver for Approval
User Requirements
• Easy and simple to operate
• Quick response
• Effective handling of operational errors
• Customer support
User Requirement Specification
• The user requirement(s) document (URD) or user requirement specification
(URS) is a document usually used in software engineering that specifies what
the user expects the software to be able to do.
• It is a contractual agreement.
Why is it important to define the scope of an SRS document?
Defining the scope in an SRS document helps the customer
understand the goals and worth of the software. It also has details
about how much it will cost to create and how long it will take, so that
the project’s limits are clear.
What are functional requirements in an SRS document, and why are they
important?
Functional requirements describe how the software system is
supposed to work, including how it should react to inputs and make
outputs. They help you figure out what the software needs to do and
give you a place to start building and testing it.
SRS document
The main types of software requirements are:
• Functional requirements
• Non-functional requirements
• Domain requirements
Functional vs Non-Functional Requirements
• Requirements that are related to the functional/working aspects of the software fall into this category.
• Non-Functional Requirements are expected characteristics of the target software (security, storage, configuration, performance, cost, interoperability, flexibility, disaster recovery, accessibility).
Software Design:
Software design is a phase in the software development process that involves
conceptualizing and planning the structure and components of a software
system before it is built.
Software design aims to create a roadmap that ensures the final product is
scalable, maintainable, and meets the users' requirements and expectations.
Elements of a System
Architecture Design: This is the conceptual model that defines the
structure, behavior, and views of a system. We can use flowcharts to
represent and illustrate the architecture.
Modules Design: These are components that handle one specific task in a
system. A combination of the modules makes up the system.
Components Design: This provides a particular function or group of
related functions. They are made up of modules.
Interfaces Design: This is the shared boundary across which the
components of a system exchange information and relate.
Data Design: This is the management of the information and data flow.
The software design process can be divided into the following three levels or phases of design:
1. Interface Design
2. Architectural Design
3. Detailed Design
Interface Design
• Interface design is the specification of the interaction between a system and its
environment.
• This phase proceeds at a high level of abstraction with respect to the inner
workings of the system.
• The internals of the system are completely ignored, and the system is treated as a black box. Attention is focused on the dialogue between the target system and the users, devices, and other systems with which it interacts.
• The design problem statement produced during the problem analysis step
should identify the people, other systems, and devices which are collectively
called agents.
Architectural Design
• Architectural design is the specification of the major components of a
system, their responsibilities, properties, interfaces, and the relationships
and interactions between them.
• In architectural design, the overall structure of the system is chosen, but
the internal details of major components are ignored.
• The architectural design adds important details ignored during the
interface design. Design of the internals of the major components is
ignored until the last phase of the design.
Detailed Design
• Detailed design is the specification of the internal elements of all major system components: their properties, relationships, processing, and often their algorithms and data structures.
Software Design Approaches
Software design approaches refer to the methodologies and processes that
guide how software systems are conceived, structured, and planned.
Some of the most prominent software design approaches:
• Structured Design / Function-Oriented Software Design
• Object-Oriented Design (OOD)
• Component-Based Design
• Domain-Driven Design (DDD)
• Model View Controller (MVC)
• Service-Oriented Architecture (SOA)
• Agile Design
Function Oriented Design/Structured Design
Function-oriented design is an approach to software design where the model is decomposed into a set of interacting units or modules, where each unit or module has a clearly defined function. Thus, the system is designed from a functional viewpoint.
Function-oriented software design is also known as procedural or structured design.
Function Oriented Design Strategies
Data Flow Diagram (DFD): A DFD maps out the flow of information for
any process or system. It uses defined symbols like rectangles, circles and
arrows, plus short text labels, to show data inputs, outputs, storage points
and the routes between each destination.
Data Dictionaries: Data dictionaries are simply repositories that store information about all data items defined in DFDs. At the requirements stage, data dictionaries contain data items. A data dictionary entry includes the name of the item, aliases (other names for the item), description/purpose, related data items, range of values, and data structure definition/form.
Structure Charts: Structure Charts is the hierarchical representation of
system which partitions the system into black boxes (functionality is known
to users, but inner details are unknown). Components are read from top to
bottom and left to right. When a module calls another, it views the called
module as a black box, passing required parameters and receiving results.
Pseudo Code: Pseudocode is a system description in short English-like phrases describing the function. It uses keywords and indentation. Pseudocode is used as a replacement for flowcharts. It decreases the amount of documentation required.
Advantages
• Simplicity: For small to moderately complex systems, function-oriented design
can be straightforward and easy to understand, especially for developers
familiar with procedural programming paradigms.
• Modularity: Breaking down a system into functional modules can make it
easier to manage, develop, and test, as each module can be dealt with
independently.
• Efficiency: In some cases, especially where operations are primarily
procedural or linear in nature, function-oriented designs can lead to efficient
implementations.
Disadvantages
• Scalability: As systems grow in complexity, managing the interrelations and
data flow between modules can become challenging, making the system harder
to understand and modify.
• Reusability: Object-oriented design tends to offer higher reusability of
components compared to function-oriented design. Functions are often
designed with specific contexts in mind, making them less flexible and
reusable in different scenarios.
• Maintainability: For large systems, making changes can be difficult as
modifications in one function may have unforeseen effects on others due to the
tight coupling of data flows.
Data flow Diagrams
• A graphical tool, useful for communicating with users, managers, and other
personnel.
• Useful for analyzing existing as well as proposed systems.
• Focus on the movement of data between external entities and processes, and
between processes and data stores.
• A relatively simple technique to learn and use.
Why DFD:-
It provides an overview of:
• What data the system processes.
• What transformations are performed.
• What data are stored.
• What results are produced.
It also serves as a communication tool between user and analyst.
Components of DFD
• Source/Sinks(External entities)
• Data flows
• Processes
• Data store
External entities:-
• A Rectangle represents an external entity.
• They either supply or receive data.
• They do not process data.
Source :- Entity that supplies data to the system.
Sink:- Entity that receives data from the system.
Data-Flow:-
• Marks movement of data through the system - a pipeline to carry data.
• Connects the processes, external entities and data stores.
• Generally unidirectional; if the same data flows in both directions, a double-headed arrow can be used.
Processes:-
• A circle represents a process.
• Straight lines with incoming arrows are input data flows.
• Straight lines with outgoing arrows are output data flows.
• Labels are assigned to data flows.
Data Store
• A Data Store is a repository of data
• Data can be written into the data store. This is depicted by an incoming arrow
• Data can be read from a data store. This is depicted by an outgoing arrow
• External entity cannot read or write to the data store
Rules of Data Flow
Data can flow from:
• External entity to process
• Process to external entity
• Process to store and back
• Process to process
Data cannot flow from:
• External entity to external entity
• External entity to store
• Store to external entity
• Store to store
Levels of DFD
DFD uses hierarchy to maintain transparency thus multilevel DFD’s can be
created. Levels of DFD are as follows:
• 0-level DFD: It represents the entire system as a single bubble and
provides an overall picture of the system.
• 1-level DFD: It represents the main functions of the system and how
they interact with each other.
• 2-level DFD: It represents the processes within each function of the
system and how they interact with each other.
• 3-level DFD: It represents the data flow within each process and how
the data is transformed and stored.
Advantages of DFD
• It helps us to understand the functioning and the limits of a system.
• It is a graphical representation which is very easy to understand, as it helps visualize contents.
• Data flow diagrams represent detailed and well-explained diagrams of system components.
• It is used as part of the system documentation file.
• Data flow diagrams can be understood by both technical and nontechnical people, because they are very easy to understand.
Disadvantages of DFD
• At times a DFD can confuse programmers regarding the system.
• A data flow diagram can take a long time to generate, and for this reason analysts are sometimes denied permission to work on it.
Structured Analysis and Structured Design (SA/SD)
Structured Analysis and Structured Design (SA/SD) is a diagrammatic notation
that is designed to help people understand the system. The basic goal of SA/SD
is to improve quality and reduce the risk of system failure. It establishes
concrete management specifications and documentation. It focuses on the
solidity, pliability, and maintainability of the system.
The approach of SA/SD is based on the Data Flow Diagram. SA/SD is easy to understand, and it focuses on a well-defined system boundary, whereas the JSD approach is too complex and does not have any graphical representation.
SA and SD combined are known as SAD.
It mainly focuses on the following 3 points:
• System
• Process
• Technology
SA/SD involves 2 phases:
• Analysis Phase: It uses Data Flow Diagram, Data Dictionary, State
Transition diagram and ER diagram.
• Design Phase: It uses Structure Chart and Pseudo Code.
Object-Oriented Design
Object-oriented design (OOD) is a methodology in software engineering that
revolves around the concept of designing software by modeling it around
objects rather than functions or logic. It is a part of the object-oriented
programming (OOP) paradigm that includes languages such as Java, C++,
Python, and others. The core idea behind OOD is to use objects as the
fundamental building blocks for creating software systems. Here are the key
concepts and components involved in object-oriented design:
Object:- An object is an instance of a class and represents a specific entity in
the software system. Objects have attributes (data fields or properties) and
behaviors (methods or functions) that define their state and what they can do.
Class:- A class is a blueprint or template for creating objects. It defines the
attributes and behaviors that its objects will have. Every object is an instance of
some class.
Encapsulation:- This principle involves bundling the data (attributes) and
methods that operate on the data into a single unit or class and restricting
access to some of the object's components. This concept is also known as
information hiding.
Abstraction:- Abstraction means representing essential features without
including the background details or explanations. It allows designers to focus
on interactions at a higher level, making the design more manageable and
reducing complexity.
Inheritance:- Inheritance is a mechanism whereby a new class (called a
subclass or derived class) is created from an existing class (called a superclass
or base class). The subclass inherits attributes and behaviors from the
superclass, allowing for code reuse and the creation of a hierarchy of classes.
Polymorphism:- It is the ability for different classes to respond to the same
message (or method call) in different ways. This can be achieved through
method overriding (where a subclass provides a specific implementation of a
method that is already provided by its superclass) or method overloading
(where two or more methods in the same class have the same name but
different parameters).
Modularity:- The software is divided into separate modules, where each module focuses on a single aspect of the software, making it easier to manage and understand.
Design patterns:- These are proven solutions to common design problems.
Design patterns provide a template for how to solve a problem in a way that
has been proven to work, thus improving the design quality and facilitating the
design process.
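The short Python sketch below ties several of these concepts together; the class names are purely illustrative:

    # Class, object, encapsulation, inheritance and polymorphism in one sketch.
    class Account:                      # class: a blueprint for objects
        def __init__(self, balance):
            self._balance = balance     # encapsulated internal state

        def describe(self):             # behavior shared by all accounts
            return f"Account with balance {self._balance}"

    class SavingsAccount(Account):      # inheritance: subclass of Account
        def describe(self):             # polymorphism via method overriding
            return "Savings " + super().describe().lower()

    # Objects are instances of classes; the same call behaves per class:
    for acct in (Account(100), SavingsAccount(250)):
        print(acct.describe())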
Software testing
Software testing is the process of evaluating and verifying that a software
application or system meets specified requirements and works as intended. It
involves executing the software to identify any bugs or errors in the code,
ensuring the product's quality before it is released to the end-users.
The main goal of software testing is to maintain a high standard of quality,
enhance performance, and ensure the reliability and security of the software
product.
Unit Testing:-
• Unit testing is the process of checking small pieces of code to ensure that the individual parts of a program work properly on their own.
• Unit tests are used to test individual blocks (units) of functionality.
• Unit testing is done by developers.
Note: Some popular frameworks and tools that are used for unit testing include JUnit, NUnit, and xUnit.
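Those frameworks target Java and .NET; as a minimal sketch of the same idea, Python's built-in unittest can exercise one unit in isolation (the add function is a stand-in for a real unit):

    import unittest

    def add(a, b):          # the unit under test (illustrative)
        return a + b

    class TestAdd(unittest.TestCase):
        def test_adds_two_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_handles_negatives(self):
            self.assertEqual(add(-1, 1), 0)

    if __name__ == "__main__":
        unittest.main()     # runs both tests and reports failures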
White-box testing
• White-box testing (also known as clear box testing, glass box testing,
transparent box testing, and structural testing) is a method of software testing
that tests internal structures or workings of an application.
• White-box testing can be applied at the unit, integration and system levels of
the software testing process.
• Here are some of the top white-box testing tools: Veracode, CppUnit, NUnit, RCUNIT, etc.
White-box test design techniques:-
• Control flow testing
• Data flow testing
• Branch testing
• Statement coverage
• Decision coverage
• Path testing
Black Box Testing:-
Black-box testing is a technique in which the tester does not have access to the source code of the software; testing is conducted at the software interface, without any concern for the internal logical structure of the software.
Difference between white-box and black-box testing:-
• Performer: White-box testing can be performed by the developers; black-box testing is performed by the test engineers.
• Knowledge: The white-box tester is aware of what the software is supposed to do and also of how it does it; the black-box tester is aware of what the software is supposed to do but not of how it does it.
• Skills: To perform WBT, we should have an understanding of the programming languages; to perform BBT, there is no need to have such an understanding.
• Focus: In WBT, we look into the source code and test the logic of the code; in BBT, we verify the functionality of the application based on the requirement specification.
• Internal design: In WBT, the tester should know about the internal design of the code; in BBT, there is no need to know about it.
• Test design techniques: WBT uses control flow testing, data flow testing, branch testing, statement coverage, decision coverage, and path testing; BBT uses decision table testing, all-pairs testing, equivalence partitioning, boundary value analysis, and cause-effect graphs.
• Levels: WBT is applied mainly at the unit testing level, but now also at the integration and system levels; BBT can be applied at virtually every level of software testing: unit, integration, system, and acceptance.
Grey Box Testing:-
Grey box technique is testing in which the testers should have knowledge of
implementation, however, they need not be experts.
Integration Testing:-
• Integration testing is conducted to evaluate the compliance of a system or
component with specified functional requirements.
• It occurs after unit testing and before system testing.
Type of Integration Testing:
• Big-bang
• Mixed(Sandwich)
• Top-down
• Bottom-up
Different Ways of Performing Integration Testing:
• Top-down integration testing: It starts with the highest-level modules and progressively integrates them with lower-level modules.
• Bottom-up integration testing: It starts with the lowest-level modules
and integrates them with higher-level modules.
• Big-Bang integration testing: It combines all the modules and
integrates them all at once.
• Incremental integration testing: It integrates the modules in small
groups, testing each group as it is added.
System Testing:-
• System Testing is a level of testing that validates the complete and fully
integrated software product.
• The purpose of a system test is to evaluate the end-to-end system
specifications.
• System Testing is a form of black-box testing.
• System testing categories based on: Who is Doing the Testing?
• System testing categories based on: Functional/Non- Functional
Requirements?
System testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.
System testing is basically performed by a testing team that is independent of the development team, which helps to test the quality of the system impartially. It includes both functional and non-functional testing.
Types of System Testing
• Performance Testing: Performance Testing is a type of software testing that is
carried out to test the speed, scalability, stability, and reliability of the software
product or application.
• Load Testing: Load Testing is a type of software testing which is carried out to
determine the behavior of a system or software product under extreme load. It
evaluates the product's ability to perform under increased load. For example,
Maximum capacity of a web server.
• Stress Testing: Stress Testing is a type of software testing performed to check the robustness of the system under varying loads. It is normally used to understand the upper limits of capacity within the system. For example, the capacity of a website or an e-commerce platform during peak traffic hours, such as during a major sale event.
• Scalability Testing: Scalability Testing is a type of software testing which is
carried out to check the performance of a software application or system in
terms of its capability to scale up or scale down the number of user request
loads.
• Volume testing: It is basically testing the ability of a database management
system to handle a large amount of data.
• Endurance testing: This type of testing evaluates the system's performance
over a long period of time, usually under normal or expected load conditions,
to identify any memory leaks or performance degradation over time.
• Response time testing: This type of testing evaluates the time it takes for the
system to respond to a request, with the goal of determining the acceptable
response time for end-users.
• Spike testing: This tests the product's reaction to sudden increases and decreases in load.
Alpha Testing
Alpha testing is a type of validation testing. It is a type of acceptance testing
that is done before the product is released to customers. It is typically done by
QA people. Ex:- When software testing is performed internally within
the organisation.
Beta Testing
The beta test is conducted at one or more customer sites by the end-user of
the software. This version is released for a limited number of users for testing
in a real-time environment. Ex:- When software testing is performed for
the limited number of people.
Acceptance Testing
Acceptance Testing is done by the customers to check whether the delivered product performs the desired tasks or not, as stated in the requirements. Object-oriented testing (below) can be used for discussing test plans and for executing the projects.
Object-Oriented Testing
Object-Oriented Testing is a combination of various testing techniques that help to verify and validate object-oriented software. This testing is done in the following manner:
• Testing of Requirements,
• Design and Analysis of Testing,
• Testing of Code,
• Integration testing,
• System testing,
• User Testing.
Regression Testing:-
Regression testing is a type of software testing that verifies that changes made to the system, such as bug fixes or new features, do not impact previously working functionality.
There are several types of regression testing, including:
Full Regression Testing: Testing the entire application from start to finish after
changes have been made.
Partial Regression Testing: Testing only those parts of the application that were
affected by the changes.
Regression testing can be performed in different ways:-
• Retesting: This involves testing the entire application or specific
functionality that was affected by the changes.
• Re-execution: This involves running a previously executed test suite to ensure that the changes did not break any existing functionality.
• Comparison: This involves comparing the current version of the
software with a previous version to ensure that the changes did not
break any existing functionality.
Advantages of Software Testing
• Improved software quality and reliability.
• Early identification and fixing of defects.
• Improved customer satisfaction.
• Increased stakeholder confidence.
• Reduced maintenance costs.
Disadvantages of Software Testing
• Time-Consuming and adds to the project cost.
• This can slow down the development process.
• Not all defects can be found.
• Can be difficult to fully test complex systems.
• Potential for human error during the testing process.
Condition Coverage technique
• It is used to cover all conditions.
• Condition coverage is also known as predicate coverage, in which each Boolean sub-expression is evaluated to both TRUE and FALSE.
Boundary Value Testing
• Boundary value testing is a black-box technique that detects errors at the boundary values of valid and invalid partitions, rather than focusing on the center of the input data.
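A tiny sketch of boundary value selection, assuming a validator whose valid partition is 1..100 (the range and function are illustrative):

    # Boundary value analysis for an assumed valid range of 1..100.
    def accepts(age):
        return 1 <= age <= 100   # illustrative validator under test

    # Test values cluster at the partition boundaries, not the center:
    for value in (0, 1, 2, 99, 100, 101):
        print(value, "->", "valid" if accepts(value) else "invalid")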
Statement coverage technique
• Statement coverage technique is used to design white box test cases.
• This technique involves execution of all statements of the source code at least
once.
• It is used to calculate the total number of executed statements in the source
code out of total statements present in the source code.
• It helps reveal dead code, unused code and unexercised branches.
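The sketch below contrasts statement coverage with the condition coverage technique described earlier, on a small illustrative function with one compound Boolean expression:

    def grant_discount(is_member, total):   # illustrative unit under test
        if is_member and total > 100:       # compound Boolean expression
            return True
        return False

    # Statement coverage: every statement executes at least once.
    grant_discount(True, 150)    # reaches 'return True'
    grant_discount(False, 50)    # reaches 'return False'

    # Condition coverage: each sub-condition takes both TRUE and FALSE.
    grant_discount(True, 150)    # is_member True,  total > 100 True
    grant_discount(True, 50)     # is_member True,  total > 100 False
    grant_discount(False, 150)   # is_member False (second condition short-circuits)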
Data Flow Testing
• It is one of the white-box testing techniques.
• Dataflow Testing focuses on two points:
▪ In which statement the variables are defined.
▪ In which statement the variables are used.
• It designs the test cases that cover control flow paths around variable
definitions and their uses in the modules.
Advantages of Data Flow Testing
Data flow testing can reveal anomalies such as (see the sketch below):
• A variable that is declared but never used within the program.
• A variable that is used but never declared.
• A variable that is defined multiple times before it is used.
• A variable that is deallocated before it is used.
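A short Python sketch that deliberately contains the first and third
anomalies; a data-flow tool tracks each variable's definition/use pairs and
would flag both.

```python
def report(items):
    header = "Report"     # defined but never used (define with no use)
    total = 0             # defined ...
    total = len(items)    # ... and redefined before any use (double define)
    return total          # the only use of 'total'
```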
Debugging
Debugging is the process of identifying and resolving errors, or bugs, in a
software system. It is an important aspect of software engineering because
bugs can cause a software system to malfunction, and can lead to poor
performance or incorrect results. Debugging can be a time-consuming and
complex task, but it is essential for ensuring that a software system is
functioning correctly.
There are several common methods and techniques used in debugging,
including:
Code Inspection: This involves manually reviewing the source code of a
software system to identify potential bugs or errors.
Debugging Tools: There are various tools available for debugging such as
debuggers, trace tools, and profilers that can be used to identify and resolve
bugs.
Unit Testing: This involves testing individual units or components of a
software system to identify bugs or errors.
Integration Testing: This involves testing the interactions between
different components of a software system to identify bugs or errors.
System Testing: This involves testing the entire software system to
identify bugs or errors.
Monitoring: This involves monitoring a software system for unusual
behavior or performance issues that can indicate the presence of bugs or
errors.
Logging: This involves recording events and messages related to the
software system, which can be used to identify bugs or errors.
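As a concrete example of the logging method, here is a minimal Python sketch
using the standard logging module; the divide function is illustrative.

```python
import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")

def divide(a, b):
    logging.debug("divide called with a=%s b=%s", a, b)  # record the inputs
    if b == 0:
        logging.error("division by zero attempted")      # record the event
        raise ZeroDivisionError("b must be non-zero")
    return a / b

divide(10, 2)   # the recorded messages later help localize a bug
```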
Debugging Approaches/Strategies:
Brute Force: Study the system for a longer duration to understand the
system. It helps the debugger to construct different representations of
systems to be debugged depending on the need. A study of the system is
also done actively to find recent changes made to the software.
Backtracking: Backward analysis of the problem which involves tracing the
program backward from the location of the failure message to identify the
region of faulty code. A detailed study of the region is conducted to find the
cause of defects.
Forward analysis: Tracing the program forward using breakpoints or print
statements at different points in the program and studying the results. The
region where the wrong outputs are obtained is the region that needs to be
focused on to find the defect.
Using past experience: Debug the software using experience gained from
similar problems in the past. The success of this approach depends on the
expertise of the debugger.
Cause elimination: It introduces the concept of binary partitioning. Data
related to the error occurrence are organized to isolate potential causes
(see the bisection sketch after this list).
Static analysis: Analyzing the code without executing it to identify
potential bugs or errors. This approach involves analyzing code syntax, data
flow, and control flow.
Dynamic analysis: Executing the code and analyzing its behavior at
runtime to identify errors or bugs. This approach involves techniques like
runtime debugging and profiling.
Collaborative debugging: Involves multiple developers working together
to debug a system. This approach is helpful in situations where multiple
modules or components are involved, and the root cause of the error is not
clear.
Logging and Tracing: Using logging and tracing tools to identify the
sequence of events leading up to the error. This approach involves collecting
and analyzing logs and traces generated by the system during its execution.
Automated Debugging: The use of automated tools and techniques to
assist in the debugging process. These tools can include static and dynamic
analysis tools, as well as tools that use machine learning and artificial
intelligence to identify errors and suggest fixes.
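A minimal Python sketch of cause elimination by binary partitioning; the
list of changes and the is_bad predicate are made-up stand-ins for, e.g., a
sequence of commits in which some change first introduced the failure.

```python
def first_failure(candidates, is_bad):
    """Binary-partition a suspect range to isolate the first failing element."""
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(candidates[mid]):
            hi = mid           # cause lies at mid or earlier
        else:
            lo = mid + 1       # cause lies after mid
    return candidates[lo]

changes = list(range(1, 101))                      # e.g. 100 recent changes
print(first_failure(changes, lambda c: c >= 73))   # -> 73
```

Each probe halves the suspect range, so about log2(N) checks isolate the
cause among N candidates.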
Debugging Tools:
A debugging tool is a computer program that is used to test and debug other
programs. A lot of freely available software like gdb and dbx can be used for
debugging. They offer console-based command-line interfaces. Examples of
automated debugging tools include code-based tracers, profilers, interpreters,
etc. Some of the widely used debuggers are:
•Radare2
•WinDbg
•Valgrind
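For illustration, the same console-based style of debugging is available in
Python through the built-in pdb debugger; the function below is a made-up
example.

```python
def buggy_average(values):
    total = sum(values)
    breakpoint()                  # drops into the pdb command-line debugger
    return total / len(values)    # inspect 'total' and 'values' at the prompt

# buggy_average([2, 4, 6])   # uncomment to step through interactively;
# at the (Pdb) prompt, commands such as p total, next and continue apply.
```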
Advantages of debugging in software engineering:
• Improved system quality
• Reduced system downtime
• Increased user satisfaction
• Reduced development costs
• Increased security
• Facilitates change
• Better understanding of the system
• Facilitates testing
Disadvantages of Debugging:
• Time-consuming
• Requires specialized skills
• Bugs can be difficult to reproduce
• Bugs can be difficult to diagnose
• Bugs can be difficult to fix
• Can be expensive
• Limited insight
Difference Between Debugging and Testing:
Debugging is different from testing. Testing focuses on finding bugs, errors, etc
whereas debugging starts after a bug has been identified in the software.
Testing is used to ensure that the program does what it is supposed to do,
with a certain minimum success rate. Testing can be manual or automated.
There are several different types of testing: unit testing, integration
testing, alpha and beta testing, etc.
Debugging requires a lot of knowledge, skills, and expertise. It can be
supported by some automated tools available but is more of a manual process
as every bug is different and requires a different technique, unlike a pre-
defined testing mechanism.
Program Analysis Tool?
Program Analysis Tool is an automated tool whose input is the source code or
the executable code of a program and the output is the observation of
characteristics of the program. It gives various characteristics of the program
such as its size, complexity, adequacy of commenting, adherence to
programming standards and many other characteristics. These tools are
essential to software engineering because they help programmers
comprehend, improve and maintain software systems over the course of the
whole development life cycle.
Importance of Program Analysis Tools
Finding Faults and Security Vulnerabilities in the Code: Automatic
program analysis tools can find and highlight possible faults, security
flaws and bugs in the code. This lowers the possibility that bugs will get
into production by assisting developers in identifying problems early in the
process.
Memory Leak Detection: Certain tools are designed specifically to find
memory leaks and inefficiencies. By doing so, developers may make sure
that their software doesn’t gradually use up too much memory.
Vulnerability Detection: Potential vulnerabilities like buffer overflows,
injection attacks or other security flaws can be found using program
analysis tools, particularly those that are security-focused. For the
development of reliable and secure software, this is essential.
Dependency analysis: By examining the dependencies among various
system components, tools can assist developers in comprehending and
controlling the connections between modules. This is necessary in order to
make well-informed decisions during refactoring.
Automated Testing Support: Program analysis tools are frequently
integrated into CI/CD pipelines to automate testing procedures. This
integration helps identify problems early in the development cycle and
ensures that only well-tested, high-quality code is released into production.
Program Analysis Tools are classified into two categories:
1. Static Program Analysis Tools
A Static Program Analysis Tool evaluates and computes various characteristics
of a software product without executing it. Normally, static program analysis
tools analyze some structural representation of a program to reach an
analytical conclusion. The structural properties that are usually analyzed
are:
1.Whether the coding standards have been fulfilled or not.
2.Some programming errors such as uninitialized variables.
3.Mismatch between actual and formal parameters.
4.Variables that are declared but never used.
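A minimal sketch in Python of how a static tool can detect property 4 without
running the program, using the standard ast module; the sample source code is
illustrative.

```python
import ast

source = """
def compute(price, qty):
    total = price * qty
    discount = 0.1        # assigned but never used
    return total
"""

tree = ast.parse(source)            # analyze structure; nothing is executed
assigned, used = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)   # variable is defined here
        elif isinstance(node.ctx, ast.Load):
            used.add(node.id)       # variable is read here

for name in sorted(assigned - used):
    print(f"warning: variable '{name}' is declared but never used")
# -> warning: variable 'discount' is declared but never used
```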
2. Dynamic Program Analysis Tools
A Dynamic Program Analysis Tool is a type of program analysis tool that
requires the program to be executed and its actual behavior to be observed. A
dynamic program analyzer basically instruments the code: it adds additional
statements in the source code to collect the traces of program execution. When
the code is executed, it allows us to observe the behavior of the software for
different test cases. Once the software is tested and its behavior is observed,
the dynamic program analysis tool performs a post execution analysis and
produces reports which describe the structural coverage that has been
achieved by the complete testing process for the program.
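A minimal sketch in Python of the instrumentation idea: a trace hook records
which lines execute while the tests run, giving a crude structural-coverage
report (the function under test is illustrative).

```python
import sys

executed = set()

def tracer(frame, event, arg):
    if event == "line":                 # record every executed source line
        executed.add(frame.f_lineno)
    return tracer

def absolute(x):
    if x < 0:
        return -x
    return x

sys.settrace(tracer)                    # instrument execution
absolute(5)                             # only one branch is exercised
sys.settrace(None)
print("lines executed:", sorted(executed))  # post-execution coverage report
```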
Software Maintenance
• Its primary goal is to modify and update software applications after delivery to correct errors and
improve performance.
• Software Maintenance is an inclusive activity that includes:
• Error corrections
• Deletion of obsolete capabilities
• Enhancement of capabilities
• Optimization
A typical distribution of effort across the phases of a project:
Requirements = 3%
Design = 8%
Implementation = 7%
Testing = 15%
Maintenance = 67%
Types of software maintenance:-
1. Corrective Maintenance(20%)
2. Adaptive Maintenance(25%)
3. Preventive Maintenance(5%)
4. Perfective Maintenance(50%)
Several Key Aspects of Software Maintenance
1.Bug Fixing: The process of finding and fixing errors and problems in the
software.
2.Enhancements: The process of adding new features or improving existing
features to meet the evolving needs of the users.
3.Performance Optimization: The process of improving the speed,
efficiency, and reliability of the software.
4.Porting and Migration: The process of adapting the software to run on
new hardware or software platforms.
5.Re-Engineering: The process of improving the design and architecture of
the software to make it more maintainable and scalable.
6.Documentation: The process of creating, updating, and maintaining the
documentation for the software, including user manuals, technical
specifications, and design documents.
Several Types of Software Maintenance
Corrective Maintenance: This involves fixing errors and bugs in the
software system.
Adaptive Maintenance: This involves modifying the software system to
adapt it to changes in the environment, such as changes in hardware or
software, government policies, and business rules.
Perfective Maintenance: This involves improving functionality,
performance, and reliability, and restructuring the software system to
improve changeability.
Preventive Maintenance: This involves taking measures to prevent future
problems, such as optimization, updating documentation, reviewing and
testing the system, and implementing preventive measures such as
backups.
Maintenance Metrics
1. Mean Time Between Failure (MTBF):- It is the average time available for a system or
component to perform its normal operations between failures.
• MTBF = (sum of operational time) / (total number of failures)
2. Mean Time To Repair (MTTR):- It is the average time required to repair a failed component.
• MTTR = (sum of downtime periods) / (total number of failures)
• Availability = MTBF / (MTBF + MTTR)
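A small worked example in Python of the three formulas above; the uptime and
downtime figures are made-up sample data.

```python
operational_hours = [120.0, 95.0, 200.0]   # uptime between successive failures
downtime_hours = [2.0, 1.5, 2.5]           # repair time after each failure
failures = len(operational_hours)

mtbf = sum(operational_hours) / failures   # Mean Time Between Failures
mttr = sum(downtime_hours) / failures      # Mean Time To Repair
availability = mtbf / (mtbf + mttr)

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h, "
      f"availability = {availability:.3f}")   # ~138.3 h, 2.0 h, 0.986
```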
Reverse Engineering/ Backward Engineering:-
• Software reverse engineering can help to improve the understanding of the underlying source code
for the maintenance and improvement of the software.
• In some cases the goal of the reverse engineering process can simply be a redocumentation of
legacy systems.
Code → Module Specification → Design → Requirement Specifications
Why Reverse Engineering?
•Providing proper system documentation.
•Recovery of lost information.
•Assisting with maintenance.
•The facility of software reuse.
•Discovering unexpected flaws or faults.
•Implementing innovative processes for specific uses.
•Making it easier to document how efficiency and performance can be improved.
Uses of Software Reverse Engineering
•Software reverse engineering is used in software design; it enables the
developer or programmer to add new features to existing software, with or
without knowing the source code.
•Reverse engineering is also useful in software testing; it helps testers
study and detect viruses and other malware code.
•Software reverse engineering is the process of analyzing and understanding
the internal structure and design of a software system. It is often used to
improve the understanding of a software system, to recover lost or
inaccessible source code, and to analyze the behavior of a system for
security or compliance purposes.
•Malware analysis: Reverse engineering is used to understand how
malware works and to identify the vulnerabilities it exploits, in order to
develop countermeasures.
•Legacy systems: Reverse engineering can be used to understand and
maintain legacy systems that are no longer supported by the original
developer.
•Intellectual property protection: Reverse engineering can be used to
detect and prevent intellectual property theft by identifying and preventing
the unauthorized use of code or other assets.
•Security: Reverse engineering is used to identify security vulnerabilities in
a system, such as backdoors, weak encryption, and other weaknesses.
•Compliance: Reverse engineering is used to ensure that a system meets
compliance standards, such as those for accessibility, security, and privacy.
•Reverse-engineering of proprietary software: To understand how a
software works, to improve the software, or to create new software with
similar features.
•Reverse-engineering of software to create a competing product: To
create a product that functions similarly or to identify the features that are
missing in a product and create a new product that incorporates those
features.
•It’s important to note that reverse engineering can be a complex and time-
consuming process, and it is important to have the necessary skills, tools,
and knowledge to perform it effectively. Additionally, it is important to
consider the legal and ethical implications of reverse engineering, as it may
be illegal or restricted in some jurisdictions.
Advantages of Software Maintenance
•Improved Software Quality: Regular software maintenance helps to
ensure that the software is functioning correctly and efficiently and that it
continues to meet the needs of the users.
•Enhanced Security: Maintenance can include security updates and
patches, helping to ensure that the software is protected against potential
threats and attacks.
•Increased User Satisfaction: Regular software maintenance helps to
keep the software up-to-date and relevant, leading to increased user
satisfaction and adoption.
•Extended Software Life: Proper software maintenance can extend the
life of the software, allowing it to be used for longer periods of time and
reducing the need for costly replacements.
•Cost Savings: Regular software maintenance can help to prevent larger,
more expensive problems from occurring, reducing the overall cost of
software ownership.
•Better Alignment with business goals: Regular software maintenance
can help to ensure that the software remains aligned with the changing
needs of the business. This can help to improve overall business efficiency
and productivity.
•Competitive Advantage: Regular software maintenance can help to keep
the software ahead of the competition by improving functionality,
performance, and user experience.
•Compliance with Regulations: Software maintenance can help to ensure
that the software complies with relevant regulations and standards. This is
particularly important in industries such as healthcare, finance, and
government, where compliance is critical.
•Improved Collaboration: Regular software maintenance can help to
improve collaboration between different teams, such as developers, testers,
and users. This can lead to better communication and more effective
problem-solving.
•Reduced Downtime: Software maintenance can help to reduce downtime
caused by system failures or errors. This can have a positive impact on
business operations and reduce the risk of lost revenue or customers.
•Improved Scalability: Regular software maintenance can help to ensure
that the software is scalable and can handle increased user demand. This
can be particularly important for growing businesses or for software that is
used by a large number of users.
Disadvantages of Software Maintenance
•Cost: Software maintenance can be time-consuming and expensive, and
may require significant resources and expertise.
•Schedule disruptions: Maintenance can cause disruptions to the normal
schedule and operations of the software, leading to potential downtime and
inconvenience.
•Complexity: Maintaining and updating complex software systems can be
challenging, requiring specialized knowledge and expertise.
•Risk of introducing new bugs: The process of fixing bugs or adding new
features can introduce new bugs or problems, making it important to
thoroughly test the software after maintenance.
•User resistance: Users may resist changes or updates to the software,
leading to decreased satisfaction and adoption.
•Compatibility issues: Maintenance can sometimes cause compatibility
issues with other software or hardware, leading to potential integration
problems.
•Lack of documentation: Poor documentation or lack of documentation
can make software maintenance more difficult and time-consuming, leading
to potential errors or delays.
•Technical debt: Over time, software maintenance can lead to technical
debt, where the cost of maintaining and updating the software becomes
increasingly higher than the cost of developing a new system.
•Skill gaps: Maintaining software systems may require specialized skills or
expertise that may not be available within the organization, leading to
potential outsourcing or increased costs.
•Inadequate testing: Inadequate testing or incomplete testing after
maintenance can lead to errors, bugs, and potential security vulnerabilities.
•End-of-life: Eventually, software systems may reach their end-of-life,
making maintenance and updates no longer feasible or cost-effective. This
can lead to the need for a complete system replacement, which can be
costly and time-consuming.
Software Reliability
Software reliability is the probability that the software will operate failure-free
for a specific period of time in a specific environment. It is typically
expressed with respect to a unit of time (for example, failures per hour of
operation).
•A software system starts with many faults when first created.
•After testing and debugging, it enters its useful life.
•The useful life includes upgrades made to the system, which introduce new
faults.
•The system then needs to be retested to reduce those faults.
•Software reliability cannot be predicted from any physical basis, since it
depends completely on the human factors in design.
Musa’s Basic Model
Musa's basic execution-time model is a software reliability growth model: it
predicts how the failure intensity of a program decreases as faults are found
and removed during testing. The basic model assumes that the failure intensity
decreases linearly with the expected number of failures experienced,
λ(μ) = λ0(1 − μ/ν0), where λ0 is the initial failure intensity and ν0 is the
total number of failures expected. The related Musa-Okumoto logarithmic model
assumes the intensity decays exponentially with failures experienced, so the
expected number of failures after execution time τ is
μ(τ) = (1/θ)·ln(λ0θτ + 1). These models are used to plan testing effort and to
estimate when a target reliability level will be reached.
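A minimal sketch in Python of the Musa-Okumoto formulas above; the parameter
values are illustrative, not drawn from any real project.

```python
import math

lambda0 = 20.0   # initial failure intensity (failures per CPU-hour), assumed
theta = 0.05     # failure-intensity decay parameter, assumed

def mean_failures(tau):
    """Expected cumulative failures after tau units of execution time."""
    return (1.0 / theta) * math.log(lambda0 * theta * tau + 1.0)

def failure_intensity(tau):
    """Failure intensity after tau units of execution time (d mu / d tau)."""
    return lambda0 / (lambda0 * theta * tau + 1.0)

for tau in (1, 10, 100):
    print(f"tau={tau:>3}: mu={mean_failures(tau):6.2f}, "
          f"lambda={failure_intensity(tau):5.2f}")
```

As testing time grows, the printed failure intensity falls while cumulative
failures level off, which is the reliability-growth behavior the model
captures.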
Software Quality Assurance (SQA)
Software Quality Assurance (SQA) is simply a way to assure quality in the
software. It is the set of activities which ensure processes, procedures as well
as standards are suitable for the project and implemented correctly.
Software Quality Assurance is a process which works parallel to development of
software. It focuses on improving the process of development of software so
that problems can be prevented before they become a major issue. Software
Quality Assurance is a kind of Umbrella activity that is applied throughout the
software process.
Software quality assurance focuses on:
• software’s portability
• software’s usability
• software’s reusability
• software’s correctness
• software’s maintainability
• software’s error control
Portability: A software device is said to be portable, if it can be freely made to
work in various operating system environments, in multiple machines, with
other software products, etc.
Usability: A software product has better usability if various categories of users
can easily invoke the functions of the product.
Reusability: A software product has excellent reusability if different modules
of the product can quickly be reused to develop new products.
Correctness: A software product is correct if various requirements as specified
in the SRS document have been correctly implemented.
Maintainability: A software product is maintainable if bugs can be easily
corrected as and when they show up, new tasks can be easily added to the
product, and the functionalities of the product can be easily modified, etc.
Software Quality Assurance has:-
1.A quality management approach
2.Formal technical reviews
3.Multi testing strategy
4.Effective software engineering technology
5.Measurement and reporting mechanism
Major Software Quality Assurance Activities:
SQA Management Plan: Make a plan for how you will carry out SQA
throughout the project. Decide which set of software engineering activities
best fits the project, and check the skill level of the SQA team.
Set The Check Points: The SQA team should set checkpoints and evaluate
the performance of the project on the basis of data collected at different
checkpoints.
Measure Change Impact: A change made to correct an error can sometimes
introduce new errors, so keep measuring the impact of each change on the
project. Re-test every new change to check that the fix is compatible with
the whole project.
Multi testing Strategy: Do not depend on a single testing approach; when
several testing approaches are available, use them.
Manage Good Relations: In the working environment, managing good
relations with the other teams involved in project development is mandatory.
A bad relationship between the SQA team and the programming team will
directly harm the project. Don't play politics.
Managing Reports and Records: Document and share QA activities (test
cases, defects, client changes) for future reference and stakeholder
alignment.
Benefits of Software Quality Assurance (SQA):
• SQA produces high quality software.
• High quality application saves time and cost.
• SQA is beneficial for better reliability.
• SQA reduces the need for maintenance over a long period.
• High quality commercial software increases the market share of the company.
• Improving the process of creating software.
• Improves the quality of the software.
• It cuts maintenance costs. Get the release right the first time, and your
company can forget about it and move on to the next big thing. Release a
product with chronic issues, and your business bogs down in a costly,
time-consuming, never-ending cycle of repairs.
Disadvantage of SQA:
Quality assurance has a number of disadvantages: it requires additional
resources, such as employing more workers to help maintain quality, and it
adds cost and effort to the project.
ISO 9000 Certification
ISO (International Standards Organization) is a group or consortium of 63
countries established to plan and foster standardization. ISO declared its 9000
series of standards in 1987. It serves as a reference for the contract between
independent parties. The ISO 9000 standard determines the guidelines for
maintaining a quality system. The ISO standard mainly addresses operational
methods and organizational methods such as responsibilities, reporting, etc.
ISO 9000 defines a set of guidelines for the production process and is not
directly concerned about the product itself.
Types of ISO 9000 Quality Standards
ISO 9001: This standard applies to the organizations engaged in design,
development, production, and servicing of goods. This is the standard
that applies to most software development organizations.
ISO 9002: This standard applies to those organizations which do not
design products but are only involved in the production. Examples of
these categories include steel and car manufacturing industries
that buy the product and plant designs from external sources and are
engaged only in manufacturing those products. Therefore, ISO 9002 does
not apply to software development organizations.
ISO 9003: This standard applies to organizations that are involved only
in the installation and testing of the products. For example, Gas
companies.
Software Engineering Institute Capability Maturity Model (SEICMM)
The Capability Maturity Model (CMM) is a procedure used to develop and refine
an organization's software development process.
The model defines a five-level evolutionary stage of increasingly organized and
consistently more mature processes.
CMM was developed and is promoted by the Software Engineering Institute
(SEI), a research and development center sponsored by the U.S. Department of
Defense (DoD).
Capability Maturity Model is used as a benchmark to measure the maturity of
an organization's software process.
Methods of SEICMM
There are two methods of SEICMM:
Capability Evaluation: Capability evaluation provides a way to assess the
software process capability of an organization. The results of capability
evaluation indicate the likely contractor performance if the contractor is
awarded the work. Therefore, the results of the software process capability
assessment can be used to select a contractor.
Software Process Assessment: Software process assessment is used by an
organization to improve its process capability. Thus, this type of evaluation is
for purely internal use.
SEI CMM categorized software development industries into the following five
maturity levels.
Level 1: Initial
Ad hoc activities characterize a software development organization at this
level. Very few or no processes are described and followed. Since software
production processes are not defined, different engineers follow their own
process and, as a result, development efforts become chaotic. Therefore, it is
also called a chaotic level.
Level 2: Repeatable
At this level, the fundamental project management practices like tracking cost
and schedule are established. Size and cost estimation methods, like function
point analysis, COCOMO, etc. are used.
Level 3: Defined
At this level, the methods for both management and development activities are
defined and documented. There is a common organization-wide understanding
of operations, roles, and responsibilities. Although the processes are
defined, the process and product qualities are not measured. ISO 9000 aims at
achieving this level.
Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are
collected.
Product metrics measure the features of the product being developed, such
as its size, reliability, time complexity, understandability, etc.
Process metrics reflect the effectiveness of the process being used, such as
the average defect correction time, productivity, the average number of
defects found per hour of inspection, the average number of failures detected
during testing per LOC, etc.
Level 5: Optimizing
At this phase, process and product metrics are collected. Process and product
measurement data are evaluated for continuous process improvement.