04 Software Engineering

SE

Software Engineering
 Software engineering is the process of designing, developing, testing, and maintaining
software.
 It is a systematic and disciplined approach to software development that aims to create high-quality,
reliable, and maintainable software.
 Software engineering includes a variety of techniques, tools, and methodologies, including
requirements analysis, design, testing, and maintenance.
 Software engineering is a fully layered technology; to develop software, we need to go from one layer to another.

 Tools: software engineering tools provide automated or semi-automated support for the process and the methods.
 Methods: during software development, the answers to all “how-to-do” questions are given by the methods.
 Process: the software process covers all the activities, actions, and tasks required to be carried out for software development.
 Quality focus: it defines the continuous process improvement principles of software.
Waterfall Model
Advantages of Waterfall Model

 This model is very simple and is easy to understand.
 Phases in this model are processed one at a time.
 Each stage in the model is clearly defined.
 Process, actions and results are very well documented.
 Reinforces good habits: define-before-design, design-before-code.
 This model works well for smaller projects and projects where requirements are well understood.

Drawbacks of Waterfall Model

 No feedback path
 Difficult to accommodate change requests
 No overlapping of phases
Incremental Model
 The incremental process model is also known as
the Successive version model.

Advantages-

 Delivers working software quickly.
 Clients have a clear idea of the project.
 Changes are easy to implement.
 Provides risk handling support, because of its iterations.

Disadvantages-

 A good team and properly planned execution are required.
 Because of its continuous iterations, the cost increases.
Iterative Model
Advantages of Iterative Model:

 Testing and debugging during a smaller iteration is easy.
 Parallel development can be planned.
 It easily accommodates the ever-changing needs of the project.
 Risks are identified and resolved during iteration.
 Limited time is spent on documentation and extra time on designing.

Disadvantages of Iterative Model:

 It is not suitable for smaller projects.
 More resources may be required.
 The design can be changed again and again because of imperfect requirements.
 Requirement changes can push the project over budget.
 The project completion date is not confirmed because of changing requirements.
Spiral Model

 The Spiral model is called a Meta-Model because it subsumes all the other SDLC models.
 A single loop of the spiral actually represents the Iterative Waterfall Model.
 The spiral model incorporates the stepwise approach of the Classical Waterfall Model.
 The spiral model uses the approach of the
Prototyping Model by building a prototype at the
start of each phase as a risk-handling technique.

Advantages of Spiral Model:

 Risk Handling
 Good for large projects
 Customer Satisfaction

Disadvantages of Spiral Model:

 Complex
 Expensive
 Too much dependency on risk analysis
 Difficulty in time management
Agile Model

 The Agile model was primarily designed to help a project adapt to change requests quickly.
 Agility is achieved by fitting the process to the
project and removing activities that may not be
essential for a specific project.
 Actually Agile model refers to a group of
development processes.

Agile SDLC models:


 Scrum: This methodology serves as a framework for
tackling complex projects and ensuring their
successful completion.
 Extreme programming (XP): It uses specific
practices like pair programming, continuous
integration, and test-driven development to achieve
these goals.
 Crystal: Crystal Agile methodology places a strong
emphasis on fostering effective communication and
collaboration among team members.
 Atern: This methodology is tailored for projects
with moderate to high uncertainty, where
requirements are prone to change frequently.
Extreme Programming

 Extreme programming (XP) is one of the most important software development frameworks of Agile models.
 It is used to improve software quality and responsiveness to customer requirements.

Applications of Extreme Programming (XP):

 Small projects
 Projects involving new technology or research projects
Rapid application development model (RAD)
Advantages:
 The use of reusable components helps to reduce the
cycle time of the project.
 Feedback from the customer is available at the
initial stages.
 Reduced costs as fewer developers are required.
 This model should be used for a system with known
requirements and requiring a short development
time.

Disadvantages:
 The use of powerful and efficient tools requires
highly skilled professionals.
 The absence of reusable components can lead to the
failure of the project.
 The systems which cannot be modularized suitably
cannot use this model.
 Customer involvement is required throughout the
life cycle.
 It is not meant for small-scale projects.
Evolutionary Model

 The Evolutionary model is a combination of the Iterative and Incremental models of the software development life cycle.
 The Evolutionary development model divides the development cycle into smaller, incremental waterfall models in which users are able to get access to the product at the end of each cycle.

Advantages:
 In the evolutionary model, a user gets a chance to experiment with a partially developed system.
 It reduces errors because the core modules get tested thoroughly.

Disadvantages:
 Sometimes it is hard to divide the problem into several versions that are acceptable to the customer and that can be incrementally implemented and delivered.
Prototype Model
 Prototyping is defined as the process of developing a
working replication of a product or system that has to be
engineered.
Advantages –
 New requirements can be easily accommodated as there is
scope for refinement.
 Missing functionalities can be easily figured out.
 Errors can be detected much earlier thereby saving a lot of
effort and cost, besides enhancing the quality of the
software.
 Flexibility in design.

Disadvantages –
 Costly w.r.t time as well as money.
 It is very difficult for developers to accommodate all the
changes demanded by the customer.
 There is uncertainty in determining the number of
iterations that would be required before the prototype is
finally accepted by the customer.
V-Model

 The V-Model is a type of SDLC model where the process executes in a sequential manner in a V-shape.
 It is also known as the Verification and Validation model. It is based on the association of a testing phase with each corresponding development stage.
 Each development step is directly associated with a testing phase.

Advantages:
 V-Model is used for small projects where project
requirements are clear.
 Simple and easy to understand and use.
 It enables project management to track progress
accurately.

Disadvantages:
 High risk and uncertainty.
 It is not good for complex and object-oriented projects.
 It is not suitable for projects where requirements are unclear and carry a high risk of change.
 This model does not support iteration of phases.
 It does not easily handle concurrent events.
Verification Vs Validation
 Verification includes checking documents, design, code, and programs; validation includes testing and validating the actual product.
 Verification is static testing; validation is dynamic testing.
 Verification does not include execution of the code; validation includes execution of the code.
 Methods used in verification are reviews, walkthroughs, inspections, and desk-checking; methods used in validation are black box testing, white box testing, and non-functional testing.
 Verification checks whether the software conforms to its specifications; validation checks whether the software meets the requirements and expectations of the customer.
 The goal of verification is the application and software architecture and specification; the goal of validation is the actual product.
 Verification is done by the quality assurance team; validation is executed on the software code by the testing team.
 Verification comes before validation; validation comes after verification.
 Verification asks, “Are we building the product right?”; validation asks, “Are we building the right product?”
Requirements Engineering Process

 Requirements engineering is the process of identifying, eliciting, analyzing, specifying, validating, and managing the needs and expectations of stakeholders for a software system.
 The requirements engineering process is an iterative process that involves:
 Requirements elicitation & Analysis
 Requirements specification
 Requirements verification and validation
 Requirements management
Requirements Elicitation Techniques
Characteristics of a good SRS
1. Complete: The SRS should include all the requirements for the software system, including both functional and non-functional requirements.

2. Consistent: The SRS should be consistent in its use of terminology and formatting, and should be free of contradictions.

3. Unambiguous: The SRS should be clear and specific, and should avoid using vague or imprecise language.

4. Traceable: The SRS should be traceable to other documents and artifacts, such as use cases and user stories, to ensure that all requirements are being met.

5. Verifiable: The SRS should be verifiable, which means that the requirements can be tested and validated to ensure that they are being met.

6. Modifiable: The SRS should be modifiable, so that it can be updated and changed as the software development process progresses.

7. Testable: The SRS should be written in a way that allows the requirements to be tested and validated.

8. Design independent: The SRS should not contain any implementation details.

Functional Vs Non-Functional Requirements
 A functional requirement defines a system or its component; a non-functional requirement defines a quality attribute of a software system.
 A functional requirement specifies “What should the software system do?”; a non-functional requirement places constraints on “How should the software system fulfill the functional requirements?”
 Functional requirements are specified by the user; non-functional requirements are specified by technical people, e.g. architects, technical leads, and software developers.
 Functional requirements are mandatory; non-functional requirements are not mandatory.
 A functional requirement is captured in a use case; a non-functional requirement is captured as a quality attribute.
 Functional requirements are defined at the component level; non-functional requirements apply to the system as a whole.
 Functional testing (system, integration, end-to-end, API testing, etc.) is done for functional requirements; non-functional testing (performance, stress, usability, security testing, etc.) is done for non-functional requirements.
 Functional requirements are usually easy to define; non-functional requirements are usually more difficult to define.
Software Design process
 Software Design is the process to transform the user requirements into some suitable form,
which helps the programmer in software coding and implementation.
 The aim of this phase is to transform the SRS document into the design document.
 The software design concept describes how you plan to solve the problem of designing software,
the logic, or thinking behind how you will design software.

Software design concepts include abstraction, architecture, modularity, information hiding, refinement, refactoring, and patterns.
Cohesion
 Cohesion refers to the degree to which elements within a module work together to fulfill a
single, well-defined purpose.
 A good software design will have high cohesion.
 Types of cohesion
 Functional Cohesion: Every essential element for a single computation is contained in the
component.
 Sequential Cohesion: An element outputs some data that becomes the input for another element.
 Communicational Cohesion: Two elements operate on the same input data or contribute
towards the same output data.
 Procedural Cohesion: Elements of procedural cohesion ensure the order of execution.
 Temporal Cohesion: In a module with temporal cohesion, all the tasks must be executed in the same time span.
 Logical Cohesion: The elements are logically related and not functionally.
 Coincidental Cohesion: The elements are unrelated; they have no conceptual relationship other than their location in the source code.
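To make the contrast concrete, here is a minimal Python sketch (module and function names are hypothetical, not from the source): the first module is functionally cohesive, the second only coincidentally cohesive.

```python
# Functional cohesion: every element contributes to one well-defined task,
# here computing summary statistics for a list of exam scores.
def score_statistics(scores):
    mean = sum(scores) / len(scores)
    return {"mean": mean, "min": min(scores), "max": max(scores)}


# Coincidental cohesion: unrelated responsibilities grouped only because
# they happen to live in the same class/file.
class MiscUtilities:
    def parse_date(self, text):        # date handling
        return text.split("-")

    def send_email(self, to, body):    # (stub) networking concern
        print(f"sending to {to}: {body}")

    def compress_image(self, path):    # (stub) image-processing concern
        return path + ".zip"
```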
Coupling
 Coupling is the measure of the degree of interdependence between the modules.
 A good software will have low coupling.
 Types of coupling
 Data Coupling: If the dependency between the modules is based on the fact that they
communicate by passing only data, then the modules are said to be data coupled. Module
communications don’t contain tramp data.
 Stamp Coupling: In stamp coupling, the complete data structure is passed from one module
to another module. Therefore, it involves tramp data.
 Control Coupling: If the modules communicate by passing control information, then they
are said to be control coupled.
 External Coupling: In external coupling, the modules depend on other modules, external to
the software being developed or to a particular type of hardware.
 Common Coupling: The modules have shared data such as global data structures.
 Content Coupling: In a content coupling, one module can modify the data of another
module, or control flow is passed from one module to the other module.
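The sketch below (hypothetical Python functions, not from the source) illustrates three of these levels: data coupling, stamp coupling, and control coupling.

```python
# Data coupling: only the data actually needed is passed.
def compute_interest(principal, rate, years):
    return principal * rate * years


# Stamp coupling: a whole data structure is passed although only a couple of
# fields are used; the unused fields are "tramp data".
def print_shipping_label(order):
    print(order["customer_name"], order["address"])


# Control coupling: a flag from the caller steers the callee's internal logic.
def render_report(rows, as_html):
    if as_html:
        return "<table>" + "".join(f"<tr><td>{r}</td></tr>" for r in rows) + "</table>"
    return "\n".join(str(r) for r in rows)
```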
Capability maturity model (CMM)
 It is not a software process model. It is a framework that is used to analyze the approach and
techniques followed by any organization to develop software products.
 Each level of maturity shows a process capability level. All the levels except level-1 are further described by Key Process Areas (KPAs).
Types of Maintenance

 Corrective maintenance: done when the software is not functioning properly because of some acute issues like incorrect implementation, faulty logic flow, and invalid tests.
 Adaptive maintenance: changes that mainly focus on the infrastructure of the software; they are done in response to new hardware, new platforms, new operating systems, or simply to keep the program updated.
 Perfective maintenance: addresses the usability and functionality of the software and involves changing the existing product functionality by deleting, refining, and adding new features.
 Preventive maintenance: refers to software changes that are carried out to future-proof a product in advance.
Software Testing
 Testing is the process of executing a program to find errors.
 To make software perform well it should be error-free.

 Principles of Testing:-

 All the tests should meet the customer's requirements.
 To keep testing objective, it should be performed by an independent third party.
 Exhaustive testing (testing absolutely everything just to make sure that the product cannot be destroyed or crashed by some random happenstance) is not possible; we need an optimal amount of testing based on the risk assessment of the application.
 All the tests to be conducted should be planned before they are implemented.
 It follows the Pareto rule(80/20 rule) which states that 80% of errors come from 20% of program
components.
 Start testing with small parts and extend it to large parts.
Software testing can be divided into two steps:

1. Verification:
It refers to the set of tasks that ensure that the software correctly implements a specific function.

Verification: “Are we building the product right?”

2. Validation:
It refers to a different set of tasks that ensure that the software that has been built is traceable to
customer requirements.

Validation: “Are we building the right product?”


Levels of Testing:-

 Unit Testing – done by the developer
 Integration Testing – done by the tester
 System Testing – done by the tester
 Acceptance Testing – done by the user
Types of Testing

Black Box Testing (Functional / Behavioral / I/O-driven Testing):
 Equivalence Class Testing
 Boundary Value Testing
 Acceptance Testing (Alpha Testing, Beta Testing)
 State-Based Testing
 Cause-Effect Testing
 Pair-Wise Testing

White Box Testing (Implementation / Structural / Logic-driven / Glass Box Testing):
 Control Flow Testing
 Data Flow Testing
 Basis Path Testing
 Statement Coverage
 Branch Coverage
 Path Coverage
Black box testing
 This examines the functionality of software without peering into its internal structure or coding
 The primary source of black box testing is a specification of requirements that is stated by the customer.

 Equivalence partitioning
 In this technique, the input data is divided into partitions of valid and invalid values, and all values within a partition are expected to exhibit the same behavior.

 Boundary Value Technique


 It is used to test boundary values; boundary values are those at the upper and lower limits of a variable.
 It tests whether the software produces the correct output when boundary values are entered (see the sketch below).
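A minimal sketch (Python assumed; the validator and its 18–60 range are hypothetical) showing how equivalence partitions and boundary values translate into test cases:

```python
def is_valid_age(age):
    """Hypothetical rule under test: accept ages from 18 to 60 inclusive."""
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
assert not is_valid_age(10)   # partition: age < 18        -> invalid
assert is_valid_age(35)       # partition: 18 <= age <= 60 -> valid
assert not is_valid_age(75)   # partition: age > 60        -> invalid

# Boundary value analysis: just below, on, and just above each limit.
for age, expected in [(17, False), (18, True), (19, True),
                      (59, True), (60, True), (61, False)]:
    assert is_valid_age(age) is expected
```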

 Acceptance testing
 It is formal testing based on user requirements and function processing.
 It determines whether the software is conforming specified requirements and user requirements or
not.
Alpha Testing
 Alpha testing is done to evaluate the product in the development/testing environment by a specialized team of testers, usually called alpha testers.

Beta Testing:
 Beta testing is done to assess the product by exposing it to the real end-users, usually called beta testers, in their own environment.
 Feedback is collected from the users and the defects are fixed.
 Also, this helps in enhancing the product to give a rich user experience.

 State Based Technique


 It is used to capture the behavior of the software application when different input values are given to the
same function.
 This applies to those types of applications that provide a specific number of attempts to access the application.

 Cause effect Graphing


 This technique establishes a relationship between logical input called causes with corresponding
actions called the effect.

 Pair- wise testing


 It is used to test all possible discrete combinations of values for each pair of input parameters (see the sketch below).
 This combinational method is used for testing applications that use checkbox inputs, radio buttons, list boxes, text boxes, etc.
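A hedged Python sketch of the idea (parameter names and values are illustrative): it contrasts the exhaustive number of combinations with the much smaller set of value pairs a pair-wise suite has to cover.

```python
from itertools import combinations, product

parameters = {
    "browser": ["Chrome", "Firefox"],
    "os":      ["Windows", "Linux", "macOS"],
    "payment": ["Card", "UPI"],
}

# Exhaustive testing needs every full combination: 2 * 3 * 2 = 12 cases.
print("exhaustive cases:", len(list(product(*parameters.values()))))

# Pair-wise testing only requires that every value pair of every parameter
# pair appears in at least one test case.
required_pairs = set()
for (p1, vals1), (p2, vals2) in combinations(parameters.items(), 2):
    for v1, v2 in product(vals1, vals2):
        required_pairs.add(((p1, v1), (p2, v2)))
print("value pairs to cover:", len(required_pairs))  # 6 + 4 + 6 = 16 pairs
```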
White box testing
 It is based on inner workings of an application and revolves around internal structure testing.
 The primary goal of white box testing is to focus on the flow of inputs and outputs through the software and
strengthening the security of the software.

 Control flow testing

 This type of testing method is often used by developers to test their own code and implementation, as the design, code, and implementation are best known to the developers.
 This testing method is implemented with the intention to test the logic of the code so that the user
requirements can be fulfilled.

 Data flow Testing

 It is a method that is used to find the test paths of a program according to the locations of definitions
and uses of variables in the program. It has nothing to do with data flow diagrams.
 It is concerned with:
• Statements where variables receive values,
• Statements where these values are used or referenced.
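A small illustrative sketch (hypothetical Python function) marking the definition and use points of a variable, which are the def-use pairs data flow testing tries to exercise:

```python
def discounted_total(prices, is_member):
    total = sum(prices)       # definition of `total`
    if is_member:
        total = total * 0.9   # use of `total`, then re-definition
    return round(total, 2)    # use of `total`

# Data flow test paths would exercise both def-use pairs of `total`:
# one where the discount branch is taken, and one where it is not.
```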
 Basis Path Testing

 It is a technique of selecting the paths in the control flow graph, that provide a basis set of execution
paths through the program or module.
 To design test cases using this technique, four steps are followed :
• Construct the Control Flow Graph
• Compute the Cyclomatic Complexity of the Graph
• Identify the Independent Paths
• Design Test cases from Independent Paths

 Statement Coverage

 This technique involves execution of all statements of the source code at least once.
 It is used to calculate the total number of executed statements in the source code out of total statements
present in the source code.
 Branch coverage

 It is used to cover all branches of the control flow graph.
 It covers all the possible outcomes (true and false) of each decision point at least once.

 Path Coverage

 It is a structured testing technique for designing test cases with intention to examine all possible paths of
execution at least once.
 Creating and executing tests for all possible paths results in 100% statement coverage and 100%
branch coverage.
Example of Statement, Branch and Path coverage :

Statement : We can cover all the statements in the flowchart by writing


1 Test Case that follows the following route : 1A-2C-3D-E-4G-5H.

Branch : We can cover all the branches in the flowchart by writing 2


Test Cases that follow the following two routes: 1A-2C-3D-E-4G-5H
and 1A-2B-E-4F.

Path : We can cover all the paths in the flowchart by writing 4 Test
Cases that follow the following four routes: 1A-2B-E-4F, 1A-2B-E-4G-
5H, 1A-2C-3D-E-4G-5H and 1A-2C-3D-E-4F.
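An equivalent code-level sketch (hypothetical Python function, not the flowchart above) showing how the three coverage levels differ:

```python
def classify(x, y):
    result = "none"
    if x > 0:                      # decision 1
        result = "x-positive"
    if y > 0:                      # decision 2
        result = result + ", y-positive"
    return result

# Statement coverage: classify(1, 1) alone executes every statement.
# Branch coverage:    classify(1, 1) and classify(0, 0) cover the true and
#                     false outcomes of both decisions.
# Path coverage:      four cases are needed, one per path:
#                     classify(1, 1), classify(1, 0), classify(0, 1), classify(0, 0).
```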
Cyclomatic Complexity
 It is a software metric that measures the logical complexity of the program code.
 It is the maximum number of independent paths through the program code.
 It depends only on the number of decisions in the program code.
 Insertion or deletion of functional statements from the code does not affect its cyclomatic complexity.
 It is always greater than or equal to 1.
 There are 3 commonly used methods for calculating the cyclomatic complexity-
 Cyclomatic Complexity = Total number of closed regions in the control flow graph + 1
 Cyclomatic Complexity = E – N + 2
Here, E = Total number of edges in the control flow graph, N = Total number of nodes in
the control flow graph
 Cyclomatic Complexity = P + 1
Here, P = Total number of predicate nodes contained in the control flow graph
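A brief worked example (hypothetical Python function) applying the predicate-node formula:

```python
def grade(score):
    if score >= 90:      # predicate node 1
        return "A"
    elif score >= 75:    # predicate node 2
        return "B"
    elif score >= 60:    # predicate node 3
        return "C"
    return "F"

# P = 3 predicate nodes, so Cyclomatic Complexity = P + 1 = 4.
# This matches the 4 independent paths: one per returned grade.
```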
McCall’s Quality Factors
Quality Factors Definitions
Correctness The extent to which a program satisfies its specifications and fulfills the user's
mission objectives.
Reliability The extent to which a program can be expected to perform its intended function with
required precision.
Efficiency The amount of computing resources and code required by a program to perform a
function.
Integrity The extent to which access to software or data by unauthorized persons can be
controlled.
Usability The effort required to learn, operate, prepare input, and interpret output of a
program.
Maintainability The effort required to locate and fix a defect in an operational program.
Testability The effort required to test a program to ensure that it performs its intended functions.
Flexibility The effort required to modify an operational program.
Portability The effort required to transfer a program from one hardware and/or software environment to another.
Reusability The extent to which parts of a software system can be reused in other applications.
Interoperability The effort required to couple one system with another.
Quality Assurance Vs Quality Control
 Quality assurance focuses on providing assurance that the quality requested will be achieved; quality control focuses on fulfilling the quality requested.
 QA is the technique of managing quality; QC is the technique of verifying quality.
 QA does not include execution of the program; QC always includes execution of the program.
 QA is a managerial tool; QC is a corrective tool.
 QA is process oriented; QC is product oriented.
 The aim of quality assurance is to prevent defects; the aim of quality control is to identify and fix defects.
 QA is a preventive technique; QC is a corrective technique.
 QA is a proactive measure; QC is a reactive measure.
 QA is responsible for the entire software development life cycle; QC is responsible for the software testing life cycle.
 Example of QA: verification; example of QC: validation.


Software Configuration Management (SCM)

 SCM is a set of activities that controls changes to the existing system, and audits and reports on the changes made.
 It uses tools which ensure that the necessary change has been implemented adequately in the appropriate component.
 The SCM process defines a number of tasks:
 Identification of objects in the software configuration
 Version Control
 Change Control
 Configuration Audit
 Status Reporting
Clean Room Software Engineering

 Clean room software engineering is a software development approach to producing quality software.

 Some of the tasks which occur in clean room engineering process :


 Requirements gathering.
 Incremental planning.
 Formal design.
 Correctness verification.
 Code generation and inspection.
 Statistical test planning.
 Statistical use testing.
 Certification.

 Benefits of Clean Room Software engineering :


 Delivers high-quality products.
 Increases productivity.
 Reduces development cost.
 Errors are found early.
 Reduces the overall project time.
 Saves resources.
Software Reuse
 It is the process of creating software systems from existing software systems, rather than
building software system from scratch.
REUSE MATURITY MODEL :
 LEVEL-1 : Single Project Source Based Reuse – At the very first maturity level, organizations
placed all their source code within a single project.
 LEVEL-2 : Multi Project Source Based Reuse – In this stage, source code is divided into multiple
projects and practice source-based reuse between projects.
 LEVEL-3 : Ad hoc Binary Reuse – Under this approach, project boundaries are realigned and no longer mirror reuse boundaries. Projects at this level can correspond to applications.
 LEVEL-4 : Controlled Binary Reuse and the Reuse/Release Equivalence Principle –At this
level, each release of a project is controlled and tracked with a version number. At this level, when
the bug is discovered, the exact version of the component with the bug can be identified.
Software Re-engineering
 Software Re-engineering is a process of software
development which is done to improve the
maintainability of a software system.
 Re-engineering is the examination and alteration of a
system to reconstitute it in a new form.
 Re-engineering can be done for a variety of reasons, such as:
 To add new features
 To support new platforms.
 To improve maintainability
 To meet new regulations and compliance
 To improve the quality, performance, and
maintainability of existing software systems
Reverse Engineering
 Software Reverse Engineering is a process of recovering the design, requirement
specifications and functions of a product from an analysis of its code.
 The purpose of reverse engineering is to facilitate the maintenance work by improving the
understandability of a system and to produce the necessary documents for a legacy system.
 Reverse Engineering Goals:
 Cope with Complexity.
 Recover lost information.
 Detect side effects.
 Synthesise higher abstraction.
 Facilitate Reuse.
Software Maturity Index (SMI) = [Mf – (Fa + Fc + Fd)] / Mf
where
Mf = the number of modules in the current release,
Fa = the number of modules in the current release that have been added,
Fc = the number of modules in the current release that have been changed,
Fd = the number of modules in the current release that have been deleted.
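A quick worked example with illustrative numbers (not from the source):

```python
Mf, Fa, Fc, Fd = 100, 5, 10, 3        # modules in release / added / changed / deleted
SMI = (Mf - (Fa + Fc + Fd)) / Mf      # (100 - 18) / 100
print(SMI)                            # 0.82 -- values approaching 1.0 indicate a stabilizing product
```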
Effort = a(KLOC)^b, where the values of a and b depend on the COCOMO project type.
Development time = c(Effort)^d, where the values of c and d depend on the COCOMO project type.
Availability = MTBF / (MTBF + MTTR)
Integrity = 1 – threat × (1 – attack)
VAF (Value Adjustment Factor) / CAF (Complexity Adjustment Factor) = 0.65 + 0.01 × Σ (i = 1 to 14) Fi
UFP (Unadjusted Function Point) = sum of each function count multiplied by its corresponding weight from the table below
FP (Function Point) = UFP × VAF
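A hedged worked example for the effort, development time, and availability formulas, using the commonly quoted basic COCOMO constants for an organic project (a = 2.4, b = 1.05, c = 2.5, d = 0.38); all numbers are illustrative.

```python
kloc = 32
effort = 2.4 * (kloc ** 1.05)        # ~91 person-months
dev_time = 2.5 * (effort ** 0.38)    # ~14 months
print(round(effort, 1), round(dev_time, 1))

MTBF, MTTR = 480, 20                 # hours between failures / hours to repair
availability = MTBF / (MTBF + MTTR)  # 480 / 500 = 0.96
print(availability)
```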

Weights used for calculating UFP:

Function Units            Low   Avg   High
External Input             3     4     6
External Output            4     5     7
External Enquiry           3     4     6
Internal Logical File      7    10    15
External Interface File    5     7    10
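A hedged worked example that combines the weight table with the VAF and FP formulas above (all counts and factor values are illustrative):

```python
counts_and_weights = {                 # (count, chosen average weight)
    "External Input":          (20, 4),
    "External Output":         (10, 5),
    "External Enquiry":        (5,  4),
    "Internal Logical File":   (4, 10),
    "External Interface File": (2,  7),
}
UFP = sum(c * w for c, w in counts_and_weights.values())   # 80+50+20+40+14 = 204
F_total = 40                                               # sum of the 14 adjustment factors (each rated 0-5)
VAF = 0.65 + 0.01 * F_total                                # 1.05
FP = UFP * VAF                                             # 214.2
print(UFP, VAF, FP)
```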
