
21IT1904 - SOFTWARE TESTING AND AUTOMATION

UNIT I
FOUNDATIONS OF SOFTWARE TESTING
Why do we test Software?, Black-Box Testing and White-Box Testing, Software Testing
Life Cycle, V-model of Software Testing, Program Correctness and Verification,
Reliability versus Safety, Failures, Errors and Faults (Defects), Software Testing
Principles, Program Inspections, Stages of Testing: Unit Testing, Integration Testing,
System Testing

1.1 INTRODUCTION
Software testing is a method for determining whether a piece of software meets its
requirements and is free of defects. It involves running software or system components
manually or automatically in order to evaluate one or more properties of interest.
The aim of software testing is to find faults, gaps or unfulfilled requirements in
comparison to the documented specifications.
Some prefer to use the terms white box and black box testing to describe the
concept of software testing. To put it simply, software testing is the process of
validating the application that is being tested.

1.1.1 What is Software Testing


Software testing is the process of determining whether a piece of software is correct by
taking into account all of its characteristics (reliability, scalability, portability, re-usability
and usability) and analyzing how its various components operate in order to detect any bugs,
faults or flaws.
Software testing delivers assurance of the software's fitness and offers an independent
view of the program. It entails testing each component that makes up
the necessary services to see whether or not it satisfies the set criteria. Additionally, the
procedure informs the customer about the software's quality.
In simple words, "Testing is the process of executing a program with the intent of
finding faults."
Testing is required because failure of the program at any point owing to a lack of
testing would be harmful. Software cannot be released to the end user without being tested.

1.1.2 What is Testing


Testing is a collection of methods to evaluate an application's suitability for use in
accordance with a predetermined script; however, testing is not able to detect every
application flaw. The basic goal of testing is to find application flaws so that they may be
identified and fixed. It merely shows that a product does not work in certain particular
circumstances, not that it works correctly under all circumstances.
Testing compares the behaviour and state of the software against mechanisms that can
identify problems in it. These mechanisms may include, but are not restricted to, previous
iterations of the same or similar products, comparable products, interfaces for the expected
purpose, pertinent standards or other criteria.
Testing includes both the analysis and execution of the code in different settings and
environments, as well as analysis of the whole code. In the present software development
scenario a testing team may be independent of the development team, so that information
obtained from testing may be utilized to improve the software development process.

The intended audience's adoption of the program, its user-friendly graphical user
interface, its robust functionality under load testing, etc., are all factors in its success. For
instance, the target market for banking software and a video game are very different. As a
result, an organization can determine whether a software product it produces will be useful
to its customers and other audience members.

1.1.3 Why Software Testing is Important?


Software testing is a very expensive and critical activity; but releasing the software
without testing is definitely more expensive and dangerous. We shall try to find more errors
in the early phases of software development. The cost of removal of such errors will be very
reasonable as compared to those errors which we may find in the later phases of software
development. The cost to fix errors increases drastically from the specification phase to the
test phase and finally to the maintenance phase as shown in Figure 1.1.

Figure 1.1 Phase wise cost of fixing an error

If an error is found and fixed in the specification and analysis phase, it hardly costs anything.
We may term this as ‘1 unit of cost’ for fixing an error during specifications and analysis
phase. The same error, if propagated to design, may cost 10 units and if, further propagated
to coding, may cost 100 units. If it is detected and fixed during the testing phase, it may lead
to 1000 units of cost. If it could not be detected even during testing and is found by the
customer after release, the cost becomes very high. We may not be able to predict the cost of
failure for a life critical system’s software. The world has seen many failures and these
failures have been costly to the software companies.
The fact is that we release software that still contains errors, even after doing
sufficient testing. No software would ever be released by its developers if they were asked to
certify that it is free of errors. Testing, therefore, continues up to the point where it is
considered that the cost of further testing would significantly outweigh the returns.

1.1.4. What is the Need of Testing?


Software flaws may be costly or even dangerous, so testing is crucial. History is
replete with instances where software defects have led to financial and personal loss:
 Over 300,000 traders in the financial markets were impacted after a software error
caused the London Bloomberg terminal to collapse in April 2015. It made the
government delay a 3-billion-pound debt auction.
 Nissan recalled nearly 1 million vehicles from the market because the airbag sensor
software was flawed. Due to this software flaw, two accidents have been
documented.
 Starbucks' POS system malfunctioned, forcing it to shut nearly 60% of its
locations in the United States and Canada. The stores served free coffee because
they could not complete the purchases.
 Due to a technical error, some of Amazon's third-party sellers had their product
prices slashed to 1p. They suffered severe losses as a result.
 A vulnerability in Windows 10: due to a defect in the win32k system, users were able
to bypass security sandboxes.
 In 2015, a software flaw rendered the F-35 fighter jet incapable of accurately
detecting targets.
 On April 26, 1994, an Airbus A300 operated by China Airlines crashed due to a
software error, killing 264 people.
 Three patients died and three others were badly injured in 1985 when a software
glitch caused Canada's Therac-25 radiation treatment system to fail and deliver
deadly radiation doses to patients.
 In May 1996, a software error led to the crediting of 920 million US dollars to the
bank accounts of 823 clients of a large U.S. bank.
 In April 1999, a software error resulted in the failure of a $1.2 billion military
satellite launch, making it the most expensive accident in history.

1.1.5 What are the Benefits of Software Testing?


The following are advantages of employing software testing:
Cost-effectiveness: One of the key benefits of software testing is that it is cost-effective.
Timely testing of any IT project enables you to make long-term financial savings. If flaws
are found earlier in the software testing process, fixing them is less expensive.
Security: This is a sensitive and critical advantage of software testing. People are looking for
reliable products, and testing helps eradicate risks and issues early.
Product quality: Every software product must meet its quality criteria. Testing guarantees
that buyers get a high-quality product.
Customer satisfaction: Providing consumers with satisfaction is the primary goal of every
product. UI/UX testing helps guarantee the optimum user experience.

1.1.6 Type of Software Testing


1. Manual testing:
The process of checking the functionality of an application as per the customer needs
without taking any help of automation tools is known as manual testing. While performing
the manual testing on any application, we do not need any specific knowledge of any testing
tool, rather than have a proper understanding of the product so we can easily prepare the test
document.

Manual testing can be further divided into three types of testing, which are as
follows:
 White box testing
 Black box testing
 Gray box testing.

Figure 1.2 Types of Testing


2. Automation testing:
Automation testing is the process of converting manual test cases into test scripts
with the help of automation tools or a programming language. With the help of automation
testing, we can enhance the speed of our test execution because it does not require human
effort during execution; we only need to write the test scripts and execute them.
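
As an illustration only (not part of the original notes), the following hedged sketch shows
how a manual test case such as "add 2 and 3 and expect 5" might be converted into an
automated test script using Python's built-in unittest framework; the add function here is an
assumed stand-in for real application code.

# Hypothetical example: automating a manual test case with Python's unittest.
# The function under test (add) is defined inline here; in practice it would
# be imported from the application code being tested.
import unittest


def add(a, b):
    """Unit under test: returns the sum of two numbers."""
    return a + b


class TestAdd(unittest.TestCase):
    def test_add_positive_numbers(self):
        # Manual test step "enter 2 and 3, expect 5" expressed as a script.
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)


if __name__ == "__main__":
    unittest.main()

Once written, such a script can be executed repeatedly by a test runner or a continuous
integration job without further human effort.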

1.2 BLACK-BOX TESTING AND WHITE-BOX TESTING


Black box testing (also called functional testing) is testing that ignores the internal mechanism
of a system or component and focuses solely on the outputs generated in response to selected
inputs and execution conditions. White box testing (also called structural testing and glass box
testing) is testing that takes into account the internal mechanism of a system or component.

1.2.1 What is White-Box Testing


 White-box testing examines a system with knowledge of how it operates within. The
tester selects inputs based on the internal structure of the code and checks not only the
outputs produced but also that the internal paths, conditions and data flows behave as
intended.
 Because of the system's internal viewpoint, the phrase "white box" is employed. The
term “clear box," "white box" or "transparent box" refers to the capability of seeing
the software's inner workings through its exterior layer.
 Developers carry it out before sending the programme to the testing team, who then
conduct black-box testing. Testing the internal structure of the application is the
primary goal of white-box testing. As it covers unit testing and integration testing, it
is performed at the lower levels. Given that it primarily focuses on the code structure,
paths, conditions and branches of a program or piece of software, it
necessitates programming skills. Focusing on the flow of inputs and outputs through the
programme and enhancing its security are the main objectives of white-box testing.
 It is also referred to as transparent testing, code-based testing, structural testing and
clear box testing. It is a good fit and is recommended for testing algorithms.

1.2.1.1 Types of White Box Testing in Software Testing


White box testing is a type of software testing that examines the internal structure
and design of a program or application. The following are some common types of white box
testing:
 Unit testing: Tests individual units or components of the software to ensure they
function as intended.
 Integration testing: Tests the interactions between different units or components of
the software to ensure they work together correctly.
 Functional testing: Tests the functionality of the software to ensure it meets the
requirements and specifications.
 Performance testing: Tests the performance of the software under various loads and
conditions to ensure it meets performance requirements.
 Security testing: Tests the software for vulnerabilities and weaknesses to ensure it is
secure.
 Code coverage testing: Measures the percentage of code that is executed during
testing to ensure that all parts of the code are tested.
 Regression testing: Tests the software after changes have been made to ensure that
the changes did not introduce new bugs or issues.
1.2.1.2 Techniques of White Box Testing
There are several techniques used for white box testing:
 Statement coverage: This testing approach involves going over every statement in
the code to make sure that each one has been run at least once. As a result, the code is
checked line by line.
 Branch coverage: A testing approach in which test cases are created to ensure that
each branch is tested at least once. This method examines all possible outcomes of
every decision in the system (a small sketch follows this list).
 Path coverage: Path coverage is a software testing approach that defines and covers
all potential pathways. Pathways are sequences of statements that may be executed
from the system entry to its exit points. It takes a lot of time.
 Loop testing: With the help of this technique, loops and values in both independent
and dependent code are examined. Errors often happen at the start and end of
loops. This method includes testing simple loops, nested loops and concatenated
loops.

Panimalar Engineering College 5


21IT1904 - SOFTWARE TESTING AND AUTOMATION

 Basis path testing: Using this methodology, control flow diagrams are created from
code and subsequently calculations are made for cyclomatic complexity. For the
purpose of designing the fewest possible test cases, cyclomatic complexity specifies
the quantity of separate routes.
o Cyclomatic complexity is a software metric used to indicate the complexity of
a program. It is computed using the Control Flow Graph of the program.
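
To make these techniques concrete, here is a small illustrative sketch in Python (the
classify function and its rules are assumptions made for this example, not part of the
syllabus text). The function contains two if-statement decisions, so, counting each decision
once, its cyclomatic complexity is 2 + 1 = 3, and three test cases are enough to achieve
statement and branch coverage.

# Illustrative function with two decisions (if statements).
def classify(score):
    if score < 0 or score > 100:      # decision 1: invalid input
        return "invalid"
    if score >= 50:                   # decision 2: pass mark
        return "pass"
    return "fail"

# Branch coverage: each decision must evaluate to both True and False.
assert classify(150) == "invalid"   # decision 1 True
assert classify(75) == "pass"       # decision 1 False, decision 2 True
assert classify(20) == "fail"       # decision 1 False, decision 2 False

These three cases also execute every statement; for larger programs the number of distinct
paths grows very quickly, which is why full path coverage "takes a lot of time" as noted
above.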
1.2.1.3 Advantages of White Box Testing
 Complete coverage.
 Better understanding of the system.
 Improved code quality.
 Increase efficiency.
 Early detection of error.

1.2.1.4 Disadvantages of White Box Testing


 This testing is very expensive and time-consuming.
 Redesign of code needs test cases to be written again.
 Missing functionalities cannot be detected.
 This technique can be very complex and at times not realistic.
 White-box testing requires a programmer with a high level of knowledge due to the
complexity of the level of testing that needs to be done.

1.2.2 What is Black Box Testing


Testing a system in a "black box" is doing so without knowing anything about how it
operates within. A tester inputs data and monitors the output produced by the system being
tested. This allows for the identification of the system's reaction time, usability difficulties
and reliability concerns as well as how the system reacts to anticipated and unexpected user
activities.
Because it tests a system from beginning to finish, black box testing is a potent
testing method. A tester may imitate user action to check if the system fulfills its promises,
much as end users "don't care” how a system is programmed or designed and expect to get a
suitable answer to their requests. A black box test assesses every important subsystem along
the route, including the UI/UX, database, dependencies and integrated systems, as well as the
web server or application server.

1.2.2.1 Black Box Testing Pros and Cons

Pros:
1. Testers do not require technical knowledge, programming or IT skills.
2. Testers do not need to learn implementation details of the system.
3. Tests can be executed by crowdsourced or outsourced testers.
4. Low chance of false positives.
5. Tests have lower complexity, since they simply model common user behavior.

Cons:
1. Difficult to automate.
2. Requires prioritization; it is typically infeasible to test all user paths.
3. Difficult to calculate test coverage.
4. If a test fails, it can be difficult to understand the root cause of the issue.
5. Tests may be conducted at low scale or on a non-production-like environment.

1.2.2.2 Types of Black Box Testing


Black box testing can be applied to three main types of tests: Functional, non-
functional and regression testing.
1. Functional Testing:
Specific aspects or operations of the programme being tested may be tested via
black box testing; for instance, making sure that the right user credentials can be used to log
in and that incorrect ones cannot.
Functional testing might concentrate on the most important features of the
programme (smoke testing/sanity testing), on how well the system works as a whole (system
testing) or on the integration of its essential components.
2. Non-functional Testing:
 Beyond features and functioning, black box testing allows for the inspection of
extra software components. A non-functional test examines "how" rather than "if"
the programme can carry out a certain task.
 Black box testing may determine whether software is:
a) Usable and simple for its users to comprehend;
b) Performant under predicted or peak loads;
c) Compatible with relevant devices, screen sizes, browsers or operating systems;
d) Exposed to security flaws or frequent security threats.

3. Regression Testing:
Black box testing may be employed to determine whether a new software version
displays a regression, i.e. a decrease in capabilities, from one version to the next. Regression
testing may be used to evaluate both functional and non-functional features of the
programme, for example when a particular feature no longer functions as anticipated in the
new version or when a formerly fast-performing action becomes much slower in the new
version.
1.2.2.3 Black Box Testing Techniques
1. Equivalence partitioning:
Testing professionals may organize potential inputs into "partitions" and test just one
sample input from each category. For instance, it is sufficient for testers to verify one birth
date in the "under 18" group and one date in the "over 18" group if a system asks for a user's
birth date and returns the same answer for users under the age of 18 and a different response
for users over 18.
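
A minimal sketch of this idea in Python, where is_adult and its 18-and-over rule are
assumptions introduced only for illustration:

def is_adult(age):
    # Assumed rule for the example: 18 and over counts as "adult".
    return age >= 18

# Equivalence partitioning: one representative value per partition is enough.
assert is_adult(12) is False   # representative of the "under 18" partition
assert is_adult(35) is True    # representative of the "18 and over" partition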

2. Boundary value analysis:


Testers can determine if a system responds differently around a certain boundary
value. For instance, a particular field could only support values in the range of 0 and 99.
Testing personnel may concentrate on the boundary values (1, 0, 99 and 100) to determine if
the system is appropriately accepting and rejecting inputs.
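
Continuing the 0 to 99 field mentioned above, a hedged sketch of boundary value test cases
might look like this (the accepts function is an assumed stand-in for the field's validation
logic, and -1 is added alongside the boundary values listed above):

def accepts(value):
    # Assumed validation rule: the field accepts integers from 0 to 99 inclusive.
    return 0 <= value <= 99

# Boundary value analysis: test on and just outside each boundary.
assert accepts(0) is True      # lower boundary, valid
assert accepts(-1) is False    # just below lower boundary, invalid
assert accepts(99) is True     # upper boundary, valid
assert accepts(100) is False   # just above upper boundary, invalid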

3. Decision Table Testing


Numerous systems provide results depending on a set of parameters. Once "rules" that
are combinations of criteria have been identified, each rule's outcome can then be
determined and test cases may then be created for each rule.
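
As a small illustrative sketch (the login rules and function names here are assumptions, not
taken from the notes), a decision table with two conditions can be written down and
exercised with one test case per rule:

# Decision table for an assumed login example.
# Conditions: (valid_user, valid_password) -> expected outcome (one rule per entry).
decision_table = {
    (True,  True):  "logged in",
    (True,  False): "error: wrong password",
    (False, True):  "error: unknown user",
    (False, False): "error: unknown user",
}

def login_outcome(valid_user, valid_password):
    # Assumed behaviour of the system under test, shown inline for the example.
    if not valid_user:
        return "error: unknown user"
    if not valid_password:
        return "error: wrong password"
    return "logged in"

# One test case per rule in the decision table.
for (user_ok, pwd_ok), expected in decision_table.items():
    assert login_outcome(user_ok, pwd_ok) == expected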

1.2.3 Differences between Black Box Testing vs White Box Testing:

1. Definition: Black box testing is a way of software testing in which the internal structure,
program or code is hidden and nothing is known about it. White box testing is a way of
testing the software in which the tester has knowledge about the internal structure, code or
program of the software.
2. Code implementation: Implementation of code is not needed for black box testing; code
implementation is necessary for white box testing.
3. Who performs it: Black box testing is mostly done by software testers; white box testing
is mostly done by software developers.
4. Implementation knowledge: No knowledge of implementation is needed for black box
testing; knowledge of implementation is required for white box testing.
5. Other view: Black box testing can be referred to as outer or external software testing;
white box testing is the inner or internal software testing.
6. Nature of test: Black box testing is a functional test of the software; white box testing is
a structural test of the software.
7. Starting point: Black box testing can be initiated based on the requirement specifications
document; white box testing is started after a detailed design document.
8. Programming knowledge: No knowledge of programming is required for black box
testing; it is mandatory for white box testing.
9. Focus: Black box testing is the behaviour testing of the software; white box testing is the
logic testing of the software.
10. Level: Black box testing is applicable to the higher levels of software testing; white box
testing is generally applicable to the lower levels.
11. Also called: Black box testing is also called closed testing; white box testing is also
called clear box testing.
12. Time: Black box testing is the least time consuming; white box testing is the most time
consuming.
13. Algorithm testing: Black box testing is not suitable or preferred for algorithm testing;
white box testing is suitable for algorithm testing.
14. Approach: Black box testing can be done by trial and error methods; in white box
testing, data domains along with inner or internal boundaries can be better tested.
15. Example: Black box - search something on Google by using keywords; white box -
supplying inputs to check and verify loops.
16. Test design techniques: Black box - decision table testing, all-pairs testing, equivalence
partitioning, error guessing; white box - control flow testing, data flow testing, branch
testing.
17. Types: Black box - functional testing, non-functional testing, regression testing; white
box - path testing, loop testing, condition testing.
18. Exhaustiveness: Black box testing is less exhaustive as compared to white box testing;
white box testing is comparatively more exhaustive.

1.3 SOFTWARE TESTING LIFE CYCLE


The Software Testing Life Cycle (STLC) is a systematic approach to testing a
software application to ensure that it meets the requirements and is free of defects. It is a
process that follows a series of steps or phases, and each phase has specific objectives and
deliverables. The STLC is used to ensure that the software is of high quality, reliable, and
meets the needs of the end-users.

The main goal of the STLC is to identify and document any defects or issues in the
software application as early as possible in the development process. This allows for issues
to be addressed and resolved before the software is released to the public.
The stages of the STLC include Requirement Analysis, Test Planning, Test Case
Development, Test Environment Setup, Test Execution, and Test Closure. Each of these stages
includes specific activities and deliverables that help to ensure that the software is
thoroughly tested and meets the requirements of the end users.
Overall, the STLC is an important process that helps to ensure the quality of software
applications and provides a systematic approach to testing. It allows organizations to release
high-quality software that meets the needs of their customers, ultimately leading to customer
satisfaction and business success.

Phases of STLC:

1. Requirement Analysis: Requirement Analysis is the first step of the Software Testing
Life Cycle (STLC). In this phase the quality assurance team understands the requirements,
i.e., what is to be tested. If anything is missing or not understandable, the quality assurance
team meets with the stakeholders to gain detailed knowledge of the
requirements.
The activities that take place during the Requirement Analysis stage include:
• Reviewing the software requirements document (SRD) and other related documents
• Interviewing stakeholders to gather additional information
• Identifying any ambiguities or inconsistencies in the requirements
• Identifying any missing or incomplete requirements
• Identifying any potential risks or issues that may impact the testing process
• Creating a requirement traceability matrix (RTM) to map requirements to test
cases
At the end of this stage, the testing team should have a clear understanding of the
software requirements and should have identified any potential issues that may impact the
testing process. This will help to ensure that the testing process is focused on the most
important areas of the software and that the testing team is able to deliver high-quality
results.

2. Test Planning: Test Planning is the most efficient phase of the software testing life cycle,
where all testing plans are defined. In this phase the manager of the testing team calculates
the estimated effort and cost for the testing work. This phase starts once the requirement-
gathering phase is completed.
The activities that take place during the Test Planning stage include:
• Identifying the testing objectives and scope
• Developing a test strategy: selecting the testing methods and techniques that will be
used
• Identifying the testing environment and resources needed
• Identifying the test cases that will be executed and the test data that will be used
• Estimating the time and cost required for testing
• Identifying the test deliverables and milestones
• Assigning roles and responsibilities to the testing team
• Reviewing and approving the test plan
At the end of this stage, the testing team should have a detailed plan for the testing activities
that will be performed, and a clear understanding of the testing objectives, scope, and
deliverables. This will help to ensure that the testing process is well-organized and that the
testing team is able to deliver high-quality results.

3. Test Case Development: The test case development phase starts once the test
planning phase is completed. In this phase the testing team writes down the detailed test cases.
The testing team also prepares the required test data for the testing. Once the test cases are
prepared, they are reviewed by the quality assurance team.
The activities that take place during the Test Case Development stage include:
• Identifying the test cases that will be developed
• Writing test cases that are clear, concise, and easy to understand
• Creating test data and test scenarios that will be used in the test cases
• Identifying the expected results for each test case
• Reviewing and validating the test cases
• Updating the requirement traceability matrix (RTM) to map requirements to test
cases
At the end of this stage, the testing team should have a set of comprehensive and
accurate test cases that provide adequate coverage of the software or application. This will
help to ensure that the testing process is thorough and that any potential issues are identified
and addressed before the software is released.
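
Purely as an illustration, the sketch below shows one possible way to record a written test
case and its requirement traceability matrix (RTM) entry in a structured form; the field
names and identifiers are assumptions, not a prescribed format.

# Illustrative structure for one documented test case (assumed field names).
test_case = {
    "id": "TC-001",
    "requirement_id": "REQ-12",          # entry for the traceability matrix (RTM)
    "title": "Login with valid credentials",
    "preconditions": "A registered user account exists",
    "steps": ["Open the login page",
              "Enter a valid username and password",
              "Click the Login button"],
    "test_data": {"username": "user1", "password": "Secret#1"},
    "expected_result": "User is redirected to the dashboard",
}

# A requirement traceability matrix can then map requirements to test cases.
rtm = {"REQ-12": ["TC-001"]}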

4. Test Environment Setup: Test environment setup is a vital part of the STLC. Basically,
the test environment decides the conditions under which the software is tested. This is an
independent activity and can be started along with test case development. The testing team
is not involved in this process; either the developer or the customer creates the testing
environment.

5. Test Execution: After test case development and test environment setup, the test
execution phase starts. In this phase the testing team starts executing the test cases prepared
in the earlier step.
The activities that take place during the test execution stage of the Software Testing
Life Cycle (STLC) include:
• Test execution: The test cases and scripts created in the test design stage are run
against the software application to identify any defects or issues.
• Defect logging: Any defects or issues that are found during test execution are
logged in a defect tracking system, along with details such as the severity, priority, and
description of the issue.


• Test data preparation: Test data is prepared and loaded into the system for test
execution
• Test environment setup: The necessary hardware, software, and network
configurations are set up for test execution
• Test execution: The test cases and scripts are run, and the results are collected and
analyzed.
• Test result analysis: The results of the test execution are analyzed to determine the
software’s performance and identify any defects or issues.
• Defect retesting: Any defects that are identified during test execution are retested
to ensure that they have been fixed correctly.
• Test Reporting: Test results are documented and reported to the relevant
stakeholders.
It is important to note that test execution is an iterative process and may need to be
repeated multiple times until all identified defects are fixed and the software is deemed fit for
release.

6. Test Closure: Test closure is the final stage of the Software Testing Life Cycle (STLC)
where all testing-related activities are completed and documented. The main objective of the
test closure stage is to ensure that all testing-related activities have been completed and that
the software is ready for release.
At the end of the test closure stage, the testing team should have a clear
understanding of the software’s quality and reliability, and any defects or issues that were
identified during testing should have been resolved. The test closure stage also includes
documenting the testing process and any lessons learned so that they can be used to improve
future testing processes.
The main activities that take place during the test closure stage include:
• Test summary report: A report is created that summarizes the overall testing
process, including the number of test cases executed, the number of defects found, and the
overall pass/fail rate.
• Defect tracking: All defects that were identified during testing are tracked and
managed until they are resolved.
• Test environment clean-up: The test environment is cleaned up, and all test data
and test artifacts are archived.
• Test closure report: A report is created that documents all the testing-related
activities that took place, including the testing objectives, scope, schedule, and resources
used.
• Knowledge transfer: Knowledge about the software and testing process is shared
with the rest of the team and any stakeholders who may need to maintain or support the
software in the future.
• Feedback and improvements: Feedback from the testing process is collected and
used to improve future testing processes.

1.4 V-MODEL OF SOFTWARE TESTING


The V-Model is also referred to as the Verification and Validation Model. In this model, each
phase of the SDLC must be completed before the next phase starts. It follows a sequential
design process, like the waterfall model. Testing of each stage is planned in parallel with the
corresponding stage of development.
Verification: It involves a static analysis method (review) done without executing
code. It is the process of evaluating the product development process to find whether the
specified requirements are met.


Validation: It involves dynamic analysis methods (functional, non-functional); testing
is done by executing code. Validation is the process of evaluating the software after the
completion of the development process to determine whether it meets the customer's
expectations and requirements.
So the V-Model contains the Verification phases on one side and the Validation phases on
the other side. The Verification and Validation processes are joined by the coding phase in a
V shape; thus it is known as the V-Model.

The following are the various phases of the Verification side of the V-model:

1. Business requirement analysis: This is the first step, where the product requirements are
understood from the customer's side. This phase involves detailed communication to
understand the customer's expectations and exact requirements.
2. System Design: In this stage system engineers analyze and interpret the business logic
of the proposed system by studying the user requirements document.
3. Architecture Design: The baseline in selecting the architecture is that it should realize
everything the system requires; it typically consists of the list of modules, brief functionality
of each module, their interface relationships, dependencies, database tables, architecture
diagrams, technology details, etc. The integration test design is carried out in this phase.
4. Module Design: In the module design phase, the system is broken down into small
modules. The detailed design of the modules is specified, which is known as Low-Level
Design (LLD).
5. Coding Phase: After designing, the coding phase is started. Based on the
requirements, a suitable programming language is decided. There are some guidelines and
standards for coding. Before checking in the repository, the final build is optimized for better
performance, and the code goes through many code reviews to check the performance.

The following are the various phases of the Validation side of the V-model:


1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the
module design phase. These UTPs are executed to eliminate errors at code level or unit level.
A unit is the smallest entity which can independently exist, e.g., a program module. Unit
testing verifies that the smallest entity can function correctly when isolated from the rest of
the codes/ units.
2. Integration Testing: Integration Test Plans are developed during the Architectural

Panimalar Engineering College 12


21IT1904 - SOFTWARE TESTING AND AUTOMATION

Design Phase. These tests verify that groups created and tested independently can coexist
and communicate among themselves.
3. System Testing: System Test Plans are developed during the System Design Phase.
Unlike Unit and Integration Test Plans, System Test Plans are composed by the client's
business team. System testing ensures that the expectations from the developed application
are met.
4. Acceptance Testing: Acceptance testing is related to the business requirement
analysis part. It includes testing the software product in the user environment. Acceptance
tests reveal compatibility problems with the other systems available within the user
environment. It also discovers non-functional problems such as load and performance
defects in the real user environment.

When to use V-Model?


When the requirements are well defined and unambiguous.
The V-shaped model should be used for small to medium-sized projects where
requirements are clearly defined and fixed.
The V-shaped model should be chosen when ample technical resources are available
with essential technical expertise.

Advantage (Pros) of V-Model:


1. Easy to understand.
2. Testing activities like planning and test design happen well before coding.
3. This saves a lot of time; hence there is a higher chance of success than with the waterfall
model.
4. Avoids the downward flow of defects.
5. Works well for small projects where requirements are easily understood.

Disadvantage (Cons) of V-Model:


1. Very rigid and least flexible.
2. Not good for complex projects.
3. Software is developed during the implementation stage, so no early prototypes of
the software are produced.
4. If any changes happen midway, then the test documents, along with the
requirement documents, have to be updated.

1.5 PROGRAM CORRECTNESS AND VERIFICATION


We discuss software correctness from the two perspectives of the operational and the
symbolic approach. To show that a program is correct
• from the operational perspective we use testing;
• from the symbolic perspective we use proof.
The two perspectives –and with them testing and proof– are tightly related and we
make ample use of this relationship.
Testing a Simple Fragment (Version 1)
Knowing about the relationship between values and facts we can formulate a simple
testing method for program fragments. The fragments have the following general shape,
consisting of three parts
initialise variables
carry out computation
check condition
The initialise variables part sets up the input values for the fragment. Usually, the
input values are chosen taking into account conditions concerning input values, e.g., to avoid
division by zero. The carry out computation part contains the “program”. The check
condition part specifies a condition to determine whether the program is correct.

Abstracting from Values and Computations


In order to complete the abstraction from values and computations we are still
lacking a means to describe assumptions about values at the beginning of a fragment. The
corresponding construct is called assume. It specifies a fact that can be assumed to be true at
the locations where it is written. We can imagine assume to have the following effect during
program execution: if the condition assumed is true, assume does nothing; if the condition is
false, assume terminates the program “gently”, that is, it is not considered an error. Contrast
this to the assert statement, that aborts with an error when its condition is false. Now,
assume is not very useful for programming because we are interested in error conditions that
may be encountered during program execution, but assume is very useful for reasoning about
program fragments.
Consider the following fragment involving a division.
val x: Z = randomInt()
assume(x >= 5)
val y: Z = 5 / x
assert(y <= 1)
As long as we know that the fact x >= 5 holds at the beginning we can be certain that
at the end y <= 1 holds. The use of assume will turn out to be a powerful tool for drafting
program fragments and also for determining test cases. In fact, in conjunction with assert it is
so useful, that a pair consisting of an assume statement followed by an assert statement is
given its own terminology. We call such a pair a contract. We will see contracts in different
shapes. Special syntax is used for different purposes, for instance, when specifying contracts
for functions. Note that if a program fragment with an assume statement was “called” and we
would like that it does not terminate gently, we would have to show that the condition of the
assume statement is met. We will come back to this point later.

Testing a Simple Fragment (Version 2)


Instead of giving an initialisation as in Version 1, we can also use an assume statement
to impose an initial condition for a test. Using the concept of a contract, we can specify,
assume initial condition on variables
specify computation
assert final condition on variables
The fragment terminates gently if the initial condition is not met and aborts with an
error if the initial condition was met but the final condition is not. This way of specifying
tests turns out to be a foundation for deriving test cases according to Version 1. Reasoning
about contracts, we can systematically develop test cases.
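
As a rough illustration of how a Version 1 test can be derived from the contract above, the
following Python sketch picks a concrete input that satisfies the pre-condition x >= 5 and
checks the post-condition y <= 1 (Python's // integer division stands in for the division in
the fragment):

# Test case derived from the contract: assume(x >= 5) ... assert(y <= 1).
x = 7            # initialise variables: chosen so the pre-condition x >= 5 holds
y = 5 // x       # carry out computation (integer division, as in the fragment)
assert y <= 1    # check condition: the post-condition of the contract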

Program Correctness
Following the preceding discussion we base our notion of correctness on contracts.
We consider two variants in which contracts can be specified:
• Pairs of assume-assert-statements in program fragments and tests. These contracts
are executable and can be evaluated at run-time.
• Pairs of requires-ensures-clauses used for specifying functions. These contracts
are not executable. They are exclusively used for proving that functions satisfy their
contracts. By contrast, assume and assert can be used for proving properties and for their
runtime checking. Properties specified using requires and ensures are more expressive,
permitting statements over infinite value ranges, for instance, which cannot be evaluated at
run-time. We introduce function contracts in the next chapter.
We call the first component of a contract, its assume-statement or requires-clause, a
pre-condition. We call its second component, its assert-statement or ensures-clause, a post-
condition.
We say a program (or function) is correct if it satisfies its contract: Starting in a state
satisfying the pre-condition, it terminates in a state satisfying its post-condition.

Program Verification
To demonstrate that a program is correct we verify it. We consider two principal
methods for verifying programs.
 Proof
Using logical deduction we show that any execution of the program starting in a
state satisfying the pre-condition terminates in a state satisfying its post-condition. In
other words, we show that the program is correct.
 Testing
Executing a program for specific states satisfying the pre-condition, we check
whether on termination a state is reached that satisfies the post-condition. It is up to us to
determine suitable pairs of states, called test cases. This approach does not show that a
program is correct. In practice, we conjecture that a program that has been subjected to a
sufficient number of tests is correct. This kind of reasoning is called induction: from a
collection of tests that confirm correctness for precisely those tests, we infer that this is
the case for all possible tests. Testing is a fallible verification method: it is entirely possible
that all tests that we have provided appear to confirm correctness, but later we find a test
case that refutes the conjecture. Either the program contains an error or the test case is
wrong.

1.6 RELIABILITY VERSUS SAFETY


1.6.1 Software Reliability
Software reliability engineering involves much more than analyzing test results,
estimating remaining faults, and modeling future failure probabilities.
Although in most organizations software test is no longer an afterthought,
management is almost always surprised by the cost and schedule requirements of the test
program, and it is often downgraded in favor of design activities. Often adding a new feature
will seem more beneficial than performing a complete test on existing features. A good
software reliability engineering program, introduced early in the development cycle, will
mitigate these problems by:
• Preparing program management in advance for the testing effort and allowing them
to plan both schedule and budget to cover the required testing.
• Continuous review of requirements throughout the life cycle, particularly for
handling of exception conditions. If requirements are incomplete there will be no testing of
the exception conditions.
• Offering management a quantitative assessment of the dependence of reliability
metrics (software/system availability; software/system outages per day, etc.) on the effort
(time and cost) allotted to testing.
• Providing the most efficient test plan targeted to bringing the product to market in
the shortest time subject to the reliability requirements imposed by the customer or market
expectations.
• Performing continuous quantitative assessment of software/system reliability and
the effort/cost required to improve them by a specified amount.

Reliability Program Tasks:


1. Reliability Allocation
Reliability allocation is the task of defining the necessary reliability of a software
item. The item may be a part of an integrated hardware/software system, may be a relatively
independent software application, or, more and more rarely, a standalone software program.
In any of these cases, the goal is to bring system reliability within either a strict constraint
required by a customer or an internally perceived readiness level, or to optimize reliability
within schedule and cost constraints.

2. Defining and Analyzing Operational Profiles


The reliability of software is strongly tied to the operational usage of an application -
much more strongly than the reliability of hardware. A software fault may lead to a system failure
only if that fault is encountered during operational usage. If a fault is not accessed in a
specific operational mode, it will not cause failures at all. It will cause failure more often if it
is located in code that is part of a frequently used "operation" (An operation is defined as a
major logical task, usually repeated multiple times within an hour of application usage).
Therefore in software reliability engineering we focus on the operational profile of the
software which weighs the occurrence probabilities of each operation. Unless safety
requirements indicate a modification of this approach we will prioritize our testing according
to this profile.
Software engineers have to complete the following tasks to generate a useable
operational profile (a small sketch follows this list):
• Determine the operational modes (high traffic, low traffic, high maintenance,
remote use, local use, etc)
• Determine operation initiators (components that initiate the operations in the
system)
• Determine and group "Operations" so that the list includes only operations that are
significantly different from each other (and therefore may present different faults)
• Determine occurrence rates for the different operations
• Construct the operational profile based on the individual operation probabilities of
occurrence.
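
A hedged sketch of the last two steps, using made-up operations and occurrence rates that
are assumptions for illustration only:

# Assumed occurrence rates (operations per hour) for an example application.
occurrence_rates = {
    "check balance": 540,
    "transfer funds": 180,
    "update profile": 60,
    "generate statement": 20,
}

# Operational profile: each operation weighted by its probability of occurrence.
total = sum(occurrence_rates.values())   # 800 operations per hour in total
operational_profile = {op: rate / total for op, rate in occurrence_rates.items()}
# e.g. {"check balance": 0.675, "transfer funds": 0.225, ...}
# Test effort is then prioritised according to these probabilities.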

3. Test Preparation and Plan


Test preparation is a crucial step in the implementation of an effective software
reliability program. A test plan that is based on the operational profile on the one hand, and
subject to the reliability allocation constraints on the other, will be effective in achieving the
program's reliability goals in the least amount of time and cost.
Software Reliability Engineering is concerned not only with feature and regression
test, but also with load test and performance test. All these should be planned based on the
activities outlined above.
The reliability program will inform and often determine the following test
preparation activities:
• Assessing the number of new test cases required for the current release
• New test case allocation among the systems (if multi-system)
• New test case allocation for each system among its new operations
• Specifying new test cases
• Adding the new test cases to the existing test cases from previous releases

4. Software Reliability Models


Software reliability engineering is often identified with reliability models, in
particular reliability growth models. These models, when applied correctly, are successful at
providing guidance to management decisions such as:
• Test schedule
• Test resource allocation
• Time to market
• Maintenance resource allocation
The application of reliability models to software testing results allows us to infer the
rate at which failures are encountered (depending on usage profile) and, more importantly,
the changes in this rate (reliability growth). The ability to make these inferences depends
critically on the quality of test results. It is essential that testing be performed in such a way
that each failure incident is accurately reported.

1.6.2 Software Safety


As systems and products become more and more dependent on software components
it is no longer realistic to develop a system safety program that does not include the software
elements.

Does software fail?


We tend to believe that well written and well tested safety critical software would
never fail. Experience proves otherwise with software making headlines when it actually
does fail, sometimes critically. Software does not fail the same way as hardware does, and
the various failure behaviors we are accustomed to from the world of hardware are often not
applicable to software. However, software does fail, and when it does, it can be just as
catastrophic as hardware failures.

Safety-critical software
Safety-critical software is very different from both non-critical software and safety-
critical hardware. The difference lies in the massive testing program that such software
undergoes.

What are "software failure modes"?


Software, especially in critical systems, tends to fail where least expected. Software
does not "break" but it must be able to deal with "broken" input and conditions, which often
cause the "software failures". The task of dealing with abnormal/anomalous conditions and
inputs is handled by the exception code dispersed throughout the program. Setting up a test
plan and exhaustive test cases for the exception code is by definition difficult and somewhat
subjective.
Anomalous inputs can be due to:
• failed hardware
• timing problems
• harsh/unexpected environmental conditions
• multiple changes in conditions and inputs that are beyond what the hardware
is able to deal with
• unanticipated conditions during software mode changes
• bad or unexpected user input
Often the conditions most difficult to predict are multiple, coinciding, irregular
inputs and conditions.
Safety-critical software is usually tested to the point that no new critical failures are
observed. This of course does not mean that the software is fault-free at this point, only that
failures are no longer observed in test.
Why are the faults leading to these types of failures overlooked in test? These are faults
that are not tested, for any of the following reasons:
• Faults in code that is not frequently used and therefore not well represented in the
operational profiles used for testing
• Faults caused by multiple anomalous conditions that are difficult to test
• Faults related to interfaces and controls of failed hardware
• Faults due to missing requirements
It is clear why these types of faults may remain outside of a normal, reliability
focused, test plan.

1.7 FAILURES, ERRORS AND FAULTS (DEFECTS)


All terms are used interchangeably although error, mistake and defect are synonyms
in software testing terminology.
When we make an error during coding, we call this a ‘bug’. Hence, error / mistake /
defect in coding is called a bug.


A fault is the representation of an error where representation is the mode of
expression such as data flow diagrams, ER diagrams, source code, use cases, etc. If fault is in
the source code, we call it a bug.
A failure is the result of execution of a fault and is dynamic in nature. When the
expected output does not match with the observed output, we experience a failure.
The program has to execute for a failure to occur. A fault may lead to many failures.
A particular fault may cause different failures depending on the inputs to the program.
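
A small illustrative sketch (not from the original notes) showing how a single fault in the
source code can produce different failures for different inputs:

# Fault: the function is intended to return the average of a list of marks,
# but the divisor is wrong (a bug, i.e. a fault in the source code).
def average(marks):
    return sum(marks) / (len(marks) - 1)   # fault: should divide by len(marks)

print(average([10, 20, 30]))    # runs, but prints 30.0 instead of 20.0 -> a failure
try:
    print(average([40]))        # same fault, different failure for this input
except ZeroDivisionError as exc:
    print("failure observed:", exc)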

1.8. SOFTWARE TESTING PRINCIPLES


Software testing is a process that involves putting software or an application to use
in order to find faults or flaws. Following certain guidelines helps us test the software or
application without creating any problems, and it also saves the test engineers' time and
effort. We will learn about the seven fundamental principles of software testing in this
section.
Let us see the seven different testing principles, one by one:

1. Testing shows the presence of defects:


• The test engineer puts the application through testing to ensure that there
are no bugs or flaws. Testing can only pinpoint the presence of problems in the application or
programme. The main goal of testing is to find any flaws that might prevent the
product from fulfilling the client's needs, using a variety of methods and testing
techniques, since every test should be traceable back to a customer requirement.
• Testing reduces the number of flaws in any programme, but this does not imply that
the application is defect-free, since software may seem to be bug-free despite
extensive testing. If the end-user runs into flaws that were not discovered during testing,
they appear at the point of deployment on the production server.

2. Exhaustive testing is not possible:


It often appears quite difficult to test all the modules and all their features
with effective and ineffective combinations of the input data during the real testing process.
Exhaustive testing would require an endless number of test cases and most of that labour
would be unproductive, so it is not attempted. Instead, we prioritise the testing in accordance
with the significance and risk of the modules, since executing every possible testing
scenario would violate the product timeframes.

3. Early testing:
• Here, early testing refers to the idea that all testing activities should begin in the
early stages of the requirement analysis stage of the software development life cycle in order
to identify the defects. If we find the bugs at an early stage, we can fix them right away,
which could end up costing us much less than if they are discovered in a later phase of the
testing process.
• Since we need the requirement specification documents in order to conduct testing,
requirements that are incorrectly specified can be identified and corrected at this stage
rather than later, during the development process.

4. Defect clustering:
• Defect clustering means that, during the testing procedure, we can identify that most of
the problems are associated with a limited number of modules. There are a number of
explanations for this, including the possibility of intricate modules, difficult code and more.
• According to the Pareto principle, these kinds of software or applications roughly follow
the rule that twenty percent of the modules contain eighty percent of the defects. This allows
us to locate the problematic modules, but it has limitations: if the same tests are run often,
they will not be able to spot any newly introduced flaws.

5. Beware of the pesticide paradox


This is based on the theory that when you use pesticide repeatedly on crops, insects
will eventually build up an immunity, rendering it ineffective. Similarly, with testing, if the
same tests are run continuously then – while they might confirm the software is working –
eventually they will fail to find new issues. It is important to keep reviewing your tests and
modifying or adding to your scenarios to help prevent the pesticide paradox from occurring –
maybe using varying methods of testing techniques, methods and approaches in parallel.

6. Testing is context dependent


Testing is ALL about the context. The methods and types of testing carried out can
completely depend on the context of the software or systems – for example, an e-commerce
website can require different types of testing and approaches than an API application or a
database reporting application. What you are testing will always affect your approach.

7. Absence-of-errors is a fallacy
If your software or system is unusable (or does not fulfill users’ wishes) then it does
not matter how many defects are found and fixed – it is still unusable. So in this sense, it is
irrelevant how issue- or error-free your system is; if the usability is so poor that users are
unable to navigate, and/or it does not match business requirements, then it has failed, despite
having few bugs.
It is important, therefore, to run tests that are relevant to the system’s requirements.
You should also be testing your software with users – this can be done against early
prototypes (at the usability testing phase), to gather feedback that can be used to ensure and
improve usability. Remember, just because there might be a low number of issues, it does not
mean your software is shippable – meeting client expectations and requirements are just as
important as ensuring quality.

1.9. PROGRAM INSPECTIONS


Program or Software inspection refers to a peer review of software to identify bugs or
defects at the early stages of SDLC. It is a formal review that ensures the documentation
produced during a given stage is consistent with previous stages and conforms to pre-
established rules and standards.
Software inspection involves people examining the software product to discover
defects and inconsistencies. Since it doesn’t require system execution, inspection is usually
done before implementation.

Purpose of software inspection:


Software inspection aims to identify software defects and deviations, ensuring the
product meets customer requirements, wants, and needs. In a broader context, the objective
of the inspection is to prevent defective software from flowing down to subsequent operations, thereby preventing loss to the company.
Software inspection is designed to uncover defects or bugs, unlike testing, which is done to make corrections. It is divided into two types:

1. Document inspection: Here, the documents produced for a given phase are inspected,
further focusing on their quality, correctness, and relevance.
2. Code inspection: The code, program source files, and test scenarios are inspected and
reviewed.

Who are the key parties involved?


Moderator: A facilitator who organizes and reports on inspection.
Author: The person who produced the work product under inspection.
Reader: A person who guides the examination of software; more of a paraphraser.
Recorder: An inspector who logs all the defects on the defect list.
Inspector: The inspection team member responsible for identifying the defects.

Software Inspection Process:


Software inspection involves six steps – Planning, Overview, Preparation, Meeting, Rework,
and Follow-up.

1. Planning
The planning phase starts with the selection of a group review team. A moderator
plans the activities performed during the inspection and verifies that the software entry
criteria are met.


2. Overview
The overview phase intends to disseminate information regarding the background of
the product under review. Here, a presentation is given to the inspector with some
background information needed to review the software product properly.

3. Preparation
In the individual preparation phase, the inspector collects all the materials needed for
inspection. Each reviewer studies the project individually and notes the issues they
encounter.

4. Meeting
The moderator conducts the meeting to collect and review defects. Here, the reader
reads through the product line by line while the inspector points out the flaws. All issues are
raised, and suggestions may be recorded.

5. Rework
Based on meeting notes, the author changes the work product.

6. Follow-up
In the last phase, the moderator verifies if necessary changes are made to the software
product, compiling a defect summary report.

1.10. STAGES OF TESTING


1.10.1 UNIT TESTING
A software development approach known as unit testing involves checking the functionality of the smallest testable components, or units, of an application one by one. Unit
tests are carried out by software developers and sometimes by QA personnel. Unit testing's primary goal is to isolate a piece of written code and test whether it functions as intended.
 Unit testing is a part of Test-Driven Development (TDD), a methodical strategy that carefully constructs a product via ongoing testing and refinement. It is also the first level of software testing, carried out before additional testing techniques such as integration testing are applied.
 To make sure a unit doesn't depend on any external code or functionality, unit tests are usually isolated. Unit tests should be run often by teams, whether manually or, more commonly, automatically.

How Unit Tests Work:


 Three steps make up a unit test: Planning, developing test cases and running the test
itself. Developers or QA experts prepare and examine the unit test in the first stage.
They then go on to writing test cases and scripts. The code is tested in the third stage.
 For test-driven development to work, a unit test that fails must be written first. Developers then write code until the test passes and refactor the application. TDD often produces a code base that is clear and predictable (a minimal sketch of this workflow appears after this list).
 To confirm that the code has no dependencies, each test case is run separately in an
isolated environment. The software developer should utilise a testing framework to
record any failed tests and write criteria to validate each test case.
 Writing tests for every line of code would be time-consuming for developers, so tests should focus on the code that influences the behaviour of the programme being created.
 Only those properties that are essential to the operation of the unit being evaluated are included in unit testing.
 This enables developers to make modifications to the source code without immediately worrying about how the changes could affect the operation of other components or the programme as a whole.
 Teams may use integration testing to assess bigger programme components after every unit in the programme operates as effectively and error-free as feasible.
 Unit tests may be run manually or automatically by developers. An intuitive document outlining each stage in the process may be developed for those using a manual technique; however, automated testing is the most popular approach for unit testing.
 Automated methods often create test cases using a testing framework. In addition to
presenting a summary of the test cases, these frameworks are also configured to flag
and report any failed test cases.
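The following is a minimal sketch of that workflow using Python's built-in unittest framework; the Stack class and its tests are hypothetical examples, not code referenced elsewhere in these notes.

```python
import unittest

class Stack:
    """The smallest testable unit in this example: a simple LIFO stack."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

class StackTest(unittest.TestCase):
    def setUp(self):
        # A fresh, isolated unit for every test case: no shared state,
        # no external dependencies.
        self.stack = Stack()

    def test_new_stack_is_empty(self):
        self.assertTrue(self.stack.is_empty())

    def test_push_then_pop_returns_last_item(self):
        self.stack.push("a")
        self.stack.push("b")
        self.assertEqual(self.stack.pop(), "b")

    def test_pop_on_empty_stack_raises(self):
        # In TDD this failing test would be written first, and the guard
        # clause in pop() added afterwards to make it pass.
        with self.assertRaises(IndexError):
            self.stack.pop()

if __name__ == "__main__":
    unittest.main()  # the framework records and reports any failed cases
```

Running the module executes every test case in isolation, and the framework flags and reports any failures, as described above.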

Unit testing advantages:


There are many advantages to unit testing, including the following:
 The sooner an issue is discovered, the less often compound mistakes occur.
 Fixing issues as they arise is often less expensive than waiting until they become serious.
 Debugging procedures are simplified.
 The codebase can be modified easily by developers.
 Code may be transferred to new projects and reused by developers.
Unit testing disadvantages:
While unit testing is integral to any software development and testing strategy, there are some aspects to be aware of. Disadvantages of unit testing include the following:
 Not all bugs will be found during tests.
 Unit testing does not identify integration flaws; it just checks data sets and their
functionality.
 To test one line of code, more lines of test code may need to be developed, which
might require additional time.
 To successfully apply unit testing, developers may need to pick up new skills, such as
how to utilize certain automated software tools.

1.10.2 INTEGRATION TESTING


The second stage of the software testing process, after unit testing, is known as integration testing. Integration testing is the process of inspecting various parts or units of a software project together to reveal flaws and ensure that they function as intended.
The typical software project often comprises multiple software modules, many of which were created by different programmers. Integration testing demonstrates to the team how effectively these dissimilar components interact. Although each component may work correctly on its own, defects can still surface when the components are combined.
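As a rough sketch of the idea, the example below wires together two hypothetical modules, a repository backed by an in-memory SQLite database and a service that depends on it, and checks that they behave correctly when combined; the module names and behaviour are illustrative assumptions only.

```python
import sqlite3
import unittest

class InventoryRepository:
    """Persistence module: talks to the database."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS stock (sku TEXT PRIMARY KEY, qty INTEGER)")

    def set_quantity(self, sku, qty):
        self.conn.execute(
            "INSERT OR REPLACE INTO stock (sku, qty) VALUES (?, ?)", (sku, qty))

    def get_quantity(self, sku):
        row = self.conn.execute(
            "SELECT qty FROM stock WHERE sku = ?", (sku,)).fetchone()
        return row[0] if row else 0

class OrderService:
    """Business module: depends on the repository's interface."""
    def __init__(self, repository):
        self.repository = repository

    def place_order(self, sku, qty):
        available = self.repository.get_quantity(sku)
        if qty > available:
            raise ValueError("insufficient stock")
        self.repository.set_quantity(sku, available - qty)

class OrderInventoryIntegrationTest(unittest.TestCase):
    def test_placing_an_order_reduces_stock_in_the_database(self):
        repo = InventoryRepository(sqlite3.connect(":memory:"))
        service = OrderService(repo)
        repo.set_quantity("SKU-1", 10)

        service.place_order("SKU-1", 3)

        # The assertion crosses the module boundary: the service's behaviour
        # is verified through the repository and the real database.
        self.assertEqual(repo.get_quantity("SKU-1"), 7)

if __name__ == "__main__":
    unittest.main()
```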
Why perform integration testing?
 There are many particular reasons why developers should do integration testing, in
addition to the basic reality that they must test all software programmes before
making them available to the general public.
 Errors might result from incompatibility between programme components.
 Every software module must be able to communicate with the database, and requirements can change as a result of customer feedback; if those changed requirements have not yet been thoroughly tested, integration testing should cover them.
 Every software developer has their own conceptual framework and coding logic. Integration testing guarantees that these diverse elements work together seamlessly.
 Modules often interface with third-party APIs or tools, so we require integration testing to confirm that the data these tools receive is accurate.


 There may be possible hardware compatibility issues.

Advantages of Integration Testing


 Integration testing ensures that every integrated module functions correctly.
 Integration testing uncovers interface errors.
 Testers can initiate integration testing once a module is completed and doesn’t require
waiting for another module to be done and ready for testing.
 Testers can detect bugs, defects and security issues.
 Integration testing provides testers with a comprehensive analysis of the whole
system, dramatically reducing the likelihood of severe connectivity issues.

Challenges of Integration Testing:


Unfortunately, integration testing has some difficulties to overcome as well.
 Questions will arise about how components from two distinct systems produced by
two different suppliers will impact and interact with one another during testing.
 Integrating new and old systems requires extensive testing and possible revisions.
 Integration testing needs testing not just the integration connections but the
environment itself, adding another level of complexity to the process.

1.10.3 SYSTEM TESTING


 System testing is a sort of software testing done on a whole integrated system to
determine if it complies with the necessary criteria.
 Components that have successfully passed integration testing are used as input for system testing. The objective of integration testing is to find any discrepancies between the integrated components.
 System testing finds flaws in the integrated modules as well as the whole system. A
component or system's observed behavior during testing is the outcome of system
testing. System testing is done on the whole system under the guidance of either
functional or system requirement specifications or under the guidance of both.
 The design, behavior and customer expectations of the system are all tested during
system testing. Beyond the parameters specified in the Software Requirements
Specification (SRS), it is used to test the system.
 In essence, system testing is carried out by a testing team that is separate from the development team, which helps to assess the system's quality objectively. Both functional and non-functional testing are performed. System testing is a form of black-box testing. It is carried out after integration testing and before acceptance testing (a minimal end-to-end sketch follows this list).
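The sketch below illustrates what a single system-level test case might look like when the whole application is deployed in a test environment. The base URL, endpoints and credentials are hypothetical assumptions, and the requests library is assumed to be available; a real system test suite would be driven by the functional and system requirement specifications.

```python
import unittest
import requests

BASE_URL = "http://test-env.example.com"  # assumed staging environment

class CheckoutSystemTest(unittest.TestCase):
    def test_user_can_log_in_and_place_an_order(self):
        session = requests.Session()

        # Step 1: exercise the system only through its external interface,
        # exactly as a real client would.
        login = session.post(f"{BASE_URL}/api/login",
                             json={"user": "demo", "password": "demo"})
        self.assertEqual(login.status_code, 200)

        # Step 2: an end-to-end flow that crosses every integrated layer
        # (web tier, business logic, database) at once.
        order = session.post(f"{BASE_URL}/api/orders",
                             json={"sku": "SKU-1", "qty": 1})
        self.assertEqual(order.status_code, 201)
        self.assertIn("order_id", order.json())

if __name__ == "__main__":
    unittest.main()
```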

Process for system testing:


The steps for system testing are as follows:
1. Setup of the test environment: Establish a test environment for higher-quality
testing.
2. Produce test cases: Create test cases for the testing process.
3. Produce test data: Produce the data that will be put to the test.
4. Execute test case: Test cases are carried out after the production of the test case and
the test data.
5. Defect reporting: System flaws are discovered.
6. Regression testing: This technique is used to check that changes made during testing and defect fixing have not introduced side effects elsewhere in the system.


7. Log defects: In this stage, the defects found are logged and then corrected.
8. Retest: If the first test is unsuccessful, a second test is conducted.

Types of System Testing:


Performance testing: A sort of software testing used to evaluate the speed, scalability, stability and dependability of software applications and products.
Load testing: A sort of software testing used to ascertain how a system or software product will behave under high loads (a minimal sketch follows this list).
Stress testing: A sort of software testing carried out to examine the system's resilience under changing loads.
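For load testing, a minimal sketch is shown below: it fires a batch of concurrent requests at one hypothetical endpoint and summarises response times. The URL is an assumption, and in practice dedicated tools such as JMeter or Locust would normally be used instead of a hand-rolled script.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://test-env.example.com/api/search?q=book"  # assumed endpoint

def timed_request(_):
    # Issue one request and measure how long the system takes to respond.
    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    return response.status_code, time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=20) as pool:          # 20 concurrent users
        results = list(pool.map(timed_request, range(200)))   # 200 requests in total

    durations = sorted(d for _, d in results)
    errors = sum(1 for status, _ in results if status >= 500)
    print(f"median: {durations[len(durations) // 2]:.3f}s, "
          f"max: {durations[-1]:.3f}s, server errors: {errors}")
```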

Advantages of system testing:


 The testers don't need to have further programming experience to do this testing.
 "It wilts test the complete product or piece of software, allowing us to quickly find
any faults or flaws that slipped through integration and unit testing.
 The testing environment resembles a real-world production or commercial setting.
 It addresses the technical and business needs of customers and uses various test scripts to verify the system's full operation.
 Following this testing, the product will have practically all potential flaws or faults
fixed, allowing the development team to safely go on to acceptance testing.

Disadvantages of system testing:


 Because this testing involves checking the complete product or piece of software, it
takes longer than other testing methods.
 Since the testing involves testing the complete piece of software, the cost will be
considerable.
 Without a proper debugging tool, the hidden faults won't be discovered.
