Chapter 7: Software Testing
Tadele M.
Topics to be covered
What is testing?
Software Testing Terminologies
Software Testing Life Cycle (STLC)
Test case design
Black Box testing
  Requirement Based Testing
  Equivalence Class Partitioning
  Boundary Value Analysis
White Box testing
  Control Flow Based Testing
Levels of Testing
  Unit, Integration, System and Acceptance Testing
Test Plan
Test case specifications, execution and analysis
Test automation
Limitations of Testing
Debugging
What is testing?
Several definitions:
“Testing is the process of establishing confidence that a
program or system does what it is supposed to.” (Hetzel 1973)
“Testing is any activity aimed at evaluating an attribute or
capability of a program or system and determining that it
meets its required results.” (Hetzel 1983)
“Testing is the process of executing a program or system with
the intent of finding errors.” (Myers 1979)
Testing is not the process of demonstrating that errors are not present.
Background
The software testing process has two distinct goals:
To demonstrate to the developer and customer that the
software meets its requirements; (Validation testing)
To discover faults or defects in the software where its behavior
is incorrect or not in conformance with its specification;
(Defect testing)
Who is involved in testing?
Software Test Engineers and Testers
Test manager
Development Engineers
Quality Assurance Group and Engineers
Software Testing: Terminologies
Error, Mistake, Bug, Fault and Failure
An error is a mistake made by an engineer.
This may be a syntax error, a misunderstanding of the specifications,
or a logical error.
Bugs are coding mistakes/errors.
A fault/defect is the representation of an error, where
representation is the mode of expression, such as narrative text,
data flow diagrams, ER diagrams, source code etc.
A failure is an incorrect output/behavior that is caused by
executing a fault
A particular fault may cause different failures, depending on
how it has been exercised.
Software Testing: Terminologies…
To test software, we develop test cases and test suites.
Test cases are specification of the inputs to test the system and the
expected outputs from the system plus a statement of what is being
tested.
During testing, a program is executed with a set of test cases
A failure during testing shows the presence of defects.
Test/Test Suite: A set of one or more test cases used to
test a module, group of modules, or entire system
Verification: the software should conform to its specification.
o i.e. “Are we building the product right?”
Validation: the software should do what the user really requires.
o i.e. “Are we building the right product?”
Testing = Verification + Validation
Software Testing Life Cycle (STLC)
Software testing has its own life cycle that intersects with every
stage of the SDLC.
Test case design
Test case design involves designing the test cases (inputs and
outputs) used to test the system.
Two approaches to design test cases are
Functional/ behavioral/ black box testing
Structural or white box testing
Black Box testing
It is designed to validate functional requirements without regard
to the internal workings of a program
The test cases are decided solely on the basis of the requirements
or specifications of the program or module
No knowledge of internal design or code required.
The tester only knows the inputs that can be given to the
system and what output the system should give.
Black Box testing…
Black box testing focuses only on functionality
What the program does; not how it is implemented
Advantages
Tester can be non-technical.
Test cases can be designed as soon as the functional specifications are
complete
Disadvantages
The tester can never be sure of how much of the system under test has
been tested.
i.e. chances of having unidentified paths during this testing
The test inputs need to be drawn from a large sample space.
Equivalence Class partitioning
Divide the input space into equivalence classes.
If the software works for a test case from a class, then it is likely
to work for all values in that class.
This can reduce the set of test cases if such equivalence classes can be
identified.
Getting ideal equivalence classes is impossible without looking at
the internal structure of the program.
For robustness, include equivalence classes for invalid inputs as well.
Example: Look at the following taxation table

Income                               Tax Percentage
Up to and including 500              0
More than 500, but less than 1,300   30
1,300 or more, but less than 5,000   40
Equivalence Class partitioning…
Based on the above table, 3 valid and 4 invalid equivalence classes can be
found
Valid Equivalence Classes
Values between 0 and 500, between 500 and 1,300, and between 1,300 and 5,000
Invalid Equivalence Classes
Values less than 0, values of 5,000 or more, no input at all, and inputs
containing letters
From these classes we can generate the following test cases

Test Case ID   Income   Expected Output
1              200      0
2              1000     300
3              3500     1400
4              -4500    Income can't be negative
5              6000     Tax rate not defined
6              (none)   Please enter income
7              98ty     Invalid income
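The taxation example can be made executable, with one representative test case per equivalence class. A minimal Python sketch; the function name compute_tax and the exact error strings are illustrative assumptions, and the no-input case (test case 6) is left to the user-interface layer:

```python
def compute_tax(income):
    """Tax from the slab table above; name and messages are illustrative."""
    if not isinstance(income, (int, float)):
        return "Invalid income"            # e.g. the input "98ty"
    if income < 0:
        return "Income can't be negative"
    if income <= 500:                      # up to and including 500: 0%
        return 0
    if income < 1300:                      # more than 500, less than 1,300: 30%
        return income * 30 / 100
    if income < 5000:                      # 1,300 or more, less than 5,000: 40%
        return income * 40 / 100
    return "Tax rate not defined"          # 5,000 or more: not in the table

# One representative value per equivalence class (valid and invalid):
assert compute_tax(200) == 0
assert compute_tax(1000) == 300
assert compute_tax(3500) == 1400
assert compute_tax(-4500) == "Income can't be negative"
assert compute_tax(6000) == "Tax rate not defined"
assert compute_tax("98ty") == "Invalid income"
```

Any other value from the same class (say, 900 instead of 1000) would exercise the same path, which is why one representative per class suffices.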
Boundary value analysis
It has been observed that programs that work correctly for a set of
values in an equivalence class fail on some special values.
These values often lie on the boundary of the equivalence class.
A boundary value test case is a set of input data that lies on the edge of
an equivalence class of input/output.
Example
Using the example from equivalence class partitioning, generate test cases
that provide 100% BVA coverage.
income < 0 | 0 ≤ income ≤ 500 | 500 < income < 1300 | 1300 ≤ income < 5000 | income ≥ 5000
So, we need 12–14 test cases (2 extra for the no-input and character entries) to
achieve the aforementioned coverage
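The numeric boundary cases can be generated mechanically: for each boundary, test just below, at, and just above it. A sketch (the step of 1 assumes integer incomes, which the slides do not state):

```python
def boundary_values(boundaries, step=1):
    """For each boundary b, test just below, exactly at, and just above it."""
    values = set()
    for b in boundaries:
        values.update((b - step, b, b + step))
    return sorted(values)

# Boundaries of the tax equivalence classes: 0, 500, 1300 and 5000
cases = boundary_values([0, 500, 1300, 5000])
print(cases)   # 12 numeric cases; adding the no-input and character cases gives 14
```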
White box testing
White-box tests can be designed only after component-level
design (or source code) exists.
The logical details of the program must be available.
The aim of white box testing is to exercise different program
structures with the intent of uncovering errors.
Control flow based criteria
Considers the program as a control flow graph. Nodes represent
code blocks, i.e. sets of statements that are always executed together.
An edge (i, j) represents a possible transfer of control from node i
to node j.
Any control flow graph has a start node and an end node
A complete path (or simply a path) is a path whose first node is the start
node and whose last node is an exit node.
Control flow based testing has a number of coverage criteria. These are:
Statement Coverage Criterion
Branch coverage
Linearly Independent paths
(ALL) Path coverage criterion
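A control flow graph is commonly held in code as an adjacency mapping from each node to its successors. A minimal sketch (the node numbering is illustrative, not taken from any figure):

```python
# A control flow graph as an adjacency mapping: node -> successor nodes.
# Node 1 is the start node and node 9 the exit node (numbering is illustrative).
cfg = {
    1: [2],
    2: [3, 8],   # a decision node: two outgoing edges
    3: [4, 8],
    4: [5, 6],
    5: [7],
    6: [7],
    7: [2],      # back edge to node 2: a loop
    8: [9],
    9: [],       # the exit node has no successors
}

def edges(graph):
    """Enumerate the (i, j) edges that, e.g., branch coverage must traverse."""
    return [(i, j) for i, succs in graph.items() for j in succs]

print(len(edges(cfg)))   # 11 edges in this graph
```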
Statement Coverage Criterion
The simplest coverage criterion is statement coverage,
which requires that each statement of the program be executed at least
once during testing.
i.e. the set of paths executed during testing should include all nodes.
This coverage criterion is not very strong, and can leave errors
undetected.
Because it does not require a decision to evaluate to false
when there is no else clause.
E.g.: If A = 3, B = 9:
Number of executed statements = 5
Total number of statements = 7
Statement Coverage = 5/7 × 100 ≈ 71%
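The flow-graph figure behind this example is not reproduced here, so the sketch below uses a hypothetical 7-statement program that gives the same 5/7 result for A = 3, B = 9; the `executed` set simply records which statements ran:

```python
executed = set()

def sample(a, b):
    """A hypothetical 7-statement program (an illustrative assumption)."""
    executed.add(1); total = a + b                # statement 1
    executed.add(2); diff = a - b                 # statement 2
    executed.add(3)                               # statement 3: the decision
    if diff > 0:                                  # False for A = 3, B = 9
        executed.add(4); total = total - diff     # statement 4 (skipped)
        executed.add(5); diff = 0                 # statement 5 (skipped)
    executed.add(6); result = total * 2           # statement 6
    executed.add(7); return result                # statement 7

sample(3, 9)
print(f"Statement coverage: {len(executed)}/7 = {len(executed) / 7 * 100:.0f}%")
```

Because the decision never evaluates to true, statements 4 and 5 go untested, yet the suite still looks "71% done"; this is the weakness the slide describes.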
Branch coverage
A slightly more general coverage criterion is branch coverage,
which requires that each edge in the control flow graph be
traversed at least once during testing.
i.e. branch coverage requires that each decision in the program
be evaluated to true and false values at least once during
testing.
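A sketch of what branch coverage demands: every decision must be seen evaluating to both true and false. The classify function is a toy example invented for illustration:

```python
def classify(income):
    """Toy decision logic used to illustrate branch coverage (illustrative)."""
    if income <= 500:            # decision D1
        return "exempt"
    if income < 1300:            # decision D2
        return "low rate"
    return "high rate"

# Branch coverage requires each decision to evaluate to both True and False:
# 200 -> D1 True; 1000 -> D1 False, D2 True; 3500 -> D1 False, D2 False.
suite = [200, 1000, 3500]
results = [classify(x) for x in suite]
print(results)   # ['exempt', 'low rate', 'high rate']
```

A suite of only {200, 1000} would execute every return statement's predecessor decisions as true but would never drive D2 to false, so it achieves statement-level goals while missing a branch.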
Linearly Independent paths
Prepare test cases that cover all linearly independent paths.
Binary search flow graph
Linearly independent paths are (1, 2, 8, 9), (1, 2, 3, 8, 9), (1, 2, 3, 4,
5, 7, 2, 8, 9), and (1, 2, 3, 4, 6, 7, 2, 8, 9).
Test cases should be derived so that all of these paths are executed.
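As a sketch, the binary search can be written so that each flow-graph node is identifiable; the node numbering in the comments is an assumption, since the flow-graph figure is not reproduced here:

```python
def binary_search(items, key):
    """Binary search over a sorted list; comments mark assumed flow-graph nodes."""
    found = -1
    low, high = 0, len(items) - 1            # node 1
    while found < 0 and low <= high:         # node 2: loop decision
        mid = (low + high) // 2
        if items[mid] == key:                # node 3
            found = mid                      # leaves the loop via node 2
        elif key < items[mid]:               # node 4
            high = mid - 1                   # node 5: search the left half
        else:
            low = mid + 1                    # node 6: search the right half
        # node 7: loop junction, back to node 2
    return found                             # nodes 8, 9

# One test case per linearly independent path:
print(binary_search([], 3))    # path (1, 2, 8, 9): empty list       -> -1
print(binary_search([3], 3))   # path (1, 2, 3, 8, 9): found at once -> 0
print(binary_search([3], 1))   # path (1, 2, 3, 4, 5, 7, 2, 8, 9)    -> -1
print(binary_search([3], 9))   # path (1, 2, 3, 4, 6, 7, 2, 8, 9)    -> -1
```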
(ALL) Path coverage criterion
The objective of path testing is to ensure that the set of test
cases is such that each path through the program is executed at
least once.
The starting point for path testing is a program flow graph that
shows nodes representing program decisions and arcs
representing the flow of control.
Statements with conditions are therefore nodes in the flow
graph.
The difficulty with this criterion is that programs that contain
loops can have an infinite number of possible paths.
Levels of Testing
User needs                → Acceptance testing
Requirement specification → System testing
Design                    → Integration testing
Code                      → Unit testing
The major testing levels are similar for both object-oriented
and procedural-based software systems.
Basically the levels differ in
the element to be tested
responsible individual
testing goals
Different Levels of Testing…
Unit Testing
Element to be tested : individual component (method, class or
subsystem)
Responsible individual: Carried out by developers
Goal: Confirm that the component or subsystem is correctly coded and
carries out the intended functionality
Focuses on defects injected during coding: coding phase sometimes
called “coding and unit testing”
Integration Testing
Element to be tested : Groups of subsystems (collection of subsystems)
and eventually the entire system
Responsible individual: Carried out by developers
Goal: Test the interfaces among the subsystems.
i.e. for problems that arise from component interactions.
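The unit-testing level described above can be sketched with Python's unittest, a JUnit-style framework; the word_count component under test is an illustrative assumption:

```python
import unittest

def word_count(text):
    """The unit (component) under test; an illustrative example."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    """A unit test: written by the developer, exercising one component in isolation."""

    def test_simple_sentence(self):
        self.assertEqual(word_count("to be or not to be"), 6)

    def test_empty_input(self):
        self.assertEqual(word_count(""), 0)

# Run the unit tests for this single component
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

An integration test would look similar but would exercise word_count together with the subsystems it calls or is called by, checking the interfaces between them rather than one component's logic.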
Different Levels of Testing…
System Testing
Element to be tested : The entire system
Responsible individual: Carried out by separate test team
Goal: Determine if the system meets the requirements (functional and
nonfunctional)
Most time consuming test phase
Acceptance Testing
Element to be tested : the system as delivered by the developers
Responsible individual: Carried out by the client. May involve
executing typical transactions on site on a trial basis
Goal: Demonstrate that the system meets/satisfies user needs
Only after successful acceptance testing is the software deployed.
Different Levels of Testing…
If the software has been developed for the mass market
(shrink wrapped software), then testing it for individual users is
not practical or even possible in most cases.
Very often this type of software undergoes two stages of
acceptance test: Alpha and Beta testing
Alpha testing
This test takes place at the developer’s site.
Testing done using simulated data in a lab setting
Developers observe the users and note problems.
Beta testing
the software is sent to a cross-section of users who install it and use it
under real world working conditions with real data.
The users send records of problems with the software to the
development organization
Different Levels of Testing…
Another level of testing, called regression testing, is performed
when some changes are made to an existing system.
Regression testing usually refers to testing activities during
software maintenance phase.
Regression testing
makes sure that the modification has not introduced new
errors
ensures that the desired behavior of the old services is
maintained
Uses some test cases that have been executed on the old
system
Test Plan
Testing usually starts with a test plan and ends with acceptance
testing.
The test plan is a general document that defines the scope and
approach of testing for the whole project
Its inputs are the SRS, project plan, design, code, …
The test plan identifies what levels of testing will be done, what
units will be tested, etc., in the project
It usually contains
Test unit specifications: what units need to be tested separately
Features to be tested: these may include functionality, performance,
usability,…
Approach: criteria to be used, when to stop, how to evaluate, etc
Test deliverables
Schedule and task allocation
Test case specifications
The test plan focuses on the approach; it does not deal with the details of
testing a unit.
Test case specification has to be done separately for each unit.
Based on the plan (approach, features,..) test cases are determined
for a unit
Expected outcome also needs to be specified for each test case
Together the set of test cases should detect most of the defects
Test data are inputs which have
been devised to test the system
i.e. the larger set will detect most of the defects, while a smaller set
cannot catch these defects
Test automation
Testing is an expensive process phase.
Testing workbenches provide a range of tools to reduce the time
required and total testing costs.
Systems such as JUnit, Selenium, TestComplete… support the
automatic execution of tests.
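The kind of automatic execution these tools provide can be sketched in miniature as a table-driven runner; this is an illustrative sketch, not how JUnit or Selenium are implemented:

```python
def run_suite(function, cases):
    """A minimal automated test runner: executes every (inputs, expected) case."""
    failures = 0
    for args, expected in cases:
        actual = function(*args)
        if actual != expected:
            failures += 1
            print(f"FAIL {function.__name__}{args}: expected {expected}, got {actual}")
    print(f"{len(cases) - failures}/{len(cases)} cases passed")
    return failures == 0

# Example: automating a trivial addition function
cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
run_suite(lambda a, b: a + b, cases)   # prints "3/3 cases passed"
```

Once the cases are captured in a table like this, rerunning the whole suite after every change costs nothing, which is exactly what makes automated regression testing affordable.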
Limitations of Testing
Testing has its own limitations.
You cannot test a program completely - Exhaustive testing is impossible
You cannot test every path
You cannot test every valid input
You cannot test every invalid input
We can only test against system requirements
- May not detect errors in the requirements
- Incomplete or ambiguous requirements may lead to inadequate or incorrect
testing.
Time and budget constraints
You will run out of time before you run out of test cases
Even if you do find the last bug, you’ll never know it
Debugging
Debugging is the process of locating and fixing or bypassing bugs
(errors) in computer program code
To debug a program is to start with a problem, isolate the source of the
problem, and then fix it.
Testing does not include efforts associated with tracking down
bugs and fixing them.
Testing finds errors; debugging localizes and repairs them.