Software Testing Strategies

Software Testing
Testing is the process of exercising a
program with the specific intent of finding
errors prior to delivery to the end user.
Who Tests the Software?

developer:
  understands the system
  but will test "gently"
  and is driven by "delivery"

independent tester:
  must learn about the system
  but will attempt to break it
  and is driven by quality
Testing Strategy
[Testing strategy spiral: unit test → integration test → validation test → system test]
These courseware materials are to be used in conjunction with Software Engineering: A Practitioner’s Approach, 6/e
and are provided with permission by R.S. Pressman & Associates, Inc., copyright © 1996, 2001, 2005 3
Testing Strategy
We begin by ‘testing-in-the-small’ and move
toward ‘testing-in-the-large’
For conventional software
The module (component) is our initial focus
Integration of modules follows
For OO software
our focus when “testing in the small” changes from an
individual module (the conventional view) to an OO
class that encompasses attributes and operations and
implies communication and collaboration
Strategic Issues
State testing objectives explicitly.
Understand the users of the software and develop a profile
for each user category.
Develop a testing plan that emphasizes “rapid cycle
testing.”
Build "robust" software that is designed to test itself.
Use effective formal technical reviews as a filter prior to
testing.
Conduct formal technical reviews to assess the test
strategy and test cases themselves.
Develop a continuous improvement approach for the
testing process.
Unit Testing
Unit Test Environment

[Unit-test environment: a driver applies test cases to the module under test and collects RESULTS; stubs stand in for the modules it calls. Test cases target the module's interface, local data structures, boundary conditions, independent paths, and error-handling paths.]
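The driver/stub arrangement above can be sketched in code. Everything here is a hypothetical example (compute_discount and StubRateService are invented names): the driver applies test cases that cover the interface, a boundary condition, and an error-handling path, while the stub replaces a real collaborator of the module under test.

```python
def compute_discount(amount, rate_service):
    """Module under test: applies a discount rate to an amount."""
    if amount < 0:
        raise ValueError("amount must be non-negative")  # error-handling path
    rate = rate_service.get_rate()                       # call into collaborator
    return round(amount * (1 - rate), 2)

class StubRateService:
    """Stub: returns a canned rate instead of querying a real service."""
    def get_rate(self):
        return 0.10

def driver():
    """Driver: applies test cases to the module and collects results."""
    stub = StubRateService()
    results = []
    results.append(compute_discount(100.0, stub))  # typical value
    results.append(compute_discount(0.0, stub))    # boundary condition
    try:
        compute_discount(-1.0, stub)               # error-handling path
    except ValueError:
        results.append("error caught")
    return results

print(driver())  # [90.0, 0.0, 'error caught']
```

Because the stub is under the tester's control, the module can be exercised in isolation before any real collaborator exists.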
Integration Testing Strategies
Options:
• the “big bang” approach
• an incremental construction strategy
Types of Integration Testing Strategies
Object-Oriented Testing
It can be argued that the review of OO analysis and design models is especially
useful because the same semantic constructs (e.g., classes, attributes, operations,
messages) appear at the analysis, design, and code level. Therefore, a problem
in the definition of class attributes that is uncovered during analysis will
circumvent side effects that might occur if the problem were not discovered
until design or code (or even the next iteration of analysis).
OO testing begins by evaluating the correctness and consistency of
the OOA and OOD models.
The testing strategy changes:
the concept of the 'unit' broadens due to encapsulation
integration focuses on classes and their execution across a
'thread' or in the context of a usage scenario
validation uses conventional black-box methods
Test case design draws on conventional methods, but also
encompasses special features.
OOT Strategy
class testing is the equivalent of unit testing
operations within the class are tested
the state behavior of the class is examined
integration applies three different strategies
thread-based testing—integrates the set of classes
required to respond to one input or event
use-based testing—integrates the set of classes required to
respond to one use case
cluster testing—integrates the set of classes required to
demonstrate one collaboration
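Class testing can be sketched with a small example. The Account class below is hypothetical: operation tests check individual methods, while state-behavior tests walk the object through its states and confirm that operations illegal in the current state are rejected.

```python
class Account:
    """Hypothetical class under test; states: empty -> active -> closed."""
    def __init__(self):
        self.state = "empty"
        self.balance = 0

    def deposit(self, amount):
        if self.state == "closed":
            raise RuntimeError("account is closed")
        self.balance += amount
        self.state = "active"

    def close(self):
        self.state = "closed"

# Operation test: does deposit update the balance?
acct = Account()
acct.deposit(50)
assert acct.balance == 50

# State-behavior test: does the class move through the expected states,
# and reject operations that are illegal in the current state?
assert acct.state == "active"
acct.close()
assert acct.state == "closed"
try:
    acct.deposit(10)
    raise AssertionError("deposit should fail on a closed account")
except RuntimeError:
    pass
```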
Smoke Testing
A common approach for creating “daily builds” for
product software
Smoke testing steps:
Software components that have been translated into code are
integrated into a “build.”
A build includes all data files, libraries, reusable modules, and
engineered components that are required to implement one or
more product functions.
A series of tests is designed to expose errors that will keep the
build from properly performing its function.
The intent should be to uncover “show stopper” errors that have
the highest likelihood of throwing the software project behind
schedule.
The build is integrated with other builds and the entire product
(in its current form) is smoke tested daily.
The integration approach may be top down or bottom up.
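The smoke-test steps above can be sketched as a small runner. All names here are hypothetical stand-ins: each smoke test targets a potential "show stopper," and any failure or exception marks the daily build as blocked.

```python
def build_loads_config():
    return True  # stand-in: the build can read its data files

def build_serves_request():
    return True  # stand-in: one core product function works end to end

SMOKE_TESTS = [build_loads_config, build_serves_request]

def run_smoke_tests(tests):
    """Run every smoke test; return the names of the ones that failed."""
    failures = []
    for test in tests:
        try:
            if not test():
                failures.append(test.__name__)
        except Exception:
            failures.append(test.__name__)  # a crash is also a failure
    return failures

failed = run_smoke_tests(SMOKE_TESTS)
print("build OK" if not failed else f"show stoppers: {failed}")
```

Run daily against the current build, a non-empty failure list is the signal to stop integrating and fix the show stopper first.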
High Order Testing
Validation testing
Focus is on software requirements
System testing
Focus is on system integration
Alpha/Beta testing
Focus is on customer usage
Recovery testing
forces the software to fail in a variety of ways and verifies that recovery is properly
performed
Security testing
verifies that protection mechanisms built into a system will, in fact, protect it from
improper penetration
Stress testing
executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume
Performance testing
tests the run-time performance of software within the context of an integrated system
Debugging Techniques
Debugging involves finding and fixing errors
in computer programs. Several techniques
are used, including brute force, testing,
backtracking, induction, and deduction.
Brute force involves trial and error, often
using print statements to observe program
behavior.
Testing involves comparing expected and
observed results, while backtracking traces
steps backward from an error to its origin.
Induction identifies potential causes from
observed results, and deduction uses general
theories to pinpoint the error.
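The brute-force and backtracking techniques can be illustrated with a contrived example: print statements instrument a buggy function so the tester can observe intermediate values, then backtrack from the wrong result to its origin (here, an off-by-one divisor).

```python
def average_buggy(values):
    total = 0
    for i, v in enumerate(values):
        total += v
        print(f"step {i}: total={total}")  # brute force: observe behavior
    return total / (len(values) - 1)       # BUG: off-by-one divisor

def average_fixed(values):
    return sum(values) / len(values)

data = [2, 4, 6]
print("buggy :", average_buggy(data))  # trace shows the totals are right...
print("fixed :", average_fixed(data))  # ...so backtracking points at the divisor
```

The trace shows every running total is correct, so the error must be introduced after the loop, which narrows the search to the final division.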
Testability
Operability—it operates cleanly
Observability—the results of each test case are readily observed
Controllability—the degree to which testing can be automated and
optimized
Decomposability—testing can be targeted
Simplicity—reduce complex architecture and logic to simplify tests
Stability—few changes are requested during testing
Understandability—of the design
What is a “Good” Test?
A good test has a high probability of finding an error.
A good test is not redundant.
A good test should be "best of breed."
A good test should be neither too simple nor too complex.
Test Case Design
OBJECTIVE to uncover errors
CRITERIA in a complete manner
CONSTRAINT with a minimum of effort and time
Test Case Design Example
Software Testing

[Software testing draws on methods (white-box and black-box) and strategies.]
Control Structure Testing
Condition testing — a test case design method that exercises the
logical conditions contained in a program module
Data flow testing — selects test paths of a program according to
the locations of definitions and uses of variables in the program
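Condition testing can be sketched with a hypothetical example: the compound condition in can_ship is exercised so that each simple condition takes both truth values, including the boundary of the numeric comparison.

```python
def can_ship(in_stock, weight_kg):
    # compound condition under test: two simple conditions joined by 'and'
    return in_stock and weight_kg <= 30

# Each case flips one simple condition at a time.
cases = [
    (True,  10, True),   # both conditions true
    (False, 10, False),  # first condition false
    (True,  31, False),  # second condition false (boundary + 1)
    (True,  30, True),   # second condition at its boundary
]
for in_stock, weight, expected in cases:
    assert can_ship(in_stock, weight) == expected
```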
Loop Testing: Simple Loops
Minimum conditions—Simple Loops
1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop, where m < n
5. (n-1), n, and (n+1) passes through
the loop
where n is the maximum number of allowable passes
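The minimum conditions for a simple loop can be exercised directly. sum_first is a hypothetical loop whose pass count we control, with n the maximum number of allowable passes; the test set covers zero, one, two, m < n, n-1, n, and the out-of-bounds n+1 case.

```python
def sum_first(values, passes):
    total = 0
    for i in range(passes):  # loop under test; 'passes' sets the iteration count
        total += values[i]
    return total

values = list(range(1, 11))  # n = 10 allowable passes
n = len(values)

# Simple-loop pass counts: skip, one, two, m < n, n-1, n.
for p in [0, 1, 2, 5, n - 1, n]:
    expected = p * (p + 1) // 2  # sum of 1..p
    assert sum_first(values, p) == expected

# The (n+1)-pass case should trip the loop's bound:
try:
    sum_first(values, n + 1)
    out_of_bounds_detected = False
except IndexError:
    out_of_bounds_detected = True
assert out_of_bounds_detected
```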
Loop Testing: Nested Loops
Nested Loops
1. Start at the innermost loop. Set all outer loops to their
minimum iteration parameter values.
2. Test the min+1, typical, max-1, and max values for the
innermost loop, while holding the outer loops at their
minimum values.
3. Move out one loop and set it up as in step 2, holding all
other loops at typical values. Continue this step until
the outermost loop has been tested.
Concatenated Loops
If the loops are independent of one another
then treat each as a simple loop
else treat as nested loops*
endif

*for example, when the final loop counter value of loop 1 is
used to initialize loop 2.
Black-Box Testing
How is functional validity tested?
How is system behavior and performance tested?
What classes of input will make good test cases?
Is the system particularly sensitive to certain
input values?
How are the boundaries of a data class isolated?
What data rates and data volume can the system
tolerate?
What effect will specific combinations of data have
on system operation?
Equivalence Partitioning
[Input domain: user queries, mouse picks, prompts, FK (function key) input, data, and output formats]
Sample Equivalence Classes
Valid data
user supplied commands
responses to system prompts
file names
computational data
physical parameters
bounding values
initiation values
output data formatting
responses to error messages
graphical data (e.g., mouse picks)
Invalid data
data outside bounds of the program
physically impossible data
proper value supplied in wrong place
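Equivalence partitioning can be sketched for a hypothetical command handler: one representative value is chosen per class (valid command, value outside bounds, impossible type) instead of testing every possible input.

```python
VALID_COMMANDS = {"open", "save", "quit"}

def handle_command(cmd):
    """Hypothetical handler: classifies and responds to one user command."""
    if not isinstance(cmd, str):
        return "error: not text"         # invalid class: impossible type
    if cmd not in VALID_COMMANDS:
        return "error: unknown command"  # invalid class: outside bounds
    return f"ok: {cmd}"

# One representative test value per equivalence class:
assert handle_command("save") == "ok: save"               # valid command
assert handle_command("fly") == "error: unknown command"  # outside bounds
assert handle_command(42) == "error: not text"            # impossible type
```

If the handler is correct for one member of a class, the partitioning argument is that it is very likely correct for the rest of that class.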
Comparison Testing
Used only in situations in which the reliability of
software is absolutely critical (e.g., human-rated
systems)
Separate software engineering teams develop
independent versions of an application using the same
specification
Each version can be tested with the same test data to
ensure that all provide identical output
Then all versions are executed in parallel with real-time
comparison of results to ensure consistency
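Comparison testing can be sketched with a trivial specification: two independently written versions of a median function are run on the same test data, and any disagreement flags a defect in at least one version.

```python
import statistics

def median_v1(values):
    """Version 1: sort, then index."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def median_v2(values):
    """Version 2: independently written against the same spec."""
    return statistics.median(values)

test_data = [[1, 3, 2], [4, 1, 3, 2], [7], [5, 5, 5, 5]]
mismatches = [d for d in test_data if median_v1(d) != median_v2(d)]
print("all versions agree" if not mismatches else f"disagree on: {mismatches}")
```

Agreement does not prove correctness (both versions could share a misreading of the spec), which is why comparison testing supplements rather than replaces other methods.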
Testing Methods
Fault-based testing
The tester looks for plausible faults (i.e., aspects of the
implementation of the system that may result in defects). To
determine whether these faults exist, test cases are designed to
exercise the design or code.
Class Testing and the Class Hierarchy
Inheritance does not obviate the need for thorough testing of all
derived classes. In fact, it can actually complicate the testing
process.
Scenario-Based Test Design
Scenario-based testing concentrates on what the user does, not
what the product does. This means capturing the tasks (via use-
cases) that the user has to perform, then applying them and their
variants as tests.
OOT Methods: Partition Testing
reduces the number of test cases required to test a
class in much the same way as equivalence
partitioning for conventional software
state-based partitioning
categorize and test operations based on their ability to
change the state of a class
attribute-based partitioning
categorize and test operations based on the attributes
that they use
category-based partitioning
categorize and test operations based on the generic
function each performs
Testing Patterns
Pattern name: pair testing
Abstract: A process-oriented pattern, pair testing describes a
technique that is analogous to pair programming (Chapter 4) in
which two testers work together to design and execute a series of
tests that can be applied to unit, integration or validation testing
activities.
Pattern name: separate test interface
Abstract: There is a need to test every class in an object-oriented
system, including “internal classes” (i.e., classes that do not
expose any interface outside of the component that uses them).
The separate test interface pattern describes how to create “a
test interface that can be used to describe specific tests on
classes that are visible only internally to a component.” [LAN01]
Pattern name: scenario testing
Abstract: Once unit and integration tests have been conducted,
there is a need to determine whether the software will perform in
a manner that satisfies users. The scenario testing pattern
describes a technique for exercising the software from the user’s
point of view. A failure at this level indicates that the software
has failed to meet a user visible requirement. [KAN01]