RNSIT Notes - Software Testing
CHANNASANDRA, BANGALORE - 98
SOFTWARE TESTING
NOTES FOR 8TH SEMESTER INFORMATION SCIENCE
PREPARED BY
DIVYA K NAMRATHA R
1RN09IS016 1RN09IS028
8th Semester 8th Semester
Information Science Information Science
divya.1rn09is016@gmail.com namratha.1rn09is028@gmail.com
SPECIAL THANKS TO
ANANG A BNMIT & CHETAK M - EWIT
TEXT BOOKS:
FOUNDATIONS OF SOFTWARE TESTING Aditya P Mathur, Pearson Education, 2008
SOFTWARE TESTING AND ANALYSIS: PROCESS, PRINCIPLES AND TECHNIQUES Mauro Pezze, Michal Young, John Wiley and Sons, 2008
These notes are circulated at the reader's own risk. Nobody can be held responsible if anything in them is wrong, or if improper or insufficient information is provided.
CONTENTS:
UNIT 1
BASICS OF SOFTWARE TESTING - 1
ERRORS AND TESTING
Humans make errors in their thoughts, in their actions, and in the products that might result from
their actions.
Humans can make errors in any field.
Ex: in observation, in speech, in medical prescription, in surgery, in driving, in sports, in love, and similarly even in software development.
Example:
o An instructor administers a test to determine how well the students have understood what
the instructor wanted to convey
o A tennis coach administers a test to determine how well the understudy makes a serve
The programmer might misinterpret the requirements and consequently write incorrect code. Upon execution,
the program might display behaviour that does not match with the expected behaviour, implying thereby that a
failure has occurred.
A fault in the program is also commonly referred to as a bug or a defect. The terms error and bug are by far the most common ways of referring to something wrong in the program text that might lead to a failure. Faults are sometimes referred to as defects.
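A tiny illustration (not from the text) of this error-fault-failure chain in Python: the programmer's error leaves a fault in the code, and the fault surfaces as a failure only on some inputs.

    # Hypothetical example: a fault that causes a failure only on some inputs.
    def max_of_two(a, b):
        # Fault: the returns are swapped, the result of an error made while
        # translating the requirement "return the larger of the two values".
        if a > b:
            return b  # should be 'return a'
        return a      # should be 'return b'

    print(max_of_two(3, 3))  # 3 -- matches expectation, no failure observed
    print(max_of_two(5, 2))  # 2 -- expected 5, so a failure has occurred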
Test Automation:
Testing of complex systems, embedded and otherwise, can be a human intensive task.
Execution of many tests can be tiring as well as error-prone. Hence, there is a tremendous need for test automation.
Most software development organizations automate test-related tasks such as regression testing, graphical user interface testing, and I/O device driver testing.
The process of test automation cannot be generalized.
General-purpose tools for test automation might not be applicable in all test environments.
Ex:
GUI testing tools: Eggplant, Marathon, Pounder
Load and performance testing tools: eLoadExpert, DBMonster, JMeter, Dieseltest, WAPT, LoadRunner, Grinder
SOFTWARE QUALITY
Software quality is a multidimensional quantity and is measurable.
Quality Attributes
These can be divided into static and dynamic quality attributes.
Example: a poorly documented piece of code will be harder to understand and hence difficult to modify. A poorly structured piece of code might be harder to modify and difficult to test.
Reliability:
It refers to the probability of failure-free operation.
Correctness:
Refers to the correct operation and is always with reference to some artefact.
For a tester, correctness is w.r.t. the requirements.
For a user, correctness is w.r.t. the user manual.
Completeness:
Refers to the availability of all the features listed in the requirements or in the user manual.
An incomplete software product is one that does not fully implement all features required.
Consistency:
Refers to adherence to a common set of conventions and assumptions.
Ex: all buttons in the user interface might follow a common color-coding convention.
Usability:
Refers to the ease with which an application can be used. This is an area in itself and there exist techniques for usability testing.
Psychology plays an important role in the design of techniques for usability testing.
Usability testing is testing done by the application's potential users.
The development organization invites a selected set of potential users and asks them to test the
product.
Users in turn test for ease of use, functionality as expected, performance, safety and security.
Users thus serve as an important source of tests that developers or testers within the organization
might not have conceived.
Usability testing is sometimes referred to as user-centric testing.
Performance:
Refers to the time the application takes to perform a requested task. Performance is considered as a
non-functional requirement.
Reliability:
Software reliability is the probability of failure-free operation of software over a given time interval and under given conditions.
Software reliability can vary from one operational profile to another; an implication is that the same program may appear highly reliable to one user and much less reliable to another.
Alternatively, software reliability is the probability of failure-free operation of software in its intended environment.
The term environment refers to the software and hardware elements needed to execute the application. These elements include the operating system (OS), hardware requirements, and any other applications needed for communication.
If the requirements are complete and unambiguous, it should be possible to determine the set of all possible inputs.
Example (sort program): a request character is entered first, followed by the sequence of integers to be sorted; the sequence is terminated with a period.
Definition (correctness): a program is considered correct if it behaves as expected on each element of its input domain.
Example test plan: a test plan considers items such as the method used for testing, the method for evaluating the adequacy of test cases, and the method to determine whether a program has failed.
Test plan for sort:
The sort program is to be tested to meet the requirements given in the example above.
1. Execute the program on at least two input sequences, one with 'A' and the other with 'D' as request characters.
2. Execute the program on an empty input sequence.
3. Test the program for robustness against erroneous input, such as 'R' typed in as the request character.
4. All failures of the test program should be recorded in a suitable file using the company failure report form.
State vector: collecting the current values of program variables into a vector known as the state vector.
An indication of where the control of execution is at any instant of time can be given by using an identifier associated with the next program statement.
A state sequence diagram can be used to specify the behavioural requirements. This same specification can then be used during testing to check whether the application conforms to the requirements.
Construction of oracles:
Construction of automated oracles, such as one to check a matrix multiplication program or a sort program, requires determination of the I/O relationship. When tests are generated from models such as finite-state machines (FSMs) or statecharts, both inputs and the corresponding outputs are available. This makes it possible to construct an oracle while generating the tests.
Example: consider a program named HVideo that allows one to keep track of home videos. In data entry mode, it displays a screen in which the user types in information about a DVD. In search mode, the program displays a screen into which a user can type some attribute of the video being searched for and set up a search criterion.
To test HVideo we need to create an oracle that checks whether the program functions correctly in data entry and search modes. The input generator generates a data entry request. The input generator then requests the oracle to test whether HVideo performed its task correctly on the input given for data entry.
The oracle uses the input to check whether the information to be entered into the database has been entered correctly or not. The oracle returns a pass or no pass to the input generator.
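The HVideo oracle itself is not shown in the notes. As a rough sketch of the general idea, here is a minimal automated oracle for the sort program discussed earlier; the function name and pass/no-pass protocol are assumptions for illustration.

    # Minimal sketch of an automated oracle built from the I/O relationship of
    # the sort program: 'A' requests ascending order, 'D' descending.
    def sort_oracle(direction, numbers, observed_output):
        expected = sorted(numbers, reverse=(direction == 'D'))
        return "pass" if observed_output == expected else "no pass"

    # The input generator supplies a test, the program under test produces an
    # output, and the oracle reports pass or no pass.
    print(sort_oracle('A', [3, 1, 2], [1, 2, 3]))  # pass
    print(sort_oracle('D', [3, 1, 2], [1, 2, 3]))  # no pass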
TEST METRICS
The term metric refers to a standard of measurement. In software testing, there exist a variety of metrics.
There are four general core areas that assist in the design of metrics: schedule, quality, resources, and size.
Size-related metrics:
Measure the size of various objects, such as the source code and the number of tests in a test suite.
Organizational metrics:
Metrics at the level of an organization are useful in overall project planning and management.
Ex: the number of defects reported after product release, averaged over a set of products developed and
marketed by an organization, is a useful metric of product quality at the organizational level.
Organizational metrics allow senior management to monitor the overall strength of the organization and point to areas of weakness. Thus, these metrics help senior management in setting new goals and planning for the resources needed to realize these goals.
Project metrics:
Project metrics relate to a specific project, for example an I/O device testing project or a compiler project. These are useful in the monitoring and control of a specific project.
1. Actual/planned system test effort is one project metric. Test effort could be measured in terms of tester-man-months.
2. Another project metric is the ratio of the number of successful tests to the total number of tests in the system test phase.
Process metrics:
Every project uses some test process. The big-bang approach is well suited for small single-person projects.
The goal of a process metric is to assess the goodness of the process.
When the test process consists of several phases, such as unit test, integration test, and system test, one can measure how many defects were found in each phase. It is well known that the later a defect is found, the costlier it is to fix.
Cyclomatic complexity
V(G) = E - N + 2p
for a program containing N nodes, E edges, and p connected components (procedures).
A larger value of V(G) implies higher program complexity; such a program is more difficult to understand and test than one with a smaller value.
V(G) values of 5 or less are recommended.
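A small sketch of the formula in code; the graph used below (9 edges, 7 nodes, 1 connected component) is made up purely for illustration.

    # Cyclomatic complexity V(G) = E - N + 2p for a flow graph with E edges,
    # N nodes, and p connected components.
    def cyclomatic_complexity(edges, nodes, components=1):
        return edges - nodes + 2 * components

    v = cyclomatic_complexity(edges=9, nodes=7)
    print(v)  # 4
    print("within recommended limit" if v <= 5 else "high complexity")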
Halstead complexity
The number of errors (B) is estimated from program size (S) and effort (E):
B = 7.6 E^0.667 S^0.333
Testability:
Testability is the degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Two types:
static testability metrics
dynamic testability metrics
Dynamic metrics for testability include various code-based coverage criteria.
Ex: a program for which it is difficult to generate tests that satisfy the statement coverage criterion is considered to have lower testability than one for which it is easier to construct such tests.
UNIT 2
BASICS OF SOFTWARE TESTING - 2
SOFTWARE AND HARDWARE TESTING
There are several similarities and differences between techniques used for testing software and hardware
Software application versus hardware product:
- A software application does not degrade over time: a fault present in the application will remain, and no new faults will creep in unless the application is changed. A hardware product, such as a VLSI chip, might fail over time due to a fault that did not exist at the time the chip was manufactured and tested.
- Built-in self-test (BIST), meant for hardware products, can rarely be applied to software designs and code; when it is, it only detects faults that were present when the last change was made. In hardware, BIST is intended to actually test for the correct functioning of a circuit.
- Software testers generate tests to test for correct functionality; sometimes such tests do not correspond to any general fault model. Hardware testers generate tests based on fault models; e.g., with the stuck-at fault model one can use a set of input test patterns to test whether a logic gate is functioning as expected.
For example: to test whether there is a memory leak in an application, one performs a combination of
stress testing and code inspection
A variety of faults could lead to memory leaks
Hardware testers use a variety of fault models at different levels of abstraction
Example:
o transistor-level faults: low level
o gate-level, circuit-level, function-level faults: higher level
Software testers might or might not use fault models during test generation, even though such models exist.
Mutation testing is a technique based on software fault models
Test domain: a major difference between tests for hardware and software is in the domain of tests.
Tests for VLSI chips, for example, take the form of a bit pattern. For combinational circuits, for example a multiplexer, a finite set of bit patterns will ensure the detection of any fault with respect to a circuit-level fault model.
For software, the domain of a test input is different from that for hardware. Even for the simplest of programs, the domain could be an infinite set of tuples, with each tuple consisting of one or more basic data types such as integers and reals.
Example: a proof of correctness might rest on an incorrect assumption about the input conditions, and incorrect assumptions might be made regarding the components that interface with the program.
Thus, neither verification nor testing is a perfect technique for proving the correctness of a program.
DEFECT MANAGEMENT
Defect Management is an integral part of a development and test process in many software development
organizations. It is a subprocess of the development process. It entails the following:
Defect prevention
Discovery
Recording and reporting
Classification
Resolution
Prediction
Defect Prevention
It is achieved through a variety of processes and tools, such as:
Good coding techniques.
Unit test plans.
Code Inspections.
Defect Discovery
Defect discovery is the identification of defects in response to failures observed during dynamic testing or found during static testing.
It often involves debugging the code under test.
Discovered defects are typically classified by severity, for example high, medium, or low severity.
Example: orthogonal defect classification (ODC) is one defect classification scheme; it measures the types of defects, their frequency, and their location in the development phases and documents.
Resolution
Resolution entails scrutiny of the defects, identifying a fix if needed, implementing the fix, testing the fix, and finally closing the defect, indicating that every recorded defect is resolved prior to release.
Defect Prediction
Organizations often do source code analysis to predict how many defects an application might contain before it enters the testing phase.
Advanced statistical techniques are used to predict defects during the test process.
Tools exist for recording defects, and for computing and reporting defect-related statistics:
o Bugzilla - open source
o FogBugz - commercially available
EXECUTION HISTORY
Execution history of a program, also known as execution trace, is an organized collection of information about
various elements of a program during a given execution. An execution slice is an executable subsequence of
execution history. There are several ways to represent an execution history,
Sequence in which the functions in a given program are executed against a given test input,
Sequence in which program blocks are executed.
Sequence of objects and the corresponding methods accessed for object oriented languages such as Java
An execution history may also include values of program variables.
A complete execution history recorded from the start of an execution until its termination represents a single execution path through the program.
It is also possible to record a partial execution history, in which some program elements, such as blocks or values of variables, are recorded along a portion of the complete path.
STATIC TESTING
Static testing is carried out without executing the application under test.
This is in contrast to dynamic testing that requires one or more executions of the application under test.
It is useful in that it may lead to the discovery of faults in the application, as well as ambiguities and errors in the requirements and other application-related documents, at a relatively low cost.
This is especially so when dynamic testing is expensive.
Static testing is complementary to dynamic testing.
This is carried out by an individual who did not write the code or by a team of individuals.
The test team responsible for static testing has access to the requirements document, the application, and all associated documents such as the design document and user manual.
Team also has access to one or more static testing tools.
A static testing tool takes the application code as input and generates a variety of data useful in the test
process.
WALKTHROUGHS
Walkthroughs and inspections are an integral part of static testing.
A walkthrough is an informal process to review any application-related document.
e.g.:
requirements are reviewed ----> requirements walkthrough
code is reviewed ----> code walkthrough (or peer code review)
A walkthrough begins with a review plan agreed upon by all members of the team.
Advantages:
improves understanding of the application.
INSPECTIONS
Inspection is a more formally defined process than a walkthrough. This term is usually associated with code.
Several organizations consider formal code inspections as a tool to improve code quality at a lower cost
than incurred when dynamic testing is used.
Inspection plan:
i. statement of purpose
ii. work product to be inspected: this includes code and associated documents needed for inspection
iii. team formation, roles, and tasks to be performed
iv. rate at which the inspection task is to be completed
v. data collection forms where the team will record its findings, such as defects discovered, coding standard violations, and time spent on each task
(a) The CFG clearly shows that the definition of x at block 1 is used at block 3 but not at block 5. In fact, the definition of x at block 1 is considered killed due to its redefinition at block 4.
(b) The CFG indicates the use of variable y in block 3. If y is not defined along the path from start to block 3, then there is a data-flow error, as a variable is used before it is defined.
Several such errors can be detected by static analysis tools.
Static analysis tools also compute complexity metrics, which can be used as a parameter in deciding which modules to inspect first.
o The diagram above illustrates the process of model checking. A model, usually finite-state, is extracted from some source. The source could be the requirements or, in some cases, the application code itself.
o One or more desired properties are then coded in a formal specification language. Often, such properties are coded in temporal logic, a language for formally specifying timing properties. The model and the desired properties are then input to a model checker. The model checker attempts to verify whether the given properties are satisfied by the given model.
o For each property, the checker could come up with one of three possible answers:
o the property is satisfied,
o the property is not satisfied,
o or it is unable to determine.
o In the second case, the model checker provides a counterexample showing why the property is not satisfied.
o The third case might arise when the model checker is unable to terminate after an upper limit on the number of iterations has been reached.
o While both model checking and model-based testing use models, model checking uses finite-state models augmented with local properties that must hold at individual states. The local properties are known as atomic propositions, and the augmented models are known as Kripke structures.
Basic Block
Let P denote a program written in a procedural programming language, be it high level, such as C or Java, or low level, such as 80x86 assembly. A basic block, or simply a block, in P is a sequence of consecutive statements with a single entry and a single exit point.
Thus, a block has unique entry and exit points.
Control always enters a basic block at its entry point and exits from its exit point. There is no possibility of an exit or a halt at any point inside the basic block except at its exit point. The entry and exit points of a basic block coincide when the block contains only one statement.
Example: the following program takes two integers x and y and outputs x^y (a reconstruction is sketched below).
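The program itself is missing from the notes; the following is a plausible reconstruction (a sketch, not the original) that computes x^y by repeated multiplication, with comments marking the basic blocks.

    def power(x, y):
        # Block 1: entry -- initialization (single entry, single exit).
        result = 1
        count = y
        # Block 2: the loop condition is a one-statement block, so its entry
        # and exit points coincide.
        while count > 0:
            # Block 3: loop body -- control enters at the top and always
            # exits at the bottom; no halt is possible in between.
            result = result * x
            count = count - 1
        # Block 4: exit -- output.
        return result

    print(power(2, 10))  # 1024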
TYPES OF TESTING
The framework consists of a set of five classifiers that serve to classify testing techniques that fall under the dynamic testing category. Dynamic testing requires the execution of the program under test. Static testing consists of the review and analysis of the program.
Five classifiers of testing:
o C1: source of test generation
o C2: life cycle phase in which testing takes place
o C3: goal of a specific testing activity
o C4: characteristics of the artifact under test
o C5: test process
Interface testing:
Tests are often generated using a component's interface.
The interface itself forms a part of the component's requirements, and hence this form of testing is black-box testing. However, the focus on the interface leads us to consider interface testing in its own right.
Techniques such as
o --->pairwise testing
o --->interface mutation
Pairwise testing:
A set of values for each input is obtained from the component's requirements.
Interface mutation:
The interface itself, such as a function coded in C or a CORBA component written in an IDL, serves to extract the information needed to perform interface mutation.
o pairwise testing is a black-box technique
o interface mutation is a white-box technique
Ad-hoc testing:
In ad-hoc testing, a tester generates tests from the requirements, but without the use of any systematic method.
Random testing:
Random testing uses a systematic method to generate tests. Generation of tests using random testing requires a model of the input space, from which tests are then sampled at random.
Spiral testing:
The term spiral testing is not to be confused with the spiral model, though the two are similar in that both can be visually represented as a spiral of activities.
In spiral testing, the sophistication of test activities increases with the stages of an evolving prototype.
In the early stages, when a prototype is used to evaluate how an application must evolve, one focuses on
test planning. The focus at this stage is on how testing will be performed in the remainder of the project.
Subsequent iterations refine the prototype based on a more precise set of requirements.
Further test planning takes place and unit & integration tests are performed.
In the final stage, when the requirements are well defined, testers focus on system and acceptance testing.
Agile testing:
Agile testing involves, in addition to the usual steps such as test planning, test design, and test execution, practices promoted by agile development.
Agile testing promotes the following ideas:
Include testing-related activities throughout a development project, starting from the requirements phase.
Work collaboratively with the customer, who specifies requirements in terms of tests.
Testers and developers must collaborate with each other rather than serve as adversaries.
Test often and in small chunks.
Reliability in the figure refers to the probability of failure-free operation of the application under test in its intended environment.
The true reliability differs from the estimated reliability in that the latter is an estimate of the application reliability obtained by using one of the many statistical methods.
For confidence:
o 0 indicates the lowest possible confidence
o 1 the highest possible confidence
Similarly,
o 0 indicates the lowest possible true reliability
o 1 the highest possible true reliability
Saturation region:
-> Assume application A is in the system test phase.
-> The test team needs to generate tests, set up the test environment, and run A against the tests.
1. Assume that the tests are generated using a suitable test generation method (TGA1) and that each test either passes or fails.
2. If we measure the test effort as the combined effort of testing, debugging, and fixing the errors, the true reliability increases as shown in the figure.
False sense of confidence:
This false sense of confidence is due to the lack of discovery of new faults, which in turn is due to the inability of the tests generated using TGA1 to exercise the application code in ways significantly different from what has already been exercised.
Thus, in the saturation region, the robust states of the application are being exercised, perhaps
repeatedly, whereas the faults lie in the other states.
Reducing delta:
Empirical studies reveal that every single test generation method has its limitations in that the resulting
test set is unlikely to detect all faults in an application.
The more complex an application, the more unlikely it is that tests generated using any given method
will detect all faults.
This is one of the prime reasons why testers use, or must use, multiple techniques for test generation.
Impact on test process:
A knowledge and application of the saturation effect are likely to be of value to any test team while designing and implementing a test process.
UNIT 3
TEST GENERATION FROM REQUIREMENTS-1
INTRODUCTION
A requirement specification can be informal, rigorous, formal, or a mix of these three approaches.
The more formal the specification, the higher are the chances of automating the test generation process.
The following figure shows a variety of test generation techniques.
Often, high-level designs are also considered as a part of the specification.
Requirements serve as a source for the identification of the input domain of the application to be developed.
A variety of test generation techniques are available to select a subset of the input domain to serve as a test set against which the application will be tested.
EQUIVALENCE PARTITIONING
Test selection using equivalence partitioning allows a tester to subdivide the input domain into a relatively small number of sub-domains, say N > 1 (refer to fig. (a)), which are disjoint. Each subset is known as an equivalence class.
The equivalence classes are created assuming that the program under test exhibits the same behavior on all elements, that is tests, within a class.
One test is selected from each equivalence class. Even when the equivalence classes created by two testers are identical, the tests they select may differ.
Fault targeted
The entire set of inputs can be divided into at least two subsets:
one containing all expected (E) or legal inputs,
the other containing all unexpected (U) or illegal inputs.
E and U are further divided into subsets (refer to the figure below).
Example: suppose an application A is required to accept integers in the range [1, 2, 3, ..., 120].
The set of input values is now divided into E and U:
E = [1, 2, ..., 120]; U = the rest.
Furthermore, E is subdivided into [1, 2, ..., 61] (according to requirement R1) and [62, 63, ..., 120] (according to requirement R2).
Invalid inputs below 1 and above 120 are to be treated differently, leading to the subdivision of U into two categories.
Tests selected using the equivalence partitioning technique aim at targeting faults in A w.r.t. inputs in any of the four regions.
begin
  string w, f;
  input(w, f);
  if (!exists(f)) { raise exception; return(0); }
  if (length(w) == 0) { return(0); }
  return(getcount(w, f));
end
Using the partitioning method, we obtain the following equivalence classes:
E1: consists of pairs (w, f) where w is a string and f denotes a file that exists.
E2: consists of pairs (w, f) where w is a string and f denotes a file that does not exist.
[Table: equivalence classes with sample values of w and f]
Note that the number of equivalence classes without any knowledge of the program code is 2, whereas with knowledge of the partial code it is 6.
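As a hedged sketch, one test can be selected from each class; the file names below are hypothetical placeholders, and only four of the six classes obtained with partial code knowledge are illustrated.

    # One test per equivalence class for the word-count routine above.
    tests = {
        "E1": ("hello", "exists.txt"),   # w non-empty, f exists
        "E2": ("hello", "missing.txt"),  # w non-empty, f does not exist
        "E3": ("", "exists.txt"),        # w empty, f exists (from partial code)
        "E4": ("", "missing.txt"),       # w empty, f does not exist
    }
    for name, (w, f) in tests.items():
        print(name, repr(w), f)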
Equivalence classes based on program output
Quest 1: does the program ever generate a 0?
Quest 2: what are the maximum and minimum possible values of the output?
These two questions lead to the following equivalence classes:
E1: output value v is 0
E2: output value v is the maximum possible
E3: output value v is the minimum possible
E4: all other output values
Multidimensional partitioning
Another way is to consider the input domain I as the set product of the input variables and define a relation on I.
This procedure creates one partition consisting of several equivalence classes.
We refer to this method as multidimensional equivalence partitioning.
Example: consider an application that requires two integer inputs x and y. Each of these inputs is expected to lie in the following ranges: 3 <= x <= 7 and 5 <= y <= 9.
For unidimensional partitioning, we apply the partitioning guidelines to x and y individually. This leads to six equivalence classes:
E1: x < 3, E2: 3 <= x <= 7, E3: x > 7 (based on x)
E4: y < 5, E5: 5 <= y <= 9, E6: y > 9 (based on y)
For multidimensional partitioning, we consider the input domain to be the set product X x Y. This leads to 9 equivalence classes, as sketched below.
Figure: geometric representation of equivalence classes derived using unidimensional partitioning based on x and y in (a) and (b) respectively, and using multidimensional partitioning as in (c)
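A small sketch contrasting the two styles, assuming the ranges 3 <= x <= 7 and 5 <= y <= 9 used above.

    from itertools import product

    x_classes = ["x<3", "3<=x<=7", "x>7"]  # E1, E2, E3
    y_classes = ["y<5", "5<=y<=9", "y>9"]  # E4, E5, E6

    # Unidimensional partitioning: 3 + 3 = 6 classes, considered separately.
    print(x_classes + y_classes)

    # Multidimensional partitioning: the set product X x Y gives 3 * 3 = 9
    # equivalence classes, one per combination.
    for cls in product(x_classes, y_classes):
        print(cls)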
Selection of option c forces the BCS to examine variable V. If V is set to GUI, the operator is asked to enter one of the three commands via the GUI. However, if V is set to file, the BCS obtains the command from a command file.
The command file may contain any one of the three commands, together with the value of the temperature to be changed if the command is temp. The file name is obtained from the variable F.
Inputs for the boiler-control software: V and F are environment variables. Values of cmd (command) and tempch (temperature change) are input via the GUI or a data file depending on V. F specifies the data file; its value is either f_valid or f_invalid.
Discard infeasible equivalence classes:
Note that the GUI requests the amount by which the boiler temperature is to be changed only when the operator selects temp for cmd. Thus all equivalence classes that match the following template are infeasible:
{(V, F, {cancel, shut, cinvalid}, tvalid U tinvalid)}
None of the other tests would be able to reveal the missing-code error. By separating the correct and incorrect values of different input variables, we increase the possibility of detecting the missing-code error.
UNIT 5
STRUCTURAL TESTING
OVERVIEW
Testing can reveal a fault only when execution of the faulty element causes a failure.
Control flow testing criteria are defined for particular classes of elements by requiring the execution of all such elements of the program.
Control flow elements include statements, branches, conditions, and paths.
A set of correct program executions in which all control flow elements are exercised does not guarantee the absence of faults.
Execution of a faulty statement may not always result in a failure.
Control flow testing complements functional testing by including cases that may not be identified from specifications alone.
Test suites satisfying control flow adequacy criteria could fail to reveal faults that can be caught with functional criteria.
Example: missing path faults.
Control flow testing criteria are used to evaluate the thoroughness of test suites derived from functional testing criteria, by identifying elements of the program not exercised by them.
Unexecuted elements may be due to natural differences between specification and implementation, or they may reveal flaws in the software or its development process.
Control flow adequacy can be easily measured with automatic tools.
Figure 5.2: Control Flow graph of function cgi decode from previous Figure
Table 5.1: Sample test suites for C function cgi decode from Figure 5.1
STATEMENT TESTING
Statements are nothing but the nodes of the control low graph.
Statement adequacy criterion:
Let T be a test suite for a program P. T satisfies the statement adequacy criterion for P iff, for each statement S of P, there exists at least one test case in T that causes the execution of S.
This is equivalent to stating that every node in the control flow graph model of program P is visited by some execution path exercised by a test case in T.
Statement coverage:
The statement coverage C_statement of T for P is the fraction of statements of program P executed by at least one test case in T:
C_statement = (number of statements executed by at least one test case in T) / (total number of statements in P)
T satisfies the statement adequacy criterion if C_statement = 1.
Basic block coverage: nodes in a control flow graph often represent basic blocks rather than individual statements, and so some standards refer to basic block coverage or node coverage.
Example: suppose the program contains 18 statements grouped into 11 nodes (basic blocks).
A test suite T0 that executes 17 of the 18 statements achieves C_statement = 17/18 = 94% (node coverage 10/11 = 91%), so it does not satisfy the statement adequacy criterion.
A test suite T1 that executes all the statements achieves C_statement = 18/18 = 1, or 100%, and satisfies the criterion.
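A minimal sketch of this computation; the statement identifiers are hypothetical.

    # Statement coverage: the fraction of statements executed by at least one
    # test case in the suite.
    def statement_coverage(executed_per_test, all_statements):
        executed = set().union(*executed_per_test)
        return len(executed & all_statements) / len(all_statements)

    all_stmts = set(range(1, 19))                # 18 statements
    T0 = [set(range(1, 18))]                     # executes 17 of 18
    T1 = [set(range(1, 10)), set(range(9, 19))]  # together executes all 18

    print(statement_coverage(T0, all_stmts))  # 0.944... -> not adequate
    print(statement_coverage(T1, all_stmts))  # 1.0      -> adequate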
Coverage is not monotone with respect to the size of the test suites, i.e., test suites that contain fewer
test cases may achieve a higher coverage than test suites that contain more test cases.
Criteria can be satisfied by many test suites of different sizes.
A test suite with fewer test cases may be more difficult to generate or may be less helpful in debugging.
Designing complex test cases that exercise many different elements of a unit is seldom a good way to optimize a test suite, although it may occasionally be justifiable when there is a large and unavoidable fixed cost (e.g., setting up equipment) for each test case regardless of complexity.
Control flow coverage may be measured incrementally while executing a test suite.
The increment of coverage due to the execution of a specific test case does not measure the absolute efficacy of the test case.
Measures independent of the order of execution may be obtained by identifying independent statements.
Figure 5.3: The control flow graph of function cgi decode, which is obtained from the program of Figure 5.1
BRANCH TESTING
A test suite can achieve complete statement coverage without executing all the possible branches in a
program.
Consider, for example, a faulty program cgi_decode' obtained from program cgi_decode by removing line 41.
In the new program there are no statements following the false branch exiting that node.
The branch adequacy criterion requires each branch of the program to be executed by at least one test case.
Let T be a test suite for a program P. T satisfies the branch adequacy criterion for P iff, for each branch B of P, there exists at least one test case in T that causes the execution of B.
This is equivalent to stating that every edge in the control flow graph model of program P belongs to some execution path exercised by a test case in T.
The branch coverage C_branch of T for P is the fraction of branches of program P executed by at least one test case in T:
C_branch = (number of branches executed by at least one test case in T) / (total number of branches in P)
A test suite T2 that satisfies the branch adequacy criterion would reveal the fault. Intuitively, since traversing all edges of a graph causes all nodes to be visited, test suites that satisfy the branch adequacy criterion for a program P also satisfy the statement adequacy criterion for the same program.
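A classic illustration (not from the text) of the gap between the two criteria: a single test can execute every statement while leaving one branch unexercised.

    def absolute(x):
        if x < 0:
            x = -x
        return x  # reached on both branches

    # T = {-1}: statement coverage is 100%, but the false branch of the 'if'
    # is never taken, so branch coverage is only 50%.
    print(absolute(-1))
    # Adding x = 3 exercises the false branch and achieves branch adequacy.
    print(absolute(3))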
CONDITION TESTING
Branch coverage is useful for exercising faults in the way a computation has been decomposed into
cases. Condition coverage considers this decomposition in more detail, forcing exploration not only of
both possible results of a boolean expression controlling a branch, but also of different combinations of
the individual conditions in a compound boolean expression.
Condition adequacy criteria overcome this problem by requiring different basic conditions of the
decisions to be separately exercised.
Basic condition adequacy criterion: requires each basic condition to be covered
A test suite T for a program P covers all basic conditions of P, i.e., it satisfies the basic condition adequacy criterion, iff each basic condition in P has a true outcome in at least one test case in T and a false outcome in at least one test case in T.
The basic condition coverage C_basic_condition of T for P is the fraction of the total number of truth values assumed by the basic conditions of P during the execution of T:
C_basic_condition = (number of truth values taken by all basic conditions during execution of T) / (2 x number of basic conditions in P)
Basic conditions versus branches
o The basic condition adequacy criterion can be satisfied without satisfying branch coverage.
For example, a test suite T4 may satisfy the basic condition adequacy criterion but not the branch adequacy criterion (the outcome of the decision at line 27 is always false).
o Thus the branch and basic condition adequacy criteria are not directly comparable (neither implies the other).
Branch and condition adequacy criterion: a test suite satisfies the branch and condition adequacy criterion if it satisfies both the branch adequacy criterion and the basic condition adequacy criterion.
A more complete extension that includes both the basic condition and the branch adequacy criteria is the
compound condition adequacy criterion, which requires a test for each possible combination of basic conditions.
For ex: the compound condition at line 27 would require covering the three paths in the following tree
Consider the number of test cases required for compound condition coverage of two Boolean expressions, each with five basic conditions. For the expression a && b && c && d && e, compound condition coverage under short-circuit evaluation requires 6 test cases, since evaluation stops at the first false basic condition; without short-circuit evaluation, all 2^5 = 32 combinations would be required.
MCDC
o An alternative approach that can be satisfied with the same number of test cases for Boolean expressions of a given length, regardless of short-circuit evaluation, is the modified condition adequacy criterion, also known as modified condition/decision coverage or MC/DC.
o Key idea: test important combinations of conditions, avoiding exponential blowup.
o Each basic condition must be shown to independently affect the outcome of each decision.
o MC/DC can be satisfied with N+1 test cases for a decision with N basic conditions, making it an attractive compromise between the number of required test cases and the thoroughness of the test.
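A sketch of the key idea for a hypothetical three-condition decision: for each basic condition we look for a pair of test cases that differ only in that condition yet produce different decision outcomes. Here the four cases (T,F,T), (F,F,T), (F,T,T), (T,F,F) would suffice, matching the N+1 bound.

    from itertools import product

    def decision(a, b, c):  # hypothetical decision with 3 basic conditions
        return (a or b) and c

    for i, name in enumerate("abc"):
        for tc in product([True, False], repeat=3):
            flipped = list(tc)
            flipped[i] = not flipped[i]
            if decision(*tc) != decision(*flipped):
                print(name, "independently affects the outcome:",
                      tc, "vs", tuple(flipped))
                break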
PATH TESTING
Path adequacy criterion:
A test suite T for a program P satisfies the path adequacy criterion iff, for each path p of P, there exists at least one test case in T that causes the execution of p.
This is equivalent to stating that every path in the CFG model of program P is exercised by a test case in T.
Path coverage:
The path coverage C_path of T for P is the fraction of paths of program P executed by at least one test case in T.
Figure 5.4: Deriving a tree from a control flow graph to derive sub-paths for boundary/interior testing. Part (i) is the control flow graph of the C function cgi decode, identical to Figure 14.1 but showing only node identifiers without source code. Part (ii) is a tree derived from part (i) by following each path in the control flow graph up to the first repeated node. The set of paths from the root of the tree to each leaf is the required set of sub-paths for boundary/interior coverage.
Figure 5.6: The control flow graph of C function search with move-to-front feature.
Figure 5.8: The subsumption relation among structural test adequacy criteria
Power and cost of structural test adequacy criteria described earlier can be formally compared using
the subsumes relation.
The relations among these criteria are illustrated in the figure above.
They are divided into two broad categories:
Practical criteria
Theoretical criteria
[Explain more looking at the diagram]
UNIT 7
TEST CASE SELECTION AND ADEQUACY,
TEST EXECUTION
OVERVIEW
The key problem in software testing is selecting and evaluating test cases.
Ideally, we would like an 'adequate' test suite, one that ensures correctness of the product.
Unfortunately, that goal is not attainable.
The difficulty of proving that some set of test cases is adequate in this sense is equivalent to the difficulty of proving that the program is correct. In other words, we could know that a test suite is adequate in this sense only if we could establish correctness without any testing at all.
So, in practice we settle for criteria that identify inadequacies in test suites.
For example, if no test in the test suite executes a particular program statement, we might conclude that the test suite is inadequate to guard against faults in that statement.
Testing Terms
Test case
A test case is a set of inputs, execution conditions, and a pass/fail criterion.
Test obligation
A test obligation is a partial test case specification, requiring some property deemed important to thoroughness.
Test suite
A test suite is a set of test cases. Typically, a method for functional testing is concerned with creating a test
suite. A test suite for a program, system, or individual unit may be made up of several test suites for individual
modules, subsystems or features.
Adequacy criterion
A test adequacy criterion is a predicate that is true (satisfied) or false (not satisfied) of a (program, test suite) pair. Usually a test adequacy criterion is expressed in the form of a rule for deriving a set of test obligations from another artefact, such as a program or specification. The adequacy criterion is then satisfied if every test obligation is satisfied by at least one test case in the suite.
ADEQUACY CRITERIA
An adequacy criterion can be thought of as a set of test obligations. We will use the term test obligation for test specifications imposed by adequacy criteria, to distinguish them from test specifications that are actually used to derive test cases.
Where do test obligations come from?
Functional (black box, specification based): from software specifications.
Structural (white or glass box): from code.
Model based: from a model of the system.
Fault based: from hypothesized faults (common bugs).
A test suite satisfies an adequacy criterion if
all the tests succeed (pass), and
every test obligation in the criterion is satisfied by at least one of the test cases in the test suite.
Example: a statement coverage adequacy criterion is satisfied by a particular test suite for a program if each executable statement in the program is executed by at least one test case in the test suite.
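A rough sketch of an adequacy criterion as a predicate over a (program, test suite) pair: here statement (line) coverage is measured with Python's sys.settrace, and the program under test is hypothetical.

    import sys

    def traced_lines(func, args):
        # Record which source lines of 'func' execute for the given input.
        lines = set()
        def tracer(frame, event, arg):
            if event == "line" and frame.f_code is func.__code__:
                lines.add(frame.f_lineno)
            return tracer
        sys.settrace(tracer)
        try:
            func(*args)
        finally:
            sys.settrace(None)
        return lines

    def sign(x):  # program under test
        if x < 0:
            return -1
        return 1

    suite = [(-5,), (3,)]
    covered = set().union(*(traced_lines(sign, t) for t in suite))
    print("lines executed by the suite:", sorted(covered))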
Satisfiability:
Sometimes no test suite can satisfy a criterion for a given program.
Example: if the program contains statements that can never be executed, then no test suite can satisfy the statement coverage criterion.
Coping with unsatisfiability:
Approach 1: exclude any unsatisfiable obligation from the criterion.
Example: modify statement coverage to require execution only of statements that can be executed.
Approach 2: measure the extent to which a test suite approaches an adequacy criterion.
Example: if a test suite satisfies 85 of 100 obligations, we have reached 85% coverage.
o A coverage measure is the fraction of satisfied obligations.
o Coverage can be a useful indicator.
- Of progress toward a thorough test suite
o Coverage can also be a dangerous seduction
- Coverage is only a proxy for thoroughness or adequacy.
- It is easy to improve coverage without improving a test suite.
- The only measure that really matters is (cost) effectiveness.
COMPARING CRITERIA
Empirical approach: would be based on extensive studies of the effectiveness of different approaches to testing in industrial practice, including controlled studies to determine whether the relative effectiveness of different testing methods depends on the kind of software being tested, the kind of organization in which the software is developed and tested, and a myriad of other potential confounding factors.
Analytical approach: answers to questions of relative effectiveness would describe conditions under which
one adequacy criterion is guaranteed to be more effective than another, or describe in statistical terms their
relative effectiveness.
Analytic comparisons of the strength of test coverage depend on a precise definition of what it means for one criterion to be stronger than another.
A test suite Ta that does not include all the test cases of another test suite Tb may fail to reveal the potential failure exposed by the test cases that are in Tb but not in Ta.
Thus, if we consider only the guarantees that a test suite provides, the only way for one test suite Ta to be stronger than another suite Tb is to include all test cases of Tb plus additional ones.
To compare criteria, then, we consider all the possible ways of satisfying them: criterion A subsumes criterion B if every test suite that satisfies A with respect to a program also satisfies B with respect to the same program.
Empirical studies of particular test adequacy criteria do suggest that there is value in pursuing stronger
criteria, particularly when the level of coverage attained is very high.
Adequacy criteria do not provide useful guarantees for fault detection, so comparing guarantees is not a
useful way to compare criteria
A rule of thumb is that, while test case design involves judgement and creativity, test case generation
should be a mechanical step.
Automatic generation of concrete test cases from more abstract test case specifications reduces the impact of small interface changes in the course of development.
Corresponding changes to the test suite are still required with each program change, but changes to test case specifications are likely to be smaller and more localized than changes to the concrete test cases.
Instantiating test cases that satisfy several constraints may be simple if the constraints are independent, but becomes more difficult to automate when multiple constraints apply to the same item.
SCAFFOLDING
Code developed to facilitate testing is called scaffolding, by analogy to the temporary structures erected
around a building during construction or maintenance.
Scaffolding may include:
Test drivers (substituting for a main or calling program)
Test harnesses (substituting for parts of the deployment environment)
Stubs (substituting for functionality called or used by the software under test)
The purpose of scaffolding is to provide controllability to execute test cases and observability to judge
the outcome of test execution.
Sometimes scaffolding is required simply to make a module executable, but even in incremental development with immediate integration of each module, scaffolding for controllability and observability may be required, because the external interfaces of the system may not provide sufficient control to drive the module under test through test cases, or sufficient observability of the effect.
Example: consider an interactive program that is normally driven through a GUI. Assume that each night the project goes through a fully automated and unattended cycle of integration, compilation, and test execution.
It is necessary to perform some testing through the interactive interface, but it is neither necessary nor efficient to execute all test cases that way. Small driver programs, independent of the GUI, can drive each module through large test suites in a short time.
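A minimal sketch of such scaffolding (all names hypothetical): a stub stands in for the database the module normally uses, and a small driver stands in for the GUI that normally invokes it.

    class DatabaseStub:
        # Stub: replaces the real database, giving the test controllability.
        def __init__(self):
            self.rows = {}
        def insert(self, key, value):
            self.rows[key] = value
        def lookup(self, key):
            return self.rows.get(key)

    def add_video(db, title, year):
        # Module under test, normally invoked from the GUI.
        db.insert(title, year)

    def driver():
        # Driver: runs the module through a test case without the GUI.
        db = DatabaseStub()
        add_video(db, "Home Video 1", 2008)
        assert db.lookup("Home Video 1") == 2008  # observability
        print("test passed")

    driver()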
TEST ORACLES
In practice, the pass/fail criterion is usually imperfect.
A test oracle may apply a pass/fail criterion that reflects only a part of the actual program specification, or is an approximation, and therefore passes some program executions it ought to fail.
Several partial test oracles may be more cost-effective than one that is more comprehensive.
A test oracle may also give false alarms, failing an execution that it ought to pass.
False alarms in test execution are highly undesirable.
The best oracle we can obtain is one that detects deviations from expectation that may or may not be actual failures.
Two types
Comparison based oracle
Fig: a test harness with a comparison based test oracle processes test cases consisting of (program input,
predicted output) pairs.
o With a comparison-based oracle, we need a predicted output for each input.
o The oracle compares the actual output to the predicted output, and reports a failure if they differ.
o It is best suited for a small number of hand-generated test cases, for example handwritten JUnit test cases.
o They are used mainly for small, simple test cases
o Expected outputs can also be produced for complex test cases and large test suites
o Capture-replay testing, a special case in which the predicted output or behavior is preserved
from an earlier execution
o Often possible to judge output or behavior without predicting it
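A minimal comparison-based oracle sketch in this spirit; the program under test and the test data are hypothetical.

    def program_under_test(s):
        return s.strip().lower()

    # Each test case is an (input, predicted output) pair.
    test_cases = [("  Hello ", "hello"),
                  ("WORLD", "world")]

    for given, predicted in test_cases:
        actual = program_under_test(given)
        print("pass" if actual == predicted
              else "FAIL: %r != %r" % (actual, predicted))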
Partial oracle
o Oracles that check results without reference to a predicted output are often partial, in the sense that they can detect some violations of the actual specification but not others.
o They check necessary but not sufficient conditions for correctness.
o A cheap partial oracle that can be used for a large number of test cases is often combined with a more expensive comparison-based oracle that can be used with a smaller set of test cases for which predicted output has been obtained.
o Specifications are often incomplete, so fully automatic derivation of test oracles is impossible in general.
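A sketch of a partial oracle for a sort routine: it checks a necessary but not sufficient condition (the output is in non-decreasing order), so it is cheap to run on many test cases but misses, for example, outputs that are ordered yet have lost elements.

    def partial_sort_oracle(output):
        return all(output[i] <= output[i + 1] for i in range(len(output) - 1))

    print(partial_sort_oracle([1, 2, 2, 9]))  # True: passes the partial check
    print(partial_sort_oracle([3, 1, 2]))     # False: violation detected
    print(partial_sort_oracle([1, 9]))        # True even if elements were lost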
SELF-CHECKS AS ORACLES
An oracle can also be written as self-checks.
- It is often possible to judge correctness without predicting results.
Typically these self-checks are in the form of assertions designed to be checked during execution.
It is generally considered good design practice to make assertions and self-checks free of side effects on program state.
Self-checks in the form of assertions embedded in program code are useful primarily for checking module and subsystem-level specifications rather than all program behaviour.
Devising program assertions that correspond in a natural way to specifications poses two main challenges:
Bridging the gap between concrete execution values and the abstractions used in specifications
Dealing in a reasonable way with quantification over collections of values
Structural invariants are good candidates for self-checks implemented as assertions.
They pertain directly to the concrete data structure implementation.
It is sometimes straightforward to translate quantification in a specification statement into iteration in a program assertion.
A run-time assertion system must manage ghost variables and must ensure that they have no side effects outside assertion checking.
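A small sketch of a structural invariant checked as an assertion self-check; the class and its invariant are hypothetical.

    import bisect

    class SortedList:
        def __init__(self):
            self.items = []

        def _invariant(self):
            # Structural invariant: items are in non-decreasing order.
            # The check has no side effects on program state.
            return all(self.items[i] <= self.items[i + 1]
                       for i in range(len(self.items) - 1))

        def insert(self, x):
            bisect.insort(self.items, x)
            assert self._invariant(), "representation invariant violated"

    s = SortedList()
    for v in [5, 1, 3]:
        s.insert(v)
    print(s.items)  # [1, 3, 5]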
Advantages:
-Usable with large, automatically generated test suites.
Limits:
- Often it is only a partial check.
- It recognizes many or most failures, but not all.