Chapter 3
Regression Testing
WHAT IS REGRESSION TESTING?
When we develop software, we use development testing to obtain confidence in the
correctness of the software. Development testing involves constructing a test plan that
describes how we should test the software and then, designing and running a suite of test
cases that satisfy the requirements of the test plan. When we modify software, we
typically re-test it. This process of re-testing is called regression testing.
Hence, “regression testing is the process of re-testing the modified parts of the software
and ensuring that no new errors have been introduced into previously tested source code
due to these modifications”. Therefore, regression testing tests both the modified source
code and other parts of the source code that may be affected by the change. It serves
several purposes:
Increases confidence in the correctness of the modified program.
Locates errors in the modified program.
Preserves the quality and reliability of the software.
Ensures the software’s continued operation.
We typically think of regression testing as a software maintenance activity; however, we
also perform regression testing during the latter stage of software development. This latter
stage starts after we have developed test plans and test suites and used them initially to test
the software. During this stage of development, we fine-tune the source code and correct
errors in it, hence these activities resemble maintenance activities. The comparison of
development testing and regression testing is given in Table 3.1.
Regression Testing Process
Regression testing is a very costly process and consumes a significant amount of
resources. The question is “how to reduce this cost?” Whenever a failure is experienced, it
is reported to the software team. The team may like to debug the source code to know the
reason(s) for this failure. After identification of the reason(s), the source code is modified
and we generally do not expect the same failure again. In order to ensure this correctness,
we re-test the source code with a focus on modified portion(s) of the source code and also
on affected portion(s) of the source code due to modifications. We need test cases that
target the modified and affected portions of the source code. We may write new test cases,
which may be a ‘time and effort consuming’ activity. We neither have enough time nor
reasonable resources to write new test cases for every failure. Another option is to use the
existing test cases which were designed for development testing and some of them might
have been used during development testing. The existing test suite may be useful and may
reduce the cost of regression testing. As we all know, the size of the existing test suite may
be very large and it may not be possible to execute all tests. The greatest challenge is to
reduce the size of the existing test suite for a particular failure. The various steps are
shown in Figure 3.1. Hence, test case selection for a failure is the key to regression
testing.
Figure 3.1. Steps of regression testing process
Selection of Test Cases
We want to use the existing test suite for regression testing. How should we select an
appropriate number of test cases for a failure? The range is from “one test case” to “all test
cases”. A regression test case selection technique may help us with this selection
process. The effectiveness of the selection technique may decide the selection of the most
appropriate test cases from the test suite. Many techniques have been developed for
procedural and object oriented programming languages. Testing professionals are, however,
reluctant to omit any test case from a test suite that might expose a fault in the modified
program. We consider a program given in Figure 3.2 along with its modified version where
the modification is in line 6 (replacing operator ‘*’ by ‘-’). A test suite is also given in
Table 3.2.
(a) Original program                      (b) Modified program
1.  main ( )                              1.  main ( )
2.  {                                     2.  {
3.    int a, b, x, y, z;                  3.    int a, b, x, y, z;
4.    scanf ("%d, %d", &a, &b);           4.    scanf ("%d, %d", &a, &b);
5.    x = a + b;                          5.    x = a + b;
6.    y = a * b;                          6.    y = a - b;
7.    if (x > y) {                        7.    if (x > y) {
8.      z = x / y;                        8.      z = x / y;
9.    }                                   9.    }
10.   else {                              10.   else {
11.     z = x * y;                        11.     z = x * y;
12.   }                                   12.   }
13.   printf ("z = %d \n", z);            13.   printf ("z = %d \n", z);
14. }                                     14. }

Figure 3.2. (a) Original program with fault in line 6; (b) modified program
Table 3.2. Test suite for program given in Figure 3.2
S. No.   Inputs (a, b)   Execution History
1        2, 1            1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14
2        1, 1            1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14
3        3, 2            1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14
4        3, 3            1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14
In this case, the modified line is line number 6 where ‘a*b’ is replaced by ‘a-b’. All four
test cases of the test suite execute this modified line 6. We may decide to execute all four
tests for the modified program. If we do so, test case 2 with inputs a = 1 and b = 1 will
experience a ‘divide by zero’ problem, whereas others will not. However, we may like to
reduce the number of test cases for the modified program. We may select all test cases
which are executing the modified line. Here, line number 6 is modified. All four test cases
are executing the modified line (line number 6) and hence are selected. There is no
reduction in terms of the number of test cases. If we see the execution history, we find that
test case 1 and test case 2 have the same execution history. Similarly, test case 3 and test
case 4 have the same execution history. We choose any one test case of the same execution
history to avoid repetition. For execution history 1 (i.e. 1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14), if we
select test case 1, the program will execute well, but if we select test case 2, the program
will experience ‘divide by zero’ problem. If several test cases execute a particular
modified line, and all of these test cases reach a particular affected source code segment,
minimization methods require selection of only one such test case, unless they select the
others for coverage elsewhere. Therefore, either test case 1 or test case 2 may have to be
selected. If we select test case 1, we miss the opportunity to detect the fault that test case 2
detects. Minimization techniques may omit some test cases that might expose fault(s) in
the modified program. Hence, we should be very careful in the process of minimization of
test cases and always try to use a safe regression test selection technique (if at all it is
possible). A safe regression test selection technique should select all test cases that can
expose faults in the modified program.
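The selection rule just described can be sketched in a few lines, using the execution histories of Table 3.2 (the code is an illustrative sketch, not from the text):

```python
# Execution history of each test case: the set of line numbers it executes
# (taken from Table 3.2).
execution_history = {
    1: {1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14},
    2: {1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14},
    3: {1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14},
    4: {1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14},
}

def select_modification_traversing(history, modified_lines):
    """Select every test case whose execution history touches a modified line."""
    return sorted(tc for tc, lines in history.items() if lines & modified_lines)

print(select_modification_traversing(execution_history, {6}))
# [1, 2, 3, 4] -- all four test cases traverse the modified line 6
```

With the modification in line 6 no reduction is achieved; had line 11 been modified instead, only test cases 3 and 4 would be selected.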
REGRESSION TEST CASES SELECTION
Test suite design is an expensive process, and the size of a test suite can grow quite large.
Most of the time, running an entire test suite is not feasible, as it requires a significant amount of time
to run all test cases. Many techniques are available for the selection of test cases for the
purpose of regression testing.
Select All Test Cases
This is the simplest technique where we do not want to take any risk. We want to run all
test cases for any change in the program. This is the safest technique, without any risk. A
program may fail many times and every time we will execute the entire test suite. This
technique is practical only when the size of the test suite is small. For any reasonably
large test suite, it becomes impractical to execute all test cases.
Select Test Cases Randomly
We may select test cases randomly to reduce the size of the test suite. We decide how
many test cases are required to be selected depending upon time and available resources.
Once we decide the number, that many test cases are selected randomly. If the
number is large, we may get a good number of test cases for execution and testing may be
of some use. But, if the number is small, testing may not be useful at all. In this technique,
our assumption is that all test cases are equally good in their fault detection ability.
However, in most of the situations, this assumption may not be true. We want to re-test the
source code for the purpose of checking the correctness of the modified portion of the
program. Many randomly selected test cases may not have any relationship with the
modified portion of the program. However, random selection may be better than no
regression testing at all.
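Random selection amounts to sampling a fixed budget of test cases from the suite; a minimal sketch (test case names and the budget are illustrative):

```python
import random

def select_random(test_suite, budget, seed=None):
    """Pick `budget` test cases uniformly at random; no information about the
    modification or fault-detection ability is used (the weakness noted above)."""
    rng = random.Random(seed)
    return rng.sample(test_suite, min(budget, len(test_suite)))

suite = ["TC%d" % i for i in range(1, 101)]
subset = select_random(suite, budget=10, seed=42)
print(len(subset))  # 10
```

The fixed seed only makes the example reproducible; in practice the sample changes on every run.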
Select Modification Traversing Test Cases
We select only those test cases that execute the modified portion of the program and the
portion which is affected by the modification(s). Other test cases of the test suite are
discarded.
Actually, we want to select all those test cases that reveal faults in the modified program.
These test cases are known as fault revealing test cases. There is no effective technique by
which we can find fault revealing test cases for the modified program. This is the best
selection approach, which we want, but we do not have techniques for the same. Another
lower objective may be to select those test cases that reveal the difference in the output of
the original program and the modified program. These test cases are known as
modification revealing test cases. These test cases target that portion of the source code
which makes the output of the original program and the modified program differ.
Unfortunately, we do not have any effective technique to do this. Therefore, it is difficult
to find fault revealing test cases and modification revealing test cases.
The reasonable objective is to select all those test cases that traverse the modified source
code and the source code affected by modification(s). These test cases are known as
modification traversing test cases. It is easy to develop techniques for modification
traversing test cases and some are available too. Out of all modification traversing test
cases, some may be modification revealing test cases and out of some modification
revealing test cases, some may be fault revealing test cases. Many modification traversing
techniques are available but their applications are limited due to the varied nature of
software projects.
We may effectively implement any test case selection technique with the help of a
testing tool. The modified source code and source code affected by modification(s) may
have to be identified systematically and this selected area of the source code becomes the
concern of test case selection. As the size of the source code increases, the complexity also
increases, and need for an efficient technique also increases accordingly.
REDUCING THE NUMBER OF TEST CASES
Test case reduction is an essential activity and we may select those test cases that execute
the modification(s) and the portion of the program that is affected by the modification(s).
We may minimize the test suite or prioritize the test suite in order to execute the selected
number of test cases.
Minimization of Test Cases
We select all those test cases that traverse the modified portion of the program and the
portion that is affected by the modification(s). If we find the selected number very large,
we may still reduce it using a test case minimization technique. These test case
minimization techniques attempt to find redundant test cases. A redundant test case is one
which achieves an objective that has already been achieved by another test case. The
objective may be source code coverage, requirement coverage, variable coverage, branch
coverage, coverage of specific lines of source code, etc. A minimization technique may
further reduce the size of the selected test cases based on some criteria. We should always
remember that any type of minimization is risky and may omit some fault revealing test
cases.
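A common greedy sketch of such a minimization, with line coverage as the objective (illustrative, not a specific published algorithm):

```python
def minimize_suite(coverage):
    """Greedy minimization: repeatedly keep the test case that covers the most
    still-uncovered lines; test cases adding nothing new are redundant."""
    remaining = set().union(*coverage.values())
    kept = []
    while remaining:
        best = max(coverage, key=lambda tc: len(coverage[tc] & remaining))
        gained = coverage[best] & remaining
        if not gained:
            break
        kept.append(best)
        remaining -= gained
    return kept

# Line coverage of the four test cases from Table 3.2.
coverage = {
    1: {1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14},
    2: {1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14},
    3: {1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14},
    4: {1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14},
}
print(minimize_suite(coverage))  # [3, 1] -- test cases 2 and 4 are redundant
```

Note that the minimized suite keeps test case 1 but drops test case 2, missing exactly the ‘divide by zero’ fault discussed earlier; this is the risk of minimization.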
Prioritization of Test Cases
We may indicate the order with which a test case may be addressed. This process is known
as prioritization of test cases. A test case with the highest rank has the highest priority and
the test case with the second highest rank has the second highest priority, and so on.
Prioritization does not discard any test case. The efficiency of the regression testing is
dependent upon the criteria of prioritization. There are two varieties of test case
prioritization i.e. general test case prioritization and version specific test case
prioritization. In general test case prioritization, for a given program with its test suite, we
prioritize the test cases that will be useful over a succession of subsequent modified
versions of the original program without any knowledge of modification(s). In the version
specific test case prioritization, we prioritize the test cases, when the original program is
changed to the modified program, with the knowledge of the changes that have been made
in the original program.
Prioritization guidelines should address two fundamental issues:
(i) What functions of the software must be tested?
(ii) What are the consequences if some functions are not tested?
Every reduction activity has an associated risk. All prioritization guidelines should be
designed on the basis of risk analysis. All risky functions should be tested on higher
priority. The risk analysis may be based on complexity, criticality, impact of failure, etc.
The most important is the ‘impact of failure’ which may range from ‘no impact’ to ‘loss of
human life’ and must be studied very carefully.
The simplest priority category scheme is to assign a priority code to every test case. The
priority code may be based on the assumption that “test case of priority code 1 is more
important than test case of priority code 2.” We may have priority codes as follows:
Priority code 1 : Essential test case
Priority code 2 : Important test case
Priority code 3 : Execute, if time permits
Priority code 4 : Not important test case
Priority code 5 : Redundant test case
There may be other ways for assigning priorities based on customer requirements or
market conditions like:
Priority code 1 : Important for the customer
Priority code 2 : Required to increase customer satisfaction
Priority code 3 : Help to increase market share of the product
We may design any priority category scheme, but a scheme based on technical
considerations always improves the quality of the product and should always be encouraged.
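A priority-code scheme like those above can be applied with a simple stable sort (test case names and codes are illustrative):

```python
# Hypothetical test cases tagged with priority codes; lower code = higher priority.
test_cases = [
    ("TC1", 3), ("TC2", 1), ("TC3", 5), ("TC4", 2), ("TC5", 1),
]

# Python's sort is stable, so test cases sharing a priority code keep
# their original relative order.
ordered = sorted(test_cases, key=lambda tc: tc[1])
print([name for name, _ in ordered])  # ['TC2', 'TC5', 'TC4', 'TC1', 'TC3']
```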
RISK ANALYSIS
Unexpected behaviour of a software program always carries a lot of information and, most of
the time, it disturbs everyone associated with the project. No one likes such unexpected
behaviour, and everyone hopes never to face these situations in their professional career. In
practice, the situation is entirely different: developers do face such unexpected situations
frequently and, moreover, work hard to find solutions to the problems highlighted by these
unexpected behaviours.
We may be able to minimize these situations, if we are able to minimize the risky areas
of the software. Hence, risk analysis has become an important area and in most of the
projects we are doing it to minimize the risk.
What is Risk?
Tomorrow’s problems are today’s risks. Therefore, a simple definition of risk is a problem
that may cause some loss or threaten the success of the project, but, which has not
happened yet. Risk is defined as the “probability of occurrence of an undesirable event and
the impact of occurrence of that event.” Understanding whether an event is really risky
requires an understanding of the potential consequences of the occurrence or non-occurrence
of that event. Risks may delay a project or push it over budget. Risky projects may also not meet
specified quality levels. Hence, there are two things associated with risk as given below:
(i) Probability of occurrence of a problem (i.e. an event)
(ii) Impact of that problem
Risk analysis is a process of identifying the potential problems and then assigning a
‘probability of occurrence of the problem’ value and ‘impact of that problem’ value for
each identified problem. Both of these values are assigned on a scale of 1 (low) to 10
(high). A factor ‘risk exposure’ is calculated for every problem which is the product of
‘probability of occurrence of the problem’ value and ‘impact of that problem’ value. The
risks may be ranked on the basis of their risk exposure. A risk analysis table may be prepared
as given in Table 3.3. These values may be calculated on the basis of historical data, past
experience, intuition and criticality of the problem. We should not confuse this scale with the
mathematical scale of probability values, which runs from 0 to 1. Here, the scale of 1 to 10 is
used for assigning values to both the components of the risk exposure.
The case study of the ‘University Registration System’ given in the last chapter is considered and its
potential problems are identified. Risk exposure factor for every problem is calculated on the
basis of ‘probability of occurrence of the problem’ and ‘impact of that problem’. The risk
analysis is given in Table 3.4.
Table 3.4. Risk analysis table of ‘University Registration System’
S. No.   Potential Problem                        Probability of    Impact of   Risk
                                                  occurrence of     Problem     Exposure
                                                  problem
1        Issued password not available            2                 3           6
2        Wrong entry in students detail form      6                 2           12
3        Wrong entry in scheme detail form        3                 3           9
4        Printing mistake in registration card    2                 2           4
5        Unauthorized access                      1                 10          10
6        Database corrupted                       2                 9           18
7        Ambiguous documentation                  8                 1           8
8        Lists not in proper format               3                 2           6
9                                                 2                 1           2
10       School not available in the database     2                 4           8
The potential problems ranked by risk exposure are 6, 2, 5, 3, 7, 10, 1, 8, 4 and 9.
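This ranking can be reproduced directly from the two values (data taken from Table 3.4):

```python
# (probability of occurrence, impact) for each potential problem in Table 3.4.
problems = {
    1: (2, 3), 2: (6, 2), 3: (3, 3), 4: (2, 2), 5: (1, 10),
    6: (2, 9), 7: (8, 1), 8: (3, 2), 9: (2, 1), 10: (2, 4),
}

# Risk exposure = probability value x impact value; rank in decreasing order.
exposure = {no: p * i for no, (p, i) in problems.items()}
ranked = sorted(exposure, key=lambda no: exposure[no], reverse=True)
print(ranked)  # [6, 2, 5, 3, 7, 10, 1, 8, 4, 9]
```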
Risk Matrix
Risk matrix is used to capture identified problems, estimate their probability of occurrence
with impact and rank the risks based on this information. We may use the risk matrix to
assign thresholds that group the potential problems into priority categories. The risk matrix
is shown in Figure 3.3 with four quadrants. Each quadrant represents a priority category.
Figure 3.3. Threshold by quadrant
The priority categories are defined as:
Priority category 1 (PC-1) = High probability value and high impact value
Priority category 2 (PC-2) = High probability value and low impact value
Priority category 3 (PC-3) = Low probability value and high impact value
Priority category 4 (PC-4) = Low probability value and low impact value
In this case, a risk with a high probability value is given more importance than a problem
with a high impact value. We may change this and decide to give more importance to a high
impact value over a high probability value, as shown in Figure 3.4. Hence, PC-2 and PC-3
will swap, but PC-1 and PC-4 will remain the same.
Figure 3.4. Alternative threshold by quadrant
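The quadrant mapping of Figure 3.3 can be sketched as follows; the threshold of 5 splitting ‘low’ from ‘high’ on the 1 to 10 scale is an assumption:

```python
def priority_category(probability, impact, threshold=5):
    """Map a risk onto the four quadrants of Figure 3.3.
    The threshold separating 'low' from 'high' is an assumed value."""
    high_p = probability > threshold
    high_i = impact > threshold
    if high_p and high_i:
        return "PC-1"
    if high_p:
        return "PC-2"   # high probability, low impact
    if high_i:
        return "PC-3"   # low probability, high impact
    return "PC-4"       # low probability, low impact

print(priority_category(8, 1))   # PC-2 (e.g. ambiguous documentation)
print(priority_category(1, 10))  # PC-3 (e.g. unauthorized access)
```

For the alternative scheme of Figure 3.4, the PC-2 and PC-3 labels are simply swapped.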
There may be situations where we do not want to give importance to any value and
assign equal importance. In this case, the diagonal band prioritization scheme may be more
suitable as shown in Figure 3.5. This scheme is more appropriate in situations where we
have difficulty in assigning importance to either ‘probability of occurrence of the problem’
value or ‘impact of that problem’ value.
We may also feel that high impact value must be given highest priority irrespective of
the ‘probability of occurrence’ value. A high impact problem should be addressed first,
irrespective of its probability of occurrence value. This prioritization scheme is given in
Figure 3.6. Here, the highest priority (PC-1) is assigned to high impact value and for the
other four quadrants; any prioritization scheme may be selected. We may also assign high
priority to high ‘probability of occurrence’ values irrespective of the impact value as
shown in Figure 3.7. This scheme may not be popular in practice. Generally, we are afraid
of the impact of the problem. If the impact value is low, we are not much concerned. In the
risk analysis table (see Table 3.4), ambiguous documentation (S. No. 7) has a high
‘probability of occurrence of problem’ value (8), but a very low impact value (1).
Hence, these faults are not considered as risky as ‘unauthorized access’ (S. No. 5),
where the ‘probability of occurrence’ value is very low (1) and the impact value is very
high (10).
Figure 3.5. Threshold by diagonal quadrant
Figure 3.6. Threshold based on high ‘Impact of Problem’ value
Figure 3.7. Threshold based on high ‘probability of occurrence of problem’ value
After the risks are ranked, the high priority risks are identified. These risks are required
to be managed first and then other priority risks in descending order. These risks should be
discussed in a team and proper action should be recommended to manage these risks. A
risk matrix has become a powerful tool for designing prioritization schemes. Estimating
the probability of occurrence of a problem may be difficult in practice. Fortunately, all that
matters when using a risk matrix is the relative order of the probability estimates (which
risks are more likely to occur) on the scale of 1 to 10. The impact of the problem may be
critical, serious, moderate, minor or negligible. These two values are essential for risk
exposure which is used to prioritize the risks.
CODE COVERAGE PRIORITIZATION TECHNIQUE
This type of prioritization is based on code coverage i.e. test cases are prioritized based on
their code coverage.
Total Statement Coverage Prioritization – In this technique, the total number of
statements covered by a test case is used as a factor to prioritize test cases. For
example, a test case covering 10 statements will be given higher priority than a test
case covering 5 statements.
Additional Statement Coverage Prioritization – This technique involves
iteratively selecting a test case with maximum statement coverage, then selecting a
test case that covers statements that were left uncovered by the previous test case.
This process is repeated till all statements have been covered.
Total Branch Coverage Prioritization – Using total branch coverage as a factor
for ordering test cases, prioritization can be achieved. Here, branch coverage refers
to coverage of each possible outcome of a condition.
Additional Branch Coverage Prioritization – Similar to the additional statement
coverage technique, it first selects a test case with maximum branch coverage and
then iteratively selects a test case that covers branch outcomes that were left
uncovered by the previous test case.
Total Fault-Exposing-Potential Prioritization – Fault-exposing-potential (FEP)
refers to the ability of a test case to expose faults. Statement and branch coverage
techniques do not take into account the fact that some bugs are more easily
detected than others, and that some test cases have more potential to detect bugs
than others.
FEP depends on:
1. Whether test cases cover faulty statements or not.
2. Probability that faulty statement will cause test case to fail.
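The additional statement coverage technique described above can be sketched as a greedy loop (an illustrative version; real tools differ in detail):

```python
def additional_coverage_order(coverage):
    """Order test cases by the number of still-uncovered statements each adds;
    once everything is covered, reset and order the remaining test cases."""
    order, pool = [], dict(coverage)
    uncovered = set().union(*coverage.values())
    while pool:
        best = max(pool, key=lambda tc: len(pool[tc] & uncovered))
        order.append(best)
        uncovered -= pool.pop(best)
        if not uncovered and pool:
            uncovered = set().union(*pool.values())  # reset for leftover tests
    return order

# Statement (line) coverage of the test cases from Table 3.2.
coverage = {
    1: {1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14},
    2: {1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14},
    3: {1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14},
    4: {1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14},
}
print(additional_coverage_order(coverage))  # [3, 1, 4, 2]
```

Test case 3 covers the most statements, test case 1 adds the only uncovered ones (lines 8 and 9), and the remaining test cases are then ordered the same way.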
Object Oriented Testing
What is Object-Oriented Testing?
Object-Oriented Testing (OOT) is a software testing technique that focuses on testing the
individual objects within an object-oriented system. This approach is based on the principles
of object-oriented programming (OOP), which emphasizes encapsulation, inheritance, and
polymorphism. By testing objects in isolation and in combination, OOT helps ensure the
quality and reliability of object-oriented software systems.
Developing Test Cases in Object-Oriented Testing
To effectively develop test cases for object-oriented testing, it’s crucial to consider the
following aspects:
Constructor Testing: Verify that objects are initialized correctly.
Method Testing: Test each method’s functionality, input validation, error handling, and
output correctness.
Attribute Testing: Ensure that attributes are assigned and accessed correctly.
Valid State Testing: Verify that objects can be created in valid states.
Invalid State Testing: Test how objects handle invalid states and recover gracefully.
State Transition Testing: Ensure that objects can transition between different states
correctly.
Inheritance Hierarchy Testing: Verify that inheritance relationships are defined correctly.
Method Overriding Testing: Test that overridden methods behave as expected.
Polymorphism Testing: Ensure that polymorphic behavior is implemented correctly.
Interface Compliance Testing: Verify that objects adhere to the specified interfaces.
Interface Method Testing: Test the functionality of interface methods.
Object Interaction Testing: Test how objects interact with each other, including message
passing and data exchange
Dependency Testing: Verify that objects depend on the correct components and services.
Key Aspects of Object-Oriented Testing
Encapsulation: Test the public interface of objects, ensuring that private implementation
details are not exposed.
Inheritance: Test the inheritance hierarchy, including method overriding and polymorphism.
Polymorphism: Test the ability of objects to take on different forms at runtime.
Abstraction: Test the abstract classes and interfaces to ensure they are well-defined and
implemented correctly.
Object-Oriented Testing Strategy
A comprehensive object-oriented testing strategy should include the following elements:
Test Planning: Identify the specific objects to be tested.
Determine the testing techniques to be used (e.g., unit testing, integration testing, system
testing).
Develop a test plan that outlines the testing scope, objectives, and schedule.
Test Case Design: Create test cases that cover all relevant scenarios, including boundary
value analysis, equivalence partitioning, and decision table testing. Consider the object’s
state, behavior, and interactions with other objects.
Test Execution: Execute test cases systematically, recording the results.
Use appropriate testing tools to automate test execution and reporting.
Test Reporting: Generate detailed test reports that summarize the test results, defects found,
and overall test coverage.
Defect Tracking: Track and manage defects identified during testing. Prioritize and assign
defects to developers for resolution.
Test Automation: Use test automation tools to automate the execution of repetitive test
cases. Automate the generation of test data and test reports.
Continuous Integration and Continuous Delivery (CI/CD): Integrate object-oriented
testing into the CI/CD pipeline to ensure that code changes are tested automatically.
Path Testing in Software Engineering
Path Testing is a method that is used to design the test cases. In the path testing method,
the control flow graph of a program is designed to find a set of linearly independent paths
of execution. In this method, Cyclomatic Complexity is used to determine the number of
linearly independent paths and then test cases are generated for each path.
It gives complete branch coverage but achieves that without covering all possible paths of
the control flow graph. McCabe’s Cyclomatic Complexity is used in path testing. It is a
structural testing method that uses the source code of a program to find every possible
executable path.
Path Testing Process
Control Flow Graph:
Draw the corresponding control flow graph of the program in which all the executable
paths are to be discovered.
Cyclomatic Complexity:
After the generation of the control flow graph, calculate the cyclomatic complexity of
the program using the following formula.
McCabe's Cyclomatic Complexity = E - N + 2P
Where,
E = Number of edges in the control flow graph
N = Number of vertices in the control flow graph
P = Number of connected components (P = 1 for a single program)
Make Set:
Make a set of all the paths according to the control flow graph and calculate cyclomatic
complexity. The cardinality of the set is equal to the calculated cyclomatic complexity.
Create Test Cases:
Create a test case for each path of the set obtained in the above step.
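For the program of Figure 3.2, whose control flow graph has a single if-else decision, the formula can be checked by hand (the grouping of statements into basic-block nodes is an assumption):

```python
# Control flow graph of the Figure 3.2 program, as edge pairs between blocks.
edges = [
    ("start", "then"),   # lines 1-7 -> lines 8-9 (condition true)
    ("start", "else"),   # lines 1-7 -> lines 10-12 (condition false)
    ("then", "end"),     # lines 8-9 -> lines 13-14
    ("else", "end"),     # lines 10-12 -> lines 13-14
]
nodes = {n for e in edges for n in e}

E, N, P = len(edges), len(nodes), 1  # P = 1: one connected program
complexity = E - N + 2 * P
print(complexity)  # 2 independent paths: the true branch and the false branch
```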
State Transition Testing
Software testing is broadly categorized into the white box testing and black box testing. In
black box testing, the output generated from a software is tested against the input data set.
State transition testing is one of the concepts under black box testing. “It is a test design
technique which can be used to derive test cases for software features that go through
multiple states”.
What is Software State Transition Testing?
The software state transition testing is conducted to verify the change in states of a software
under varying inputs. If the circumstances under which an input is provided change, the
state of the software changes accordingly.
The software state transition testing comes under the black box testing and is performed to
verify the characteristics of the software for multiple sets of input conditions that are fed to
it in a specific order. It includes verification of both positive and negative flows. This type
of testing is adopted where various state transitions of the software need to be verified.
Objectives of Software State Transition Testing
The objectives of the state transition testing are listed below –
The state transition testing is done to detect multiple phases of the software. These
phases are interlinked to specific criteria within the software.
The state transition testing assists in creating the state transition diagram that
describes the software’s multiple phases and their changes.
The state transition testing helps to verify if the software moves to proper states
under different circumstances.
The state transition testing validates if the starting and ending states of the software
are correct.
The state transition testing is conducted to check how the software reacts to
unexpected circumstances.
Parts of Software State Transition Testing
The parts of the software state transition testing are listed below −
States − They are denoted by rectangles with rounded corners and describe the conditions
of a software. Each state represents a node in the state transition diagram, where one node
points to a specific state/condition.
Transitions − They are denoted by arrows and are used to indicate changes from one state
to another when the software reacts to an event.
Events − They are labelled above the transition arrows. An event is an activity which
brings in change in the software state.
Actions − They are denoted by message boxes. An action is a characteristic that the
software generates when its state is modified.
Example
Let us take an example of a banking application where we would create a state transition
diagram on login module features listed below –
User enters correct credentials in the first attempt, user logs into the banking system.
User enters correct credentials in the second attempt, user logs into the banking
system.
User enters correct credentials in the third attempt, user logs into the banking
system.
User enters incorrect credentials in all three attempts, user credentials are locked.
The total number of test cases that can be designed from the above state transition diagram
are listed below −
Test Case 1 − User is in the Login Page, and then he enters the correct credentials in the
first attempt, resulting in the successful login.
Test Case 2 − User is in the Login Page, and then he enters the correct credentials in the
second attempt, resulting in the successful login.
Test Case 3 − User is in the Login Page, and then he enters the correct credentials in the
third attempt, resulting in the successful login.
Test Case 4 − User is in the Login Page, and then he enters incorrect credentials in all
three attempts, resulting in the account being locked.
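The login states and transitions can be sketched as a table-driven state machine (the state and event names are illustrative):

```python
# State machine for the login module: (state, event) -> next state.
transitions = {
    ("attempt_1", "correct"): "logged_in",
    ("attempt_1", "incorrect"): "attempt_2",
    ("attempt_2", "correct"): "logged_in",
    ("attempt_2", "incorrect"): "attempt_3",
    ("attempt_3", "correct"): "logged_in",
    ("attempt_3", "incorrect"): "locked",
}

def run(events, state="attempt_1"):
    """Feed a sequence of login attempts through the state machine."""
    for event in events:
        state = transitions.get((state, event), state)
    return state

print(run(["correct"]))                              # logged_in (Test Case 1)
print(run(["incorrect", "incorrect", "incorrect"]))  # locked (Test Case 4)
```

Each test case above is simply one path through this table, starting at the Login Page state.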
Advantages of Software State Transition Testing
The advantages of the software state transition testing are listed below –
The state transition testing helps to create the state transition diagram which gives a
clear picture of all the software states, and to come up with superior communication,
documentation, and understanding of the complete software.
The state transition testing is a good test case design technique and it can
incorporate both the positive and negative flows.
The state transition testing helps to detect defects in the early stages of the software
development life cycle (SDLC) by verifying every transition in states of the
software.
Disadvantages of Software State Transition Testing
The disadvantages of the software state transition testing are listed below –
The state transition diagram may miss some of the states of the software.
State transition testing does not cover all possible combinations of
input data sets.
If any states are missed in the state transition diagram, the result is incomplete test
coverage.
CLASS TESTING
A class is a central construct in object-oriented programming. Every instance of a class is
known as an object. Testing a class is significant and critical in object-oriented
testing, where we want to verify the implementation of the class with respect to its
specifications. If the implementation conforms to the specifications, then we expect every
instance of the class to behave in the specified way. Class testing is similar to the unit
testing of a conventional system. We require stubs and drivers for testing a 'unit', and
sometimes this may demand significant effort. Similarly, classes often cannot be tested in
isolation; they may require additional source code (similar to stubs and drivers) in order to be
tested independently.
How Should We Test a Class?
We want to test the source code of a class. Validation and verification techniques are
equally applicable to test a class. We may review the source code during verification and
may be able to detect a good number of errors. Reviews are very common in practice, but
their effectiveness is heavily dependent on the ability of the reviewer(s).
Another type of testing is validation, where we execute a class against a set of test
cases. This is also common in practice, but significant effort may be required to write test
drivers, and sometimes this effort may exceed the effort of developing the 'unit' under
test. After writing test cases for a class, we must design a test driver to execute each
test case and record its output. The test driver creates one or more
instances of the class to execute a test case. We should always remember that classes are
tested by creating instances and testing the behaviour of those instances.
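The role of a test driver can be sketched as follows. The Counter class and the driver below are hypothetical, used only to show the pattern: the driver creates a fresh instance of the class for each test case, executes it, and records the outcome.

```python
# Hypothetical class under test; the driver below creates instances of it.
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

def test_driver():
    """Creates one instance per test case, executes it, records the result."""
    results = []
    # Test case 1: a fresh instance starts at zero
    c = Counter()
    results.append(("fresh instance is zero", c.value == 0))
    # Test case 2: increment returns the updated value
    c = Counter()
    results.append(("increment returns 1", c.increment() == 1))
    return results

for name, passed in test_driver():
    print(name, "PASS" if passed else "FAIL")
```

Note that each test case gets its own instance, so the behaviour of one test case cannot leak into another; this mirrors the point that classes are tested through the behaviour of their instances.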
Issues Related to Class Testing
How should we test a class? We may test it independently, as a unit, or as part of a group
within a system. The decision depends on the effort required to develop a test
driver, the criticality of the class in the system, the risk associated with it, and so on. If a class has
been developed to be part of a class library, thorough testing is essential even if the cost
of developing a test driver is very high.
Classes should be tested by their developers after developing a test driver. Developers are
familiar with the internal design, complexities and other critical issues of a class under test,
and this knowledge may help in designing test cases and developing test driver(s). A class should
be tested with respect to its specifications. If some unspecified behaviours have been
implemented, we may not be able to test them. We should always be very careful about
additional functionality that is not specified. Generally, we should discourage this
practice; if such functionality has been implemented, it should immediately be
specified in the SRS document. A test plan with a test suite may discipline the testers to follow a predefined
path. This is particularly essential when the developers are also the testers.
Generating Test Cases
One method of generating test cases is from the pre- and post-conditions specified in the
use cases. As discussed earlier, use cases help us generate very effective test cases. The
pre- and post-conditions of every method may be used to generate test cases for class
testing. Every method of a class has a pre-condition that needs to be satisfied before its
execution. Similarly, every method of a class has a post-condition, which is the resultant state
after the execution of the method. Consider the class 'stack' given in the Figure, with two
attributes (x and top) and three methods (Stack(), push(x), pop()).
-------------------
Stack
-------------------
x: integer
top: integer
-------------------
Stack()
push(x)
pop()
-------------------
Figure- Specification for the class stack
We should first specify the pre and post conditions for every operation/method of a
class. We may identify requirements for all possible combinations of situations in which a
pre-condition can hold and a post-condition can be achieved. We may generate test cases
to address what happens when a pre-condition is violated. We consider the stack class
given in Figure and identify the following pre and post conditions of all the methods in the
class:
(i) Stack::Stack()
(a) Pre: true
(b) Post: top=0
(ii) Stack::push(x)
(a) Pre: top<MAX
(b) Post: top=top+1
(iii) Stack::pop()
(a) Pre: top>0
(b) Post: top=top-1
After the identification of pre and post conditions, we may establish logical
relationships between pre and post conditions. Every logical relationship may generate a
test case. We consider the push() operation and establish the following logical
relationships:
1. (pre condition: top<MAX; post condition: top=top+1)
2. (pre condition: not (top<MAX) ; post condition: exception)
Similarly for pop() operation, the following logical relationships are established:
3. (pre condition: top>0; post condition: top=top-1)
4. (pre condition: not (top>0) ; post condition: exception)
We may identify test cases for every operation/method using pre and post conditions.
We should generate test cases when a pre-condition is true and false. Both are equally
important to verify the behaviour of a class. We may generate test cases for push(x) and
pop() operations (refer Table A and Table B).
Table A. Test cases of function push()
Test input    Condition    Expected output
23            top<MAX      Element '23' inserted successfully
34            top=MAX      Exception (stack overflow)
Table B. Test cases of function pop()
Test input    Condition    Expected output
-             top>0        23
-             top=0        Exception (stack underflow)
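The four table rows above can be executed by a small driver. The sketch below assumes MAX = 1 so that a single push fills the stack, letting all four rows run in sequence; raising an exception on a violated pre-condition matches logical relationships 2 and 4.

```python
MAX = 1  # assumed capacity so a single push fills the stack

class Stack:
    def __init__(self):
        self.items = []
        self.top = 0                  # post-condition of Stack(): top = 0

    def push(self, x):
        if not self.top < MAX:        # pre-condition of push(): top < MAX
            raise OverflowError("stack overflow")
        self.items.append(x)
        self.top += 1                 # post-condition: top = top + 1

    def pop(self):
        if not self.top > 0:          # pre-condition of pop(): top > 0
            raise IndexError("stack underflow")
        self.top -= 1                 # post-condition: top = top - 1
        return self.items.pop()

# Driver executing the four table rows in sequence:
s = Stack()
s.push(23)                            # Table A, row 1: top < MAX, inserted
try:
    s.push(34)                        # Table A, row 2: top = MAX
except OverflowError:
    print("push(34) raised the expected exception")
assert s.pop() == 23                  # Table B, row 1: top > 0, returns 23
try:
    s.pop()                           # Table B, row 2: top = 0
except IndexError:
    print("pop() raised the expected exception")
```

The true and false sides of each pre-condition both appear as test cases, which is exactly the point made above: both are equally important to verify the behaviour of the class.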
Example- Consider the example of withdrawing cash from an ATM discussed earlier.
Generate test cases using class testing.
Solution:
The class ATMWithdrawal is given in Figure.
-----------------------------
ATMWithdrawal
-----------------------------
accountID: integer
amount: integer
-----------------------------
ATMWithdrawal(accid, amt)
Withdraw()
-----------------------------
Figure- Class ATM withdrawal
The pre- and post-conditions of the function Withdraw() are given as:
ATMWithdrawal::Withdraw()
Pre: true
Post: if (PIN is valid) then
          if (balance >= amount) then
              balance = balance - amount
          else
              Display "Insufficient balance"
      else
          Display "Invalid PIN"
The following logical relationships may be established:
1. (true, PIN is valid and balance >= amount)
2. (true, PIN is valid and balance < amount)
3. (true, PIN is invalid)
Test cases are given in the Table below.
Table 9.13. Test cases for function Withdraw()
S. No.    AccountID    Amount    Expected output
1.        4321         1000      Balance update/Debit account
2.        4321         2000      Insufficient balance
3.        4322         -         Invalid PIN
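The three logical relationships can be turned into executable test cases. The sketch below is a minimal stand-in for the real ATM class: it assumes account 4321 carries a valid PIN and a balance of 1500, and that account 4322 maps to an invalid PIN. These fixture values are illustrative assumptions chosen only so that all three table rows can run.

```python
class ATMWithdrawal:
    # Assumed test fixture: account 4321 has a valid PIN and balance 1500;
    # any other account id stands in for an invalid PIN.
    VALID_ACCOUNTS = {4321: 1500}

    def __init__(self, accid, amt):
        self.account_id = accid
        self.amount = amt

    def withdraw(self):
        if self.account_id not in self.VALID_ACCOUNTS:
            return "Invalid PIN"                      # relationship 3
        balance = self.VALID_ACCOUNTS[self.account_id]
        if balance >= self.amount:
            # post-condition: balance = balance - amount
            self.VALID_ACCOUNTS[self.account_id] = balance - self.amount
            return "Debit account"                    # relationship 1
        return "Insufficient balance"                 # relationship 2

# Test case 1: PIN valid and balance >= amount (balance drops to 500)
assert ATMWithdrawal(4321, 1000).withdraw() == "Debit account"
# Test case 2: PIN valid and balance < amount
assert ATMWithdrawal(4321, 2000).withdraw() == "Insufficient balance"
# Test case 3: PIN invalid
assert ATMWithdrawal(4322, 100).withdraw() == "Invalid PIN"
```

Each assertion corresponds to one row of Table 9.13, so every logical relationship of Withdraw() is exercised by at least one test case.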